# Hierarchical Implicit Models and Likelihood-Free Variational Inference

Dustin Tran (Columbia University), Rajesh Ranganath (Princeton University), David M. Blei (Columbia University)

## Abstract

Implicit probabilistic models are a flexible class of models defined by a simulation process for data. They form the basis for theories which encompass our understanding of the physical world. Despite this fundamental nature, the use of implicit models remains limited due to challenges in specifying complex latent structure in them, and in performing inferences in such models with large data sets. In this paper, we first introduce hierarchical implicit models (HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian modeling, thereby defining models via simulators of data with rich hidden structure. Next, we develop likelihood-free variational inference (LFVI), a scalable variational inference algorithm for HIMs. Key to LFVI is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation of the posterior. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation.

## 1 Introduction

Consider a model of coin tosses. With probabilistic models, one typically posits a latent probability, and supposes each toss is a Bernoulli outcome given this probability [36, 15]. After observing a collection of coin tosses, Bayesian analysis lets us describe our inferences about the probability.

However, we know from the laws of physics that the outcome of a coin toss is fully determined by its initial conditions (say, the impulse and angle of flip) [25, 9]. Therefore a coin toss's randomness does not originate from a latent probability but from noisy initial parameters. This alternative model incorporates the physical system, better capturing the generative process. Furthermore, the model is implicit, also known as a simulator: we can sample data from its generative process, but we may not be able to calculate its density [11, 20].

Coin tosses are simple, but they serve as a building block for complex implicit models. These models, which capture the laws and theories of real-world physical systems, pervade fields such as population genetics [40], statistical physics [1], and ecology [3]; they underlie structural equation models in economics and causality [39]; and they connect deeply to generative adversarial networks (GANs) [18], which use neural networks to specify a flexible implicit density [35].

Unfortunately, implicit models, including GANs, have seen limited success outside specific domains. There are two reasons. First, it is unknown how to design implicit models for more general applications, exposing rich latent structure such as priors, hierarchies, and sequences. Second, existing methods for inferring latent structure in implicit models do not sufficiently scale to high-dimensional or large data sets. In this paper, we design a new class of implicit models and we develop a new algorithm for accurate and scalable inference.

For modeling, Section 2 describes hierarchical implicit models, a class of Bayesian hierarchical models which only assume a process that generates samples.
This class encompasses both simulators in the classical literature and those employed in GANs. For example, we specify a Bayesian GAN, where we place a prior on its parameters. The Bayesian perspective allows GANs to quantify uncertainty and improve data efficiency. We can also apply them to discrete data; this setting is not possible with traditional estimation algorithms for GANs [27].

For inference, Section 3 develops likelihood-free variational inference (LFVI), which combines variational inference with density ratio estimation [49, 35]. Variational inference posits a family of distributions over latent variables and then optimizes to find the member closest to the posterior [23]. Traditional approaches require a likelihood-based model and use crude approximations, employing a simple approximating family for fast computation. LFVI expands variational inference to implicit models and enables accurate variational approximations with implicit variational families: LFVI does not require the variational density to be tractable. Further, unlike previous Bayesian methods for implicit models, LFVI scales to millions of data points with stochastic optimization.

This work has diverse applications. First, we analyze a classical problem from the approximate Bayesian computation (ABC) literature, where the model simulates an ecological system [3]. We analyze 100,000 time series, which is not possible with traditional methods. Second, we analyze a Bayesian GAN, which is a GAN with a prior over its weights. Bayesian GANs outperform corresponding Bayesian neural networks with known likelihoods on several classification tasks. Third, we show how injecting noise into the hidden units of recurrent neural networks corresponds to a deep implicit model for flexible sequence generation.

Related Work. This paper connects closely to three lines of work. The first is Bayesian inference for implicit models, known in the statistics literature as approximate Bayesian computation (ABC) [3, 33]. ABC steps around the intractable likelihood by applying summary statistics to measure the closeness of simulated samples to real observations. While successful in many domains, ABC has shortcomings. First, the results generated by ABC depend heavily on the chosen summary statistics and the closeness measure. Second, as the dimensionality grows, closeness becomes harder to achieve; this is the classic curse of dimensionality.

The second is GANs [18]. GANs have seen much interest since their conception, providing an efficient method for estimation in neural network-based simulators. Larsen et al. [28] propose a hybrid of variational methods and GANs for improved reconstruction. Chen et al. [7] apply information penalties to disentangle factors of variation. Donahue et al. [12] and Dumoulin et al. [13] propose to match on an augmented space, simultaneously training the model and an inverse mapping from data to noise. Unlike any of the above, we develop models with explicit priors on latent variables, hierarchies, and sequences, and we generalize GANs to perform Bayesian inference.

The final thread is variational inference with expressive approximations [45, 48, 52]. The idea of casting the design of variational families as a modeling problem was proposed in Ranganath et al. [44]. Further advances have analyzed variational programs [42], a family of approximations which only requires a process returning samples, and which has seen further interest [30].
Implicit-like variational approximations have also appeared in auto-encoder frameworks [32, 34] and message passing [24]. We build on variational programs for inferring implicit models.

## 2 Hierarchical Implicit Models

Hierarchical models play an important role in sharing statistical strength across examples [16]. For a broad class of hierarchical Bayesian models, the joint distribution of the hidden and observed variables is

$$p(\mathbf{x}, \mathbf{z}, \beta) = p(\beta) \prod_{n=1}^{N} p(x_n \mid z_n, \beta)\, p(z_n \mid \beta), \qquad (1)$$

where x_n is an observation, z_n are latent variables associated to that observation (local variables), and β are latent variables shared across observations (global variables). See Fig. 1 (left).

Figure 1: (left) Hierarchical model, with local variables z and global variables β. (right) Hierarchical implicit model. It is a hierarchical model where x is a deterministic function (denoted with a square) of noise ϵ (denoted with a triangle).

With hierarchical models, local variables can be used for clustering in mixture models, mixed memberships in topic models [4], and factors in probabilistic matrix factorization [47]. Global variables can be used to pool information across data points for hierarchical regression [16], topic models [4], and Bayesian nonparametrics [50].

Hierarchical models typically use a tractable likelihood p(x_n | z_n, β). But many likelihoods of interest, such as simulator-based models [20] and generative adversarial networks [18], admit high fidelity to the true data generating process and do not admit a tractable likelihood. To overcome this limitation, we develop hierarchical implicit models (HIMs).

Hierarchical implicit models have the same joint factorization as Eq. 1 but only assume that one can sample from the likelihood. Rather than define p(x_n | z_n, β) explicitly, HIMs define a function g that takes in random noise ϵ_n ∼ s(·) and outputs x_n given z_n and β,

$$x_n = g(\epsilon_n \mid z_n, \beta), \qquad \epsilon_n \sim s(\cdot).$$

The induced, implicit likelihood of x_n ∈ A given z_n and β is

$$P(x_n \in A \mid z_n, \beta) = \int_{\{\epsilon_n :\; g(\epsilon_n \mid z_n, \beta)\, \in\, A\}} s(\epsilon_n)\, d\epsilon_n.$$

This integral is typically intractable: it is difficult to find the set to integrate over, and the integration itself may be expensive for arbitrary noise distributions s(·) and functions g. Fig. 1 (right) displays the graphical model for HIMs. Noise (ϵ_n) is denoted by triangles; deterministic computation (x_n) by squares. We illustrate two examples.

Example: Physical Simulators. Given initial conditions, simulators describe a stochastic process that generates data. For example, in population ecology, the Lotka-Volterra model simulates predator-prey populations over time via a stochastic differential equation [55]. For prey and predator populations x_1, x_2 ∈ ℝ₊ respectively, one process is

$$\frac{dx_1}{dt} = \beta_1 x_1 - \beta_2 x_1 x_2 + \epsilon_1, \qquad \epsilon_1 \sim \text{Normal}(0, 10),$$
$$\frac{dx_2}{dt} = -\beta_2 x_2 + \beta_3 x_1 x_2 + \epsilon_2, \qquad \epsilon_2 \sim \text{Normal}(0, 10),$$

where the Gaussian noises ϵ_1, ϵ_2 are added at each full time step. The simulator runs for T time steps given initial population sizes for x_1, x_2. Lognormal priors are placed over β. The Lotka-Volterra model is grounded by theory but features an intractable likelihood. We study it in Section 4.

Example: Bayesian Generative Adversarial Network. Generative adversarial networks (GANs) define an implicit model and a method for parameter estimation [18]. They are known to perform well on image generation [41]. Formally, the implicit model for a GAN is

$$x_n = g(\epsilon_n; \theta), \qquad \epsilon_n \sim s(\cdot), \qquad (2)$$

where g is a neural network with parameters θ, and s is a standard normal or uniform.
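To make the structure concrete, the following is a minimal NumPy sketch of this generative process once a prior is placed on the generator's weights (the Bayesian GAN introduced next). The function name, layer sizes, and noise dimension are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_bayesian_gan(N, noise_dim=10, data_dim=2, hidden=32, rng=np.random.default_rng(0)):
    """Hypothetical Bayesian GAN prior sample: theta ~ N(0, 1), then x_n = g(eps_n; theta)."""
    # Global variables: weights and biases of the generator, drawn from the prior.
    theta = {
        "W1": rng.normal(0.0, 1.0, size=(noise_dim, hidden)),
        "b1": rng.normal(0.0, 1.0, size=hidden),
        "W2": rng.normal(0.0, 1.0, size=(hidden, data_dim)),
        "b2": rng.normal(0.0, 1.0, size=data_dim),
    }
    # Per-data-point noise eps_n ~ s(.) (here a standard normal).
    eps = rng.normal(0.0, 1.0, size=(N, noise_dim))
    # Deterministic generator g(eps_n; theta): a small feedforward network.
    h = np.maximum(eps @ theta["W1"] + theta["b1"], 0.0)  # ReLU hidden layer
    x = h @ theta["W2"] + theta["b2"]
    return theta, x  # x can be sampled, but p(x | theta) has no tractable density

theta, x = sample_bayesian_gan(N=5)
print(x.shape)  # (5, 2)
```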
The neural network g is typically not invertible; this makes the likelihood intractable. The parameters θ in GANs are estimated by divergence minimization between the generated and real data.

We make GANs amenable to Bayesian analysis by placing a prior on the parameters θ. We call this a Bayesian GAN. Bayesian GANs enable modeling of parameter uncertainty and are inspired by Bayesian neural networks, which have been shown to improve the uncertainty and data efficiency of standard neural networks [31, 37]. We study Bayesian GANs in Section 4; Appendix B provides example implementations in the Edward probabilistic programming language [53].

## 3 Likelihood-Free Variational Inference

We described hierarchical implicit models, a rich class of latent variable models with local and global structure alongside an implicit density. Given data, we aim to calculate the model's posterior p(z, β | x) = p(x, z, β)/p(x). This is difficult, as the normalizing constant p(x) is typically intractable. With implicit models, the lack of a likelihood function introduces an additional source of intractability.

We use variational inference [23]. It posits an approximating family q ∈ Q and optimizes to find the member closest to p(z, β | x). There are many choices of variational objectives that measure closeness [42, 29, 10]. To choose an objective, we lay out desiderata for a variational inference algorithm for implicit models:

1. Scalability. Machine learning hinges on stochastic optimization to scale to massive data [6]. The variational objective should admit unbiased subsampling with the standard technique,

$$\sum_{n=1}^{N} f(x_n) \approx \frac{N}{M} \sum_{m=1}^{M} f(x_m),$$

where some computation f(·) over the full data is approximated with a mini-batch of data {x_m}.

2. Implicit Local Approximations. Implicit models specify flexible densities; this induces very complex posterior distributions. Thus we would like a rich approximating family for the per-data-point approximations q(z_n | x_n, β). This means the variational objective should only require that one can sample z_n ∼ q(z_n | x_n, β) and not evaluate its density.

One variational objective meeting our desiderata is based on the classical minimization of the Kullback-Leibler (KL) divergence. (Surprisingly, Appendix C details how the KL is the only possible objective among a broad class.)

### 3.1 KL Variational Objective

Classical variational inference minimizes the KL divergence from the variational approximation q to the posterior. This is equivalent to maximizing the evidence lower bound (ELBO),

$$\mathcal{L} = \mathbb{E}_{q(\beta, \mathbf{z} \mid \mathbf{x})}[\log p(\mathbf{x}, \mathbf{z}, \beta) - \log q(\beta, \mathbf{z} \mid \mathbf{x})]. \qquad (3)$$

Let q factorize in the same way as the posterior,

$$q(\beta, \mathbf{z} \mid \mathbf{x}) = q(\beta) \prod_{n=1}^{N} q(z_n \mid x_n, \beta),$$

where q(z_n | x_n, β) is an intractable density, and since the data x is constant during inference, we drop conditioning for the global q(β). Substituting p and q's factorization yields

$$\mathcal{L} = \mathbb{E}_{q(\beta)}[\log p(\beta) - \log q(\beta)] + \sum_{n=1}^{N} \mathbb{E}_{q(\beta)\, q(z_n \mid x_n, \beta)}[\log p(x_n, z_n \mid \beta) - \log q(z_n \mid x_n, \beta)].$$

This objective presents difficulties: the local densities p(x_n, z_n | β) and q(z_n | x_n, β) are both intractable. To solve this, we consider ratio estimation.

### 3.2 Ratio Estimation for the KL Objective

Let q(x_n) be the empirical distribution on the observations x and consider using it in a variational joint q(x_n, z_n | β) = q(x_n) q(z_n | x_n, β). Now subtract the log empirical, log q(x_n), from each local term of the ELBO above; because this term does not depend on the variational parameters, the objective changes only by an additive constant. The ELBO reduces to

$$\mathcal{L} \propto \mathbb{E}_{q(\beta)}[\log p(\beta) - \log q(\beta)] + \sum_{n=1}^{N} \mathbb{E}_{q(\beta)\, q(z_n \mid x_n, \beta)}\Big[\log \frac{p(x_n, z_n \mid \beta)}{q(x_n, z_n \mid \beta)}\Big]. \qquad (4)$$

(Here the proportionality symbol means equality up to additive constants.)
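The subtraction step can be spelled out (a short check, not verbatim from the paper): subtracting the constant log q(x_n) inside each local expectation folds the empirical distribution into the variational joint,

$$\mathbb{E}_{q(\beta)\, q(z_n \mid x_n, \beta)}\big[\log p(x_n, z_n \mid \beta) - \log q(z_n \mid x_n, \beta) - \log q(x_n)\big] = \mathbb{E}_{q(\beta)\, q(z_n \mid x_n, \beta)}\Big[\log \frac{p(x_n, z_n \mid \beta)}{q(x_n)\, q(z_n \mid x_n, \beta)}\Big] = \mathbb{E}_{q(\beta)\, q(z_n \mid x_n, \beta)}\Big[\log \frac{p(x_n, z_n \mid \beta)}{q(x_n, z_n \mid \beta)}\Big],$$

which is exactly the ratio appearing in Eq. 4.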
Thus the ELBO is a function of the ratio of two intractable densities. If we can form an estimator of this ratio, we can proceed with optimizing the ELBO. We apply techniques for ratio estimation [49]. It is a key idea in GANs [35, 54], and similar ideas have reappeared in statistics and physics [19, 8].

In particular, we use class probability estimation: given a sample from p(·) or q(·), we aim to estimate the probability that it belongs to p(·). We model this using σ(r(·; θ)), where r is a parameterized function (e.g., a neural network) taking sample inputs and outputting a real value; σ is the logistic function, outputting the probability.

We train r(·; θ) by minimizing a loss function known as a proper scoring rule [17]. For example, in experiments we use the log loss,

$$D_{\log} = \mathbb{E}_{p(x_n, z_n \mid \beta)}[-\log \sigma(r(x_n, z_n, \beta; \theta))] + \mathbb{E}_{q(x_n, z_n \mid \beta)}[-\log(1 - \sigma(r(x_n, z_n, \beta; \theta)))]. \qquad (5)$$

The loss is zero if σ(r(·; θ)) returns 1 when a sample is from p(·) and 0 when a sample is from q(·). (We also experiment with the hinge loss; see Section 4.) If r(·; θ) is sufficiently expressive, minimizing the loss returns the optimal function [35],

$$r^{*}(x_n, z_n, \beta) = \log p(x_n, z_n \mid \beta) - \log q(x_n, z_n \mid \beta).$$

As we minimize Eq. 5, we use r(·; θ) as a proxy to the log ratio in Eq. 4. Note that r estimates the log ratio; it is of direct interest and more numerically stable than the ratio. The gradient of D_log with respect to θ is

$$\nabla_\theta D_{\log} = \mathbb{E}_{p(x_n, z_n \mid \beta)}[-\nabla_\theta \log \sigma(r(x_n, z_n, \beta; \theta))] + \mathbb{E}_{q(x_n, z_n \mid \beta)}[-\nabla_\theta \log(1 - \sigma(r(x_n, z_n, \beta; \theta)))]. \qquad (6)$$

We compute unbiased gradients with Monte Carlo.

### 3.3 Stochastic Gradients of the KL Objective

To optimize the ELBO, we use the ratio estimator,

$$\mathcal{L} = \mathbb{E}_{q(\beta \mid \mathbf{x})}[\log p(\beta) - \log q(\beta)] + \sum_{n=1}^{N} \mathbb{E}_{q(\beta \mid \mathbf{x})\, q(z_n \mid x_n, \beta)}[r(x_n, z_n, \beta)]. \qquad (7)$$

All terms are now tractable. We can calculate gradients to optimize the variational family q. Below we assume the priors p(β), p(z_n | β) are differentiable. (We discuss methods to handle discrete global variables in the next section.)

We focus on reparameterizable variational approximations [26, 46]. They enable sampling via a differentiable transformation T of random noise, δ ∼ s(·). Due to Eq. 7, we require the global approximation q(β; λ) to admit a tractable density. With reparameterization, its sample is

$$\beta = T_{\text{global}}(\delta_{\text{global}}; \lambda), \qquad \delta_{\text{global}} \sim s(\cdot),$$

for a choice of transformation T_global(·; λ) and noise s(·). For example, setting s(·) = N(0, 1) and T_global(δ_global) = µ + σδ_global induces a normal distribution N(µ, σ²). Similarly, for the local variables z_n we specify

$$z_n = T_{\text{local}}(\delta_n, x_n, \beta; \phi), \qquad \delta_n \sim s(\cdot).$$

Unlike the global approximation, the local variational density q(z_n | x_n; φ) need not be tractable: the ratio estimator relaxes this requirement. It lets us leverage implicit models not only for data but also for approximate posteriors. In practice, we also amortize computation with inference networks, sharing parameters φ across the per-data-point approximate posteriors.

The gradient with respect to global parameters λ under this approximating family is

$$\nabla_\lambda \mathcal{L} = \mathbb{E}_{s(\delta_{\text{global}})}[\nabla_\lambda(\log p(\beta) - \log q(\beta))] + \sum_{n=1}^{N} \mathbb{E}_{s(\delta_{\text{global}})\, s(\delta_n)}[\nabla_\lambda r(x_n, z_n, \beta)]. \qquad (8)$$

The gradient backpropagates through the local sampling z_n = T_local(δ_n, x_n, β; φ) and the global reparameterization β = T_global(δ_global; λ). We compute unbiased gradients with Monte Carlo. The gradient with respect to local parameters φ is

$$\nabla_\phi \mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q(\beta)\, s(\delta_n)}[\nabla_\phi r(x_n, z_n, \beta)], \qquad (9)$$

where the gradient backpropagates through T_local. (The ratio r indirectly depends on φ, but its gradient with respect to φ vanishes; this follows from the score function identity and the product rule; see, e.g., Ranganath et al. [43, Appendix].)
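Before turning to the full algorithm, here is a minimal PyTorch sketch of the class-probability estimation step of Eq. 5. For illustration it uses two tractable 1-D Gaussians as stand-ins for p and q, so the recovered log ratio can be checked against the truth; in LFVI, r instead takes (x_n, z_n, β) as input and p and q are available only through samples.

```python
import torch
import torch.nn as nn

# Stand-ins for the two densities; here both are tractable Gaussians so we can
# verify that r(x) approaches log p(x) - log q(x).
p = torch.distributions.Normal(0.0, 1.0)
q = torch.distributions.Normal(1.0, 1.5)

# r(.; theta): a small network whose output estimates the log density ratio.
r = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(r.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()  # log loss of Eq. (5): -log sigma(r) on p-samples, -log(1 - sigma(r)) on q-samples

for step in range(2000):
    xp = p.sample((256, 1))      # samples playing the role of the model joint
    xq = q.sample((256, 1))      # samples playing the role of the variational joint
    logits = torch.cat([r(xp), r(xq)])
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)   # proper scoring rule; minimized by r* = log p - log q
    opt.zero_grad()
    loss.backward()
    opt.step()

x = torch.tensor([[0.0]])
print(r(x).item(), (p.log_prob(x) - q.log_prob(x)).item())  # estimate vs. true log ratio
```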
Algorithm 1: Likelihood-free variational inference (LFVI)

Input: model x_n, z_n ∼ p(· | β), p(β); variational approximation z_n ∼ q(· | x_n, β; φ), q(β | x; λ); ratio estimator r(·; θ).
Output: variational parameters λ, φ.
Initialize θ, λ, φ randomly.
while not converged do
  Compute unbiased estimates of ∇_θ D (Eq. 6), ∇_λ L (Eq. 8), ∇_φ L (Eq. 9).
  Update θ, λ, φ using stochastic gradient descent.
end

### 3.4 Algorithm

Algorithm 1 outlines the procedure. We call it likelihood-free variational inference (LFVI). LFVI is black box: it applies to models in which one can simulate data and local variables, and calculate densities for the global variables. LFVI first updates θ to improve the ratio estimator r. Then it uses r to update the parameters {λ, φ} of the variational approximation q. We optimize r and q simultaneously. The algorithm is available in Edward [53].

LFVI is scalable: we can unbiasedly estimate the gradient over the full data set with mini-batches [22]. The algorithm can also handle models of either continuous or discrete data. The requirement for differentiable global variables and reparameterizable global approximations can be relaxed using score function gradients [43].

Point estimates of the global parameters β suffice for many applications [18, 46]. Algorithm 1 can find point estimates: place a point mass approximation q on the parameters β. This simplifies gradients and corresponds to variational EM.

## 4 Experiments

We developed new models and inference. For experiments, we study three applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian GAN for supervised classification; and a deep implicit model for symbol generation. In addition, Appendix F provides practical advice on how to address the stability of the ratio estimator by analyzing a toy experiment. We initialize parameters from a standard normal and apply gradient descent with ADAM.

Lotka-Volterra Predator-Prey Simulator. We analyze the Lotka-Volterra simulator of Section 2 and follow the same setup and hyperparameters as Papamakarios and Murray [38]. Its global variables β govern the rates of change in a simulation of predator-prey populations. To infer them, we posit a mean-field normal approximation (reparameterized to be on the same support) and run Algorithm 1 with both a log loss and a hinge loss for the ratio estimation problem; Appendix D details the hinge loss. We compare to rejection ABC, MCMC-ABC, and SMC-ABC [33]. MCMC-ABC uses a spherical Gaussian proposal; SMC-ABC is manually tuned with a decaying epsilon schedule; all ABC methods are tuned to use the best-performing hyperparameters, such as the tolerance error.

Fig. 2 displays results on two data sets. In the top figures and bottom left, we analyze data consisting of a simulation for T = 30 time steps, with recorded values of the populations every 0.2 time units. The bottom left figure calculates the negative log probability of the true parameters over the tolerance error for ABC methods; smaller tolerances result in more accuracy but slower runtime. The top figures compare the marginal posteriors for two parameters using the smallest tolerance for the ABC methods. Rejection ABC, MCMC-ABC, and SMC-ABC all contain the true parameters in their 95% credible intervals but are less confident than our methods. Further, they required 100,000 simulations from the model, with acceptance rates of 0.004% and 2.990% for rejection ABC and MCMC-ABC, respectively.
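For concreteness, the following is a minimal NumPy sketch of a predator-prey simulator in the spirit of Section 2's Lotka-Volterra model. The Euler discretization, prior hyperparameters, initial populations, and noise handling are illustrative assumptions, not the exact setup of Papamakarios and Murray [38].

```python
import numpy as np

def simulate_lotka_volterra(T=30.0, dt=0.2, x0=(100.0, 50.0), rng=np.random.default_rng(0)):
    """One draw from a hypothetical Lotka-Volterra HIM: lognormal prior on beta,
    then an Euler discretization of the noisy dynamics from Section 2."""
    beta = rng.lognormal(mean=-1.0, sigma=1.0, size=3)        # global variables (rates)
    x1, x2 = x0                                               # prey, predator populations
    traj = []
    for _ in range(int(T / dt)):
        eps1, eps2 = rng.normal(0.0, np.sqrt(10.0), size=2)   # injected Gaussian noise (scale illustrative)
        dx1 = beta[0] * x1 - beta[1] * x1 * x2 + eps1
        dx2 = -beta[1] * x2 + beta[2] * x1 * x2 + eps2
        x1 = float(np.clip(x1 + dt * dx1, 0.0, 1e6))          # keep populations nonnegative and finite
        x2 = float(np.clip(x2 + dt * dx2, 0.0, 1e6))
        traj.append((x1, x2))
    return beta, np.array(traj)  # x can be simulated given beta, but p(x | beta) is intractable

beta, traj = simulate_lotka_volterra()
print(beta, traj.shape)  # e.g. 3 rates and a (150, 2) time series
```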
Figure 2: (top) Marginal posterior for the first two parameters. (bottom left) Negative log probability of the true parameters over tolerance error for rejection ABC, MCMC-ABC, and SMC-ABC. (bottom right) Marginal posterior for the first parameter on a large-scale data set. Our inference achieves more accurate results and scales to massive data.

Table 1: Classification performance (test set error) of Bayesian GAN and Bayesian neural networks across small to medium-size data sets. Bayesian GANs achieve comparable or better performance than their Bayesian neural net counterparts.

| Model + Inference  | Crabs | Pima  | Covertype | MNIST  |
|--------------------|-------|-------|-----------|--------|
| Bayesian GAN + VI  | 0.03  | 0.232 | 0.154     | 0.0136 |
| Bayesian GAN + MAP | 0.12  | 0.240 | 0.185     | 0.0283 |
| Bayesian NN + VI   | 0.02  | 0.242 | 0.164     | 0.0311 |
| Bayesian NN + MAP  | 0.05  | 0.320 | 0.188     | 0.0623 |

The bottom right figure analyzes data consisting of 100,000 time series, each of the same size as the single time series analyzed in the previous figures. This size is not possible with traditional methods. Further, we see that with our methods the posterior concentrates near the truth. We also experienced little difference in accuracy between using the log loss or the hinge loss for ratio estimation.

Bayesian Generative Adversarial Networks. We analyze Bayesian GANs, described in Section 2. Mimicking a use case of Bayesian neural networks [5, 21], we apply Bayesian GANs to classification on small to medium-size data. The GAN defines a conditional p(y_n | x_n), taking a feature x_n ∈ ℝ^D as input and generating a label y_n ∈ {1, . . . , K}, via the process

$$y_n = g(x_n, \epsilon_n \mid \theta), \qquad \epsilon_n \sim \mathcal{N}(0, 1), \qquad (10)$$

where g(· | θ) is a 2-layer multilayer perceptron with ReLU activations and batch normalization, parameterized by weights and biases θ (a small sketch of this process appears below). We place normal priors, θ ∼ N(0, 1). We analyze two choices of the variational model: one with a mean-field normal approximation for q(θ | x), and another with a point mass approximation (equivalent to maximum a posteriori). We compare to a Bayesian neural network, which uses the same generative process as Eq. 10 but draws from a Categorical distribution rather than feeding noise into the neural net. We fit it separately using a mean-field normal approximation and maximum a posteriori.

Table 1 shows that Bayesian GANs generally outperform their Bayesian neural net counterparts. Note that Bayesian GANs can analyze discrete data, such as in generating a classification label; traditional GANs for discrete data remain an open challenge [27]. In Appendix E, we compare Bayesian GANs with point estimation to typical GANs. Bayesian GANs are also able to leverage parameter uncertainty for analyzing these small to medium-size data sets.

One problem with Bayesian GANs is that they cannot work with very large neural networks: the ratio estimator is a function of the global parameters, and thus its input size grows with the size of the neural network. One approach is to make the ratio estimator not a function of the global parameters: instead of optimizing model parameters via variational EM, we can train the model parameters by backpropagating through the ratio objective instead of the variational objective. An alternative is to use the hidden units as input, which is much lower dimensional [51, Appendix C].
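To illustrate Eq. 10's classifier as an implicit model, here is a minimal NumPy sketch. The argmax readout, single noise dimension, and layer sizes are our assumptions for illustration; the paper's g is a 2-layer MLP with batch normalization.

```python
import numpy as np

def bayesian_gan_classifier_sample(X, K=3, hidden=32, rng=np.random.default_rng(0)):
    """Hypothetical realization of Eq. (10): y_n = g(x_n, eps_n | theta), eps_n ~ N(0, 1).
    The discrete label comes from an argmax over K real-valued outputs (an illustrative choice)."""
    N, D = X.shape
    # Normal priors over all weights and biases (the global variables theta).
    W1 = rng.normal(size=(D + 1, hidden)); b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, K));     b2 = rng.normal(size=K)
    eps = rng.normal(size=(N, 1))                        # per-example noise
    h = np.maximum(np.hstack([X, eps]) @ W1 + b1, 0.0)   # ReLU layer on [x_n, eps_n]
    logits = h @ W2 + b2
    return np.argmax(logits, axis=1)                     # discrete labels y_n in {0, ..., K-1}

X = np.random.default_rng(1).normal(size=(5, 4))
print(bayesian_gan_classifier_sample(X))  # e.g. array([2, 0, 2, 1, 0])
```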
Figure 3: (a) A deep implicit model for sequences: a recurrent neural network (RNN) with noise injected into each hidden state, so that the hidden state is now an implicit latent variable; the same occurs for generating outputs. (b) Generated symbols from the implicit model: 1) x+x/x x //x x+  2) x/x x+x x/x+x+x+  3) /+x x+x x/x/x+x+  4) /x+ x+x x/x+x x+  5) x/x x/x x+x+x+x  6) x+x+x/x x x+x/x+. Good samples place arithmetic operators between the variable x; the implicit model learned to follow rules from the context-free grammar, up to some repeated operators.

Injecting Noise into Hidden Units. In this section, we show how to build a hierarchical implicit model by simply injecting randomness into hidden units. We model sequences x = (x_1, . . . , x_T) with a recurrent neural network. For t = 1, . . . , T,

$$z_t = g_z(x_{t-1}, z_{t-1}, \epsilon_{t,z}), \qquad \epsilon_{t,z} \sim \mathcal{N}(0, 1),$$
$$x_t = g_x(z_t, \epsilon_{t,x}), \qquad \epsilon_{t,x} \sim \mathcal{N}(0, 1),$$

where g_z and g_x are both 1-layer multilayer perceptrons with ReLU activation and layer normalization. We place standard normal priors over all weights and biases. See Fig. 3a.

If the injected noise ϵ_{t,z} combines linearly with the output of g_z, the induced distribution p(z_t | x_{t-1}, z_{t-1}) is Gaussian, parameterized by that output. This defines a stochastic RNN [2, 14], which generalizes its deterministic counterpart. With nonlinear combinations, the implicit density is more flexible (and intractable), making previous methods for inference not applicable. In our method, we perform variational inference and specify q to be implicit; we use the same architecture as the probability model's implicit priors.

We follow the same setup and hyperparameters as Kusner and Hernández-Lobato [27] and generate simple one-variable arithmetic sequences following a context-free grammar,

S → x | S + S | S - S | S * S | S / S,

where | divides possible productions of the grammar. We concatenate the inputs and point estimate the global variables (model parameters) using variational EM. Fig. 3b displays samples from the inferred model, training on sequences with a maximum of 15 symbols. It achieves sequences which roughly follow the context-free grammar.

## 5 Discussion

We developed a class of hierarchical implicit models and likelihood-free variational inference, merging the idea of implicit densities with hierarchical Bayesian modeling and approximate posterior inference. This expands Bayesian analysis with the ability to apply neural samplers, physical simulators, and their combination with rich, interpretable latent structure.

More stable inference with ratio estimation is an open challenge. This is especially important when we analyze large-scale real-world applications of implicit models. Recent work for genomics offers a promising solution [51].

Acknowledgements. We thank Balaji Lakshminarayanan for discussions which helped motivate this work. We also thank Christian Naesseth, Jaan Altosaar, and Adji Dieng for their feedback and comments. DT is supported by a Google Ph.D. Fellowship in Machine Learning and an Adobe Research Fellowship. This work is also supported by NSF IIS-0745520, IIS-1247664, IIS-1009542, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, N66001-15-C-4032, Facebook, Adobe, Amazon, and the John Templeton Foundation.

References

[1] Anelli, G., Antchev, G., Aspell, P., Avati, V., Bagliesi, M., Berardi, V., Berretti, M., Boccone, V., Bottigli, U., Bozzo, M., et al. (2008). The TOTEM experiment at the CERN Large Hadron Collider. Journal of Instrumentation, 3(08):S08007.
[2] Bayer, J. and Osendorfer, C. (2014). Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610.
[3] Beaumont, M. A. (2010). Approximate Bayesian computation in evolution and ecology. Annual Review of Ecology, Evolution, and Systematics, 41:379–406.
[4] Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022.
[5] Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015). Weight uncertainty in neural network. In International Conference on Machine Learning.
[6] Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT 2010, pages 177–186. Springer.
[7] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Neural Information Processing Systems.
[8] Cranmer, K., Pavez, J., and Louppe, G. (2015). Approximating likelihood ratios with calibrated discriminative classifiers. arXiv preprint arXiv:1506.02169.
[9] Diaconis, P., Holmes, S., and Montgomery, R. (2007). Dynamical bias in the coin toss. SIAM Review, 49(2):211–235.
[10] Dieng, A. B., Tran, D., Ranganath, R., Paisley, J., and Blei, D. M. (2017). The χ-divergence for approximate inference. In Neural Information Processing Systems.
[11] Diggle, P. J. and Gratton, R. J. (1984). Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society: Series B (Methodological), pages 193–227.
[12] Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In International Conference on Learning Representations.
[13] Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., and Courville, A. (2017). Adversarially learned inference. In International Conference on Learning Representations.
[14] Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. (2016). Sequential neural models with stochastic layers. In Neural Information Processing Systems.
[15] Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data Analysis. Texts in Statistical Science Series. CRC Press, Boca Raton, FL.
[16] Gelman, A. and Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.
[17] Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378.
[18] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Neural Information Processing Systems.
[19] Gutmann, M. U., Dutta, R., Kaski, S., and Corander, J. (2014). Statistical inference of intractable generative models via classification. arXiv preprint arXiv:1407.4981.
[20] Hartig, F., Calabrese, J. M., Reineking, B., Wiegand, T., and Huth, A. (2011). Statistical inference for stochastic simulation models: theory and application. Ecology Letters, 14(8):816–827.
[21] Hernández-Lobato, J. M., Li, Y., Rowland, M., Hernández-Lobato, D., Bui, T., and Turner, R. E. (2016). Black-box α-divergence minimization. In International Conference on Machine Learning.
[22] Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. W. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347.
[23] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning.
[24] Karaletsos, T. (2016). Adversarial message passing for graphical models. In NIPS Workshop.
[25] Keller, J. B. (1986). The probability of heads. The American Mathematical Monthly, 93(3):191–197.
[26] Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations.
[27] Kusner, M. J. and Hernández-Lobato, J. M. (2016). GANs for sequences of discrete elements with the Gumbel-Softmax distribution. In NIPS Workshop.
[28] Larsen, A. B. L., Sønderby, S. K., Larochelle, H., and Winther, O. (2016). Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning.
[29] Li, Y. and Turner, R. E. (2016). Rényi divergence variational inference. In Neural Information Processing Systems.
[30] Liu, Q. and Feng, Y. (2016). Two methods for wild variational inference. arXiv preprint arXiv:1612.00081.
[31] MacKay, D. J. C. (1992). Bayesian methods for adaptive models. PhD thesis, California Institute of Technology.
[32] Makhzani, A., Shlens, J., Jaitly, N., and Goodfellow, I. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
[33] Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012). Approximate Bayesian computational methods. Statistics and Computing, 22(6):1167–1180.
[34] Mescheder, L., Nowozin, S., and Geiger, A. (2017). Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722.
[35] Mohamed, S. and Lakshminarayanan, B. (2016). Learning in implicit generative models. arXiv preprint arXiv:1610.03483.
[36] Murphy, K. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
[37] Neal, R. M. (1994). Bayesian Learning for Neural Networks. PhD thesis, University of Toronto.
[38] Papamakarios, G. and Murray, I. (2016). Fast ϵ-free inference of simulation models with Bayesian conditional density estimation. In Neural Information Processing Systems.
[39] Pearl, J. (2000). Causality. Cambridge University Press.
[40] Pritchard, J. K., Seielstad, M. T., Perez-Lezaun, A., and Feldman, M. W. (1999). Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791–1798.
[41] Radford, A., Metz, L., and Chintala, S. (2016). Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations.
[42] Ranganath, R., Altosaar, J., Tran, D., and Blei, D. M. (2016a). Operator variational inference. In Neural Information Processing Systems.
[43] Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Artificial Intelligence and Statistics.
[44] Ranganath, R., Tran, D., and Blei, D. M. (2016b). Hierarchical variational models. In International Conference on Machine Learning.
[45] Rezende, D. J. and Mohamed, S. (2015). Variational inference with normalizing flows. In International Conference on Machine Learning.
[46] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning.
[47] Salakhutdinov, R. and Mnih, A. (2008). Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In International Conference on Machine Learning, pages 880–887. ACM.
[48] Salimans, T., Kingma, D. P., and Welling, M. (2015). Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning.
[49] Sugiyama, M., Suzuki, T., and Kanamori, T. (2012). Density-ratio matching under the Bregman divergence: A unified framework of density-ratio estimation. Annals of the Institute of Statistical Mathematics.
[50] Teh, Y. W. and Jordan, M. I. (2010). Hierarchical Bayesian nonparametric models with applications. Bayesian Nonparametrics, 1.
[51] Tran, D. and Blei, D. M. (2017). Implicit causal models for genome-wide association studies. arXiv preprint arXiv:1710.10742.
[52] Tran, D., Blei, D. M., and Airoldi, E. M. (2015). Copula variational inference. In Neural Information Processing Systems.
[53] Tran, D., Kucukelbir, A., Dieng, A. B., Rudolph, M., Liang, D., and Blei, D. M. (2016). Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787.
[54] Uehara, M., Sato, I., Suzuki, M., Nakayama, K., and Matsuo, Y. (2016). Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920.
[55] Wilkinson, D. J. (2011). Stochastic Modelling for Systems Biology. CRC Press.