Published as a conference paper at ICLR 2015

GENERATIVE MODELING OF CONVOLUTIONAL NEURAL NETWORKS

Jifeng Dai
Microsoft Research
jifdai@microsoft.com

Yang Lu and Ying Nian Wu
University of California, Los Angeles
{yanglv, ywu}@stat.ucla.edu

ABSTRACT

This paper investigates generative modeling of the convolutional neural networks (CNNs). The main contributions include: (1) We construct a generative model for CNNs in the form of exponential tilting of a reference distribution. (2) We propose a generative gradient for pre-training CNNs by a non-parametric importance sampling scheme, which is fundamentally different from the commonly used discriminative gradient, and yet has the same computational architecture and cost as the latter. (3) We propose a generative visualization method for CNNs by sampling from an explicit parametric image distribution. The proposed visualization method can directly draw synthetic samples for any given node in a trained CNN by the Hamiltonian Monte Carlo (HMC) algorithm, without resorting to any extra hold-out images. Experiments on the ImageNet benchmark show that the proposed generative gradient pre-training helps improve the performance of CNNs, and the proposed generative visualization method generates meaningful and varied samples of synthetic images from a large and deep CNN.

1 INTRODUCTION

Recent years have witnessed the triumphant return of feedforward neural networks, especially convolutional neural networks (CNNs) (LeCun et al., 1989; Krizhevsky et al., 2012; Girshick et al., 2014). Despite the successes of discriminative learning of CNNs, the generative aspect of CNNs has not been thoroughly investigated. Yet it can be very useful for the following reasons: (1) generative pre-training has the potential to lead the network to a better local optimum; (2) samples can be drawn from the generative model to reveal the knowledge learned by the CNN. Although many generative models and learning algorithms have been proposed (Hinton et al., 2006a;b; Rifai et al., 2011; Salakhutdinov & Hinton, 2009), most of them have not been applied to learning large and deep CNNs.

In this paper, we study the generative modeling of CNNs. We start by defining probability distributions of images given the underlying object categories or class labels, such that the CNN with a final logistic regression layer serves as the corresponding conditional distribution of the class labels given the images. These distributions are in the form of exponential tilting of a reference distribution, i.e., exponential family models or energy-based models relative to a reference distribution.

With such a generative model, we proceed to study it along two related themes, which differ in how they handle the reference distribution or the null model.

In the first theme, we propose a non-parametric generative gradient for pre-training the CNN, where the CNN is learned by a stochastic gradient algorithm that seeks to maximize the log-likelihood of the generative model. The gradient of the log-likelihood is approximated by importance sampling, which keeps reweighing the images that are sampled from a non-parametric implicit reference distribution, such as the distribution of all the training images.
The generative gradient is fundamentally different from the commonly used discriminative gradient, and yet in batch training it shares the same computational architecture and computational cost as the discriminative gradient. This generative learning scheme can be used in a pre-training stage that is to be followed by the usual discriminative training. The generative log-likelihood provides a stronger driving force for the stochastic gradient than the discriminative criterion, because it requires the learned parameters to explain the images themselves instead of merely their labels. Experiments on the MNIST (LeCun et al., 1998) and ImageNet (Deng et al., 2009) classification benchmarks show that this generative pre-training scheme helps improve the performance of CNNs.

The second theme in our study of generative modeling is to assume an explicit parametric form of the reference distribution, such as the Gaussian white noise model, so that we can draw synthetic images from the resulting probability distributions of images. The sampling is accomplished by the Hamiltonian Monte Carlo (HMC) algorithm (Neal, 2011), which iterates between a bottom-up convolution step and a top-down deconvolution step. The proposed visualization method can directly draw samples of synthetic images for any given node in a trained CNN, without resorting to any extra hold-out images. Experiments show that meaningful and varied synthetic images can be generated for nodes of a large and deep CNN discriminatively trained on ImageNet.

2 PAST WORK

The generative model that we study is an energy-based model. Such models include fields of experts (Roth & Black, 2009), products of experts (Hinton, 2002), Boltzmann machines (Hinton et al., 2006a), models based on neural networks (Hinton et al., 2006b), etc. However, most of these generative models and learning algorithms have not been applied to learning large and deep CNNs.

The relationship between generative models and discriminative approaches has been extensively studied (Ng & Jordan, 2002; Liang & Jordan, 2008). The usefulness of generative pre-training for deep learning has been studied by Erhan et al. (2010), among others. However, this issue has not been thoroughly investigated for CNNs.

As to visualization, our work is related to Erhan et al. (2009); Le et al. (2012); Girshick et al. (2014); Zeiler & Fergus (2013); Long et al. (2014). In Girshick et al. (2014); Long et al. (2014), the high-scoring image patches are directly presented. In Zeiler & Fergus (2013), a top-down deconvolution process is employed to understand what contents are emphasized in the high-scoring input image patches. In Erhan et al. (2009); Le et al. (2012); Simonyan et al. (2014), images are synthesized by maximizing the response of a given node in the network. In our work, a generative model is formally defined. We sample from the well-defined probability distribution by the HMC algorithm, generating meaningful and varied synthetic images, without resorting to a large collection of hold-out images (Girshick et al., 2014; Zeiler & Fergus, 2013; Long et al., 2014).

3 GENERATIVE MODEL BASED ON CNN

3.1 PROBABILITY DISTRIBUTIONS ON IMAGES

Suppose we observe images from many different object categories. Let x be an image from an object category y.
Consider the following probability distribution on x:

$$p_y(x; w) = \frac{1}{Z_y(w)} \exp\left(f_y(x; w)\right) q(x), \quad (1)$$

where q(x) is a reference distribution common to all the categories, f_y(x; w) is a scoring function for class y, w collects the unknown parameters to be learned from the data, and

$$Z_y(w) = E_q\left[\exp(f_y(x; w))\right] = \int \exp\left(f_y(x; w)\right) q(x)\, dx$$

is the normalizing constant or partition function. The distribution p_y(x; w) is in the form of an exponential tilting of the reference distribution q(x), and can be considered an energy-based model or an exponential family model.

In Model (1), the reference distribution q(x) may not be unique. If we change q(x) to q_1(x), then we can change f_y(x; w) to f_y(x; w) − log[q_1(x)/q(x)], which may correspond to an f_y(x; w_1) for a different w_1 if the parametrization of f_y(x; w) is flexible enough. We want to choose q(x) so that either q(x) is reasonably close to p_y(x; w), as in our non-parametric generative gradient method, or the resulting p_y(x; w) based on q(x) is easy to sample from, as in our generative visualization method.

For an image x, let y be the underlying object category or class label, so that p(x|y; w) = p_y(x; w). Suppose the prior distribution on y is p(y) = ρ_y. The posterior distribution of y given x is

$$p(y|x; w) = \frac{\exp\left(f_y(x; w) + \alpha_y\right)}{\sum_{y'} \exp\left(f_{y'}(x; w) + \alpha_{y'}\right)}, \quad (2)$$

where α_y = log ρ_y − log Z_y(w). p(y|x; w) is in the form of a multi-class logistic regression, where α_y can be treated as an intercept parameter to be estimated directly if the model is trained discriminatively. Thus, for notational simplicity, we shall assume that the intercept term α_y is already absorbed into w for the rest of the paper.

Note that f_y(x; w) is not unique in (2). If we change f_y(x; w) to f_y(x; w) − g(x) for a g(x) that is common to all the categories, we still have the same p(y|x; w). This non-uniqueness corresponds to the non-uniqueness of q(x) in (1) mentioned above.

Given a set of labeled data {(x_i, y_i)}, equations (1) and (2) suggest two different methods to estimate the parameters w. One is to maximize the generative log-likelihood $l_G(w) = \sum_i \log p(x_i|y_i, w)$, which is the same as maximizing the full log-likelihood $\sum_i \log p(x_i, y_i|w)$, where the prior probability ρ_y can be estimated by the class frequency of category y. The other is to maximize the discriminative log-likelihood $l_D(w) = \sum_i \log p(y_i|x_i, w)$.

For the discriminative model (2), a popular choice of f_y(x; w) is a multi-layer perceptron or CNN, with w being the connection weights and the top layer being a multi-class logistic regression. This is the choice we adopt throughout this paper.
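To make (2) concrete, the following is a minimal numpy sketch (ours, not the authors' code) that computes the posterior p(y|x; w) from a vector of class scores; the scores f and intercepts alpha are made-up inputs standing in for the CNN's top-layer output.

```python
import numpy as np

def posterior(f, alpha):
    """Multi-class logistic regression of eq. (2): p(y | x; w) from scores.

    f[y]     -- score f_y(x; w) for each class y (in the paper, the CNN's
                top-layer output for image x; here just a made-up vector)
    alpha[y] -- intercept alpha_y = log(rho_y) - log(Z_y(w))
    """
    s = f + alpha
    s = s - s.max()          # shift by the max for numerical stability
    p = np.exp(s)
    return p / p.sum()

# Toy usage: five classes, uniform prior (alpha = 0).
f = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
print(posterior(f, np.zeros(5)))   # sums to 1, largest mass on class 0
```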
3.2 GENERATIVE GRADIENT

The gradient of the discriminative log-likelihood is calculated according to

$$\frac{\partial}{\partial w} \log p(y_i|x_i; w) = \frac{\partial}{\partial w} f_{y_i}(x_i; w) - E_D\left[\frac{\partial}{\partial w} f_y(x_i; w)\right], \quad (3)$$

where α_y is absorbed into w as mentioned above, and the expectation for the discriminative gradient is

$$E_D\left[\frac{\partial}{\partial w} f_y(x_i; w)\right] = \sum_y \frac{\partial f_y(x_i; w)}{\partial w} \frac{\exp(f_y(x_i; w))}{\sum_{y'} \exp(f_{y'}(x_i; w))}. \quad (4)$$

The gradient of the generative log-likelihood is calculated according to

$$\frac{\partial}{\partial w} \log p_{y_i}(x_i; w) = \frac{\partial}{\partial w} f_{y_i}(x_i; w) - E_G\left[\frac{\partial}{\partial w} f_{y_i}(x; w)\right], \quad (5)$$

where the expectation for the generative gradient is

$$E_G\left[\frac{\partial}{\partial w} f_{y_i}(x; w)\right] = \int \frac{\partial f_{y_i}(x; w)}{\partial w} \frac{1}{Z_{y_i}(w)} \exp(f_{y_i}(x; w))\, q(x)\, dx, \quad (6)$$

which can be approximated by importance sampling. Specifically, let $\{\tilde{x}_j\}_{j=1}^m$ be a set of samples from q(x); for instance, q(x) is the distribution of images from all the categories. Here we do not attempt to model q(x) parametrically; instead, we treat it as an implicit non-parametric distribution. Then by importance sampling,

$$E_G\left[\frac{\partial}{\partial w} f_{y_i}(x; w)\right] \approx \sum_j \frac{\partial f_{y_i}(\tilde{x}_j; w)}{\partial w} W_j, \quad (7)$$

where the importance weight $W_j \propto \exp(f_{y_i}(\tilde{x}_j; w))$ and is normalized to have sum 1. Namely,

$$\frac{\partial}{\partial w} \log p_{y_i}(x_i; w) \approx \frac{\partial}{\partial w} f_{y_i}(x_i; w) - \sum_j \frac{\partial f_{y_i}(\tilde{x}_j; w)}{\partial w} \frac{\exp(f_{y_i}(\tilde{x}_j; w))}{\sum_k \exp(f_{y_i}(\tilde{x}_k; w))}. \quad (8)$$

The discriminative gradient and the generative gradient differ subtly and yet fundamentally in calculating $E[\partial f_y(x; w)/\partial w]$, whose difference from the observed $\partial f_{y_i}(x_i; w)/\partial w$ provides the driving force for updating w. In the discriminative gradient, the expectation is with respect to the posterior distribution of the class label y while the image x_i is fixed, whereas in the generative gradient, the expectation is with respect to the distribution of the images x while the class label y_i is fixed. In general, it is easier to adjust the parameters w to predict the class labels than to reproduce the features of the images. So it is expected that the generative gradient provides a stronger driving force for updating w.

The non-parametric generative gradient can be especially useful in the beginning stage of training, or what can be called pre-training, where w is small, so that the current p_y(x; w) for each category y is not very separated from q(x), which is the overall distribution of x. In this stage, the importance weights W_j are not very skewed and the effective sample size for importance sampling can be large. So updating w according to the generative gradient can provide useful pre-training, with the potential to lead w toward a good local optimum. If the importance weights W_j start to become skewed and the effective sample size starts to dwindle, this indicates that the distributions p_y(x; w) start to separate from q(x) as well as from each other, so we can switch to discriminative training to further separate the categories.

3.3 BATCH TRAINING AND GENERATIVE LOSS LAYER

At first glance, the generative gradient appears computationally expensive due to the need to sample from q(x). In fact, with q(x) being the collection of images from all the categories, we may use each batch of samples as an approximation to q(x) in the batch training mode. Specifically, let $\{(x_i, y_i)\}_{i=1}^n$ be a batch of training examples, and suppose we seek to maximize $\sum_i \log p_{y_i}(x_i; w)$ via the generative gradient. In the calculation of $\partial \log p_{y_i}(x_i; w)/\partial w$, $\{x_j\}_{j=1}^n$ can be used as samples from q(x). In this way, the computational cost of the generative gradient is about the same as that of the discriminative gradient.

Moreover, the computation of the generative gradient can be made to share the same back-propagation architecture as the discriminative gradient. Specifically, the calculation of the generative gradient can be decoupled into the calculation at a new generative loss layer and the calculation at the lower layers. To be more specific, by replacing $\{\tilde{x}_j\}_{j=1}^m$ in (8) by the batch sample $\{x_j\}_{j=1}^n$, we can rewrite (8) in the following form:

$$\frac{\partial}{\partial w} \log p_{y_i}(x_i; w) \approx \sum_{j, y} \frac{\partial \log p_{y_i}(x_i; w)}{\partial f_y(x_j; w)} \frac{\partial f_y(x_j; w)}{\partial w},$$

where $\partial \log p_{y_i}(x_i; w)/\partial f_y(x_j; w)$ is called the generative loss layer (defined below, with $f_y(x_j; w)$ treated here as a variable in the chain rule), while the calculation of $\partial f_y(x_j; w)/\partial w$ is exactly the same as in the discriminative gradient. This decoupling brings simplicity to programming. We use the notation $\partial \log p_{y_i}(x_i; w)/\partial f_y(x_j; w)$ for the top generative layer mainly to make it conform to the chain rule calculation. According to (8), $\partial \log p_{y_i}(x_i; w)/\partial f_y(x_j; w)$ is defined by

$$\frac{\partial \log p_{y_i}(x_i; w)}{\partial f_y(x_j; w)} = \begin{cases} 1 - \dfrac{\exp(f_{y_i}(x_j; w))}{\sum_k \exp(f_{y_i}(x_k; w))}, & y = y_i,\ j = i; \\[2ex] -\dfrac{\exp(f_{y_i}(x_j; w))}{\sum_k \exp(f_{y_i}(x_k; w))}, & y = y_i,\ j \neq i, \end{cases}$$

and it equals 0 for $y \neq y_i$, since $\log p_{y_i}(x_i; w)$ involves only $f_{y_i}$.
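The contrast between the two gradients can be seen in a minimal numpy sketch of the two loss layers (our illustration, not the authors' Caffe implementation). With the batch score matrix F, where F[i, y] = f_y(x_i; w), the discriminative gradient (3)-(4) applies a softmax across the class axis for each image, while the generative loss layer above applies a softmax across the batch axis within each example's own class column.

```python
import numpy as np

def log_softmax(a, axis):
    a = a - a.max(axis=axis, keepdims=True)   # stabilized log-softmax
    return a - np.log(np.exp(a).sum(axis=axis, keepdims=True))

def discriminative_grad(F, labels):
    """d l_D / dF per eqs. (3)-(4): softmax over classes, image fixed."""
    n, _ = F.shape
    P = np.exp(log_softmax(F, axis=1))        # P[i, y] = p(y | x_i; w)
    G = -P
    G[np.arange(n), labels] += 1.0            # observed term for y = y_i
    return G

def generative_grad(F, labels):
    """d l_G / dF per eq. (8): softmax over the batch, class label fixed."""
    P = np.exp(log_softmax(F, axis=0))        # normalized importance weights W_j, per class column
    G = np.zeros_like(F)
    for i, y in enumerate(labels):
        G[i, y] += 1.0                        # observed term for x_i
        G[:, y] -= P[:, y]                    # reweighted batch term; columns of absent classes stay 0
    return G
```

Both functions return the gradient of a log-likelihood, i.e., an ascent direction, and in both cases the correction term balances the observed term: each row of the discriminative gradient and each column of the generative gradient sums to zero.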
3.4 GENERATIVE VISUALIZATION

Recently, researchers have become interested in understanding what the machine learns. Suppose we care about a node at the top layer (the idea can be applied to nodes at any layer). We consider generating samples from p_y(x; w) with w already learned by discriminative training (or any other method). For this purpose, we need to assume a parametric reference distribution q(x), such as the Gaussian white noise distribution. After discriminatively learning f_y(x; w) for all y, we can sample from the corresponding p_y(x; w) by Hamiltonian Monte Carlo (HMC) (Neal, 2011).

Specifically, for any category y, we can write $p_y(x; w) \propto \exp(-U(x))$, where $U(x) = -f_y(x; w) + |x|^2/(2\sigma^2)$ (σ is the standard deviation of q(x)). In the physics context, x is a position vector and U(x) is the potential energy function. To implement Hamiltonian dynamics, we need to introduce an auxiliary momentum vector φ and the corresponding kinetic energy function $K(\phi) = |\phi|^2/(2m)$, where m denotes the mass. Thus a fictitious physical system described by the canonical coordinates (x, φ) is defined, and its total energy is $H(x, \phi) = U(x) + K(\phi)$. Each iteration of HMC draws a random sample of φ from its marginal Gaussian distribution, and then evolves (x, φ) according to the Hamiltonian dynamics that conserves the total energy. A key step in the leapfrog algorithm is the computation of the derivative of the potential energy, $\partial U/\partial x$, which includes calculating $\partial f_y(x; w)/\partial x$. The computation of $\partial f_y(x; w)/\partial x$ involves bottom-up convolution and max-pooling, followed by top-down deconvolution and arg-max un-pooling. The max-pooling and arg-max un-pooling are applied to the current synthesized image (not an input image, which is not needed by our method). This top-down derivative computation is derived from HMC, and is different from Zeiler & Fergus (2013). The visualization sequence for one category is shown in Fig. 1.

Figure 1: The sequence of images (iterations 0, 10, 50, and 100) sampled from the "Starfish, sea star" category of the AlexNet network (Krizhevsky et al., 2012) discriminatively trained on ImageNet ILSVRC-2012.
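Below is a minimal numpy sketch of this sampler under our own simplifying assumptions: f_and_grad is a hypothetical callable returning f_y(x; w) and its gradient with respect to x (in the paper, one bottom-up convolution pass and one top-down deconvolution pass through the CNN), and we add the standard Metropolis correction for the leapfrog discretization error (Neal, 2011), which the text does not spell out.

```python
import numpy as np

def hmc_sample(x0, f_and_grad, sigma=10.0, mass=1e-4,
               step=1e-4, n_leapfrog=100, n_iter=300, seed=0):
    """Sample from p_y(x; w) ~ exp(f_y(x; w) - |x|^2/(2 sigma^2)) by HMC.

    f_and_grad(x) -> (f_y(x; w), df_y(x; w)/dx); hypothetical stand-in for
    the bottom-up/top-down passes through the trained CNN.
    """
    rng = np.random.default_rng(seed)

    def U(x):                                  # potential energy
        f, _ = f_and_grad(x)
        return -f + np.sum(x**2) / (2 * sigma**2)

    def grad_U(x):
        _, g = f_and_grad(x)
        return -g + x / sigma**2

    x = x0.copy()
    for _ in range(n_iter):
        phi = rng.normal(scale=np.sqrt(mass), size=x.shape)  # K(phi) = |phi|^2/(2m)
        x_new = x.copy()
        phi_new = phi - 0.5 * step * grad_U(x_new)           # half step for momentum
        for _ in range(n_leapfrog):                          # leapfrog integration
            x_new = x_new + step * phi_new / mass
            phi_new = phi_new - step * grad_U(x_new)
        phi_new = phi_new + 0.5 * step * grad_U(x_new)       # roll back the extra half step
        # Metropolis correction: accept with probability min(1, exp(H_old - H_new))
        dH = (U(x) + np.sum(phi**2) / (2 * mass)) \
             - (U(x_new) + np.sum(phi_new**2) / (2 * mass))
        if np.log(rng.uniform()) < dH:
            x = x_new
    return x
```

With the LeNet settings reported in Section 4.2 (sigma = 10, mass 1e-4, step size 1e-4, 100 leapfrog steps, 300 iterations), x0 would be a Gaussian-initialized image with standard deviation 10.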
4 EXPERIMENTS

4.1 GENERATIVE PRE-TRAINING

In the generative pre-training experiments, three training approaches are studied: i) discriminative gradient (DG); ii) generative gradient (GG); iii) generative gradient pre-training + discriminative gradient refining (GG+DG). We build our algorithms on the code of Caffe (Jia et al., 2014), and the experimental settings are identical to Jia et al. (2014). Experiments are performed on two commonly used image classification benchmarks: MNIST (LeCun et al., 1998) handwritten digit recognition and ImageNet ILSVRC-2012 (Deng et al., 2009) natural image classification.

MNIST handwritten digit recognition. We first study generative pre-training on the MNIST dataset. The LeNet network (LeCun et al., 1998) is utilized, which is the default for MNIST in Caffe. Although higher accuracy can be achieved by utilizing deeper networks, random image distortions, etc., here we stick to the baseline network for fair comparison and experimental efficiency. Network training and testing are performed on the train and test sets respectively. For all three training approaches, stochastic gradient descent is performed in training with a batch size of 64, a base learning rate of 0.01, a weight decay term of 0.0005, a momentum term of 0.9, and a maximum epoch number of 25. For GG+DG, the pre-training stage stops after 16 epochs and the discriminative gradient tuning stage starts with a base learning rate of 0.003.

The experimental results are presented in Table 1. The error rate of LeNet trained by the discriminative gradient is 1.03%. When trained by the generative gradient, the error rate reduces to 0.85%. When generative gradient pre-training and discriminative gradient refining are both applied, the error rate further reduces to 0.78%, which is 0.25% (24% relatively) lower than that of the discriminative gradient.

Table 1: Error rates (%) on the MNIST test set of different training approaches utilizing the LeNet network (LeCun et al., 1998).

| Training approach | DG | GG | GG+DG |
|---|---|---|---|
| Error rate | 1.03 | 0.85 | 0.78 |

ImageNet ILSVRC-2012 natural image classification. In the experiments on ImageNet ILSVRC-2012, two networks are utilized, namely AlexNet (Krizhevsky et al., 2012) and ZeilerFergusNet (fast) (Zeiler & Fergus, 2013). Network training and testing are performed on the train and val sets respectively. In training, a single network is trained by stochastic gradient descent with a batch size of 256, a base learning rate of 0.01, a weight decay term of 0.0005, a momentum term of 0.9, and a maximum epoch number of 70. For GG+DG, the pre-training stage stops after 45 epochs and the discriminative gradient tuning stage starts with a base learning rate of 0.003. In testing, top-1 classification error rates are reported on the val set by classifying the center and the four corner crops of the input images.

As shown in Table 2, the error rates of discriminative gradient training applied to AlexNet and ZeilerFergusNet are 40.7% and 38.4% respectively, while the error rates of the generative gradient are 45.8% and 44.3% respectively. Generative gradient pre-training followed by discriminative gradient refining achieves error rates of 39.6% and 37.4% respectively, which are 1.1% and 1.0% lower than those of the discriminative gradient.

Table 2: Top-1 classification error rates (%) on the ImageNet ILSVRC-2012 val set of different training approaches.

| Network | DG | GG | GG+DG |
|---|---|---|---|
| AlexNet | 40.7 | 45.8 | 39.6 |
| ZeilerFergusNet (fast) | 38.4 | 44.3 | 37.4 |

The experimental results on MNIST and ImageNet ILSVRC-2012 show that generative gradient pre-training followed by discriminative gradient refining improves the classification accuracy across networks. At the beginning stage of training, updating the network parameters according to the generative gradient provides useful pre-training, which leads the network parameters toward a good local optimum. As to the computational cost, the generative gradient is on par with the discriminative gradient: the cost of the generative loss layer itself is negligible compared to the computation at the convolutional and fully-connected layers, and the total epoch number of GG+DG is on par with that of DG.
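As a concrete illustration of the two-stage GG+DG schedule, here is a toy sketch under our own assumptions (not the paper's Caffe setup): it reuses discriminative_grad and generative_grad from the Section 3.3 sketch, substitutes a linear scorer for the CNN, uses random stand-in data, and follows the MNIST hyperparameters quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2048, 784))          # stand-in "images" (random, for illustration only)
Y = rng.integers(0, 10, size=2048)        # stand-in labels
W = np.zeros((784, 10))                   # linear scorer f_y(x; w) = x . w_y in place of the CNN
V = np.zeros_like(W)                      # momentum buffer

def run_epochs(n_epochs, grad_fn, lr, batch=64, momentum=0.9, decay=5e-4):
    """One training stage: SGD ascent on the chosen log-likelihood."""
    global W, V
    for _ in range(n_epochs):
        for s in range(0, len(X), batch):
            xb, yb = X[s:s+batch], Y[s:s+batch]
            F = xb @ W                                # batch scores F[i, y] = f_y(x_i; w)
            dW = xb.T @ grad_fn(F, yb) / len(xb)      # chain rule through the linear scorer
            V = momentum * V + lr * (dW - decay * W)  # momentum + weight decay
            W = W + V

run_epochs(16, generative_grad, lr=0.01)      # GG pre-training stage (epochs 1-16)
run_epochs(9, discriminative_grad, lr=0.003)  # DG refining stage (epochs 17-25)
```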
4.2 GENERATIVE VISUALIZATION

In the generative visualization experiments, we visualize the nodes of the LeNet network and the AlexNet network trained by the discriminative gradient on MNIST and ImageNet ILSVRC-2012 respectively. The algorithm can visualize networks trained by the generative gradient as well.

We first visualize the nodes at the final fully-connected layer of LeNet. In the experiments, we delete the drop-out layer to avoid unnecessary noise in visualization. At the beginning of visualization, x is initialized from a Gaussian distribution with standard deviation 10. The HMC iteration number, the leapfrog step size, the leapfrog step number, the standard deviation σ of the reference distribution, and the particle mass are set to 300, 0.0001, 100, 10, and 0.0001 respectively. The visualization results are shown in Fig. 2.

Figure 2: Samples from the nodes at the final fully-connected layer in the fully trained LeNet model, which correspond to different handwritten digits.

We further visualize the nodes in AlexNet, which is a much larger network than LeNet. Nodes from both the intermediate convolutional layers (conv1 to conv5) and the final fully-connected layer (fc8) are visualized. To visualize an intermediate layer, for instance the layer conv2 with 256 filters, all layers above conv2 are removed other than the generative visualization layer. The size of the synthesized images is chosen so that the dimension of the response from conv2 is 1 × 1 × 256; we can then visualize each filter by assigning a label from 1 to 256. The leapfrog step size, the leapfrog step number, the standard deviation σ of the reference distribution, and the particle mass are set to 0.000003, 50, 10, and 0.00001 respectively. The HMC iteration numbers are 100 and 500 for nodes from the intermediate convolutional layers and the final fully-connected layer respectively. The synthesized images for the final layer are initialized from the zero image.

The samples from the intermediate convolutional layers and the final fully-connected layer of AlexNet are shown in Fig. 3 and Fig. 4 respectively. More examples are included in the supplementary materials. The HMC algorithm produces meaningful and varied samples, which reveal what is learned by the nodes at different layers of the network. Note that such samples are generated from the trained model directly, without using a large hold-out collection of images as in Girshick et al. (2014); Zeiler & Fergus (2013); Long et al. (2014).

Figure 3: Samples from the nodes at the intermediate convolutional layers (conv1 to conv5) in the fully trained AlexNet model.

Figure 4: Samples from the nodes at the final fully-connected layer (fc8) in the fully trained AlexNet model; the shown categories include (b) Ostrich and (d) Horse cart.

As to the computational cost, it varies for nodes at different layers within different networks. On a desktop with a GTX Titan, it takes about 0.4 minutes to draw a sample for nodes at the final fully-connected layer of LeNet. In AlexNet, it takes about 0.5 minutes and 12 minutes to draw a sample for nodes at the first convolutional layer and at the final fully-connected layer respectively. The code can be downloaded at http://www.stat.ucla.edu/~yang.lu/Project/generativeCNN/main.html

5 CONCLUSION

Given the recent successes of CNNs, it is worthwhile to explore their generative aspects. In this work, we show that a simple generative model can be constructed based on the CNN. The generative model helps to pre-train the CNN. It also helps to visualize the knowledge of the learned CNN. The proposed visualization scheme samples from the generative model, and it may be turned into a parametric generative learning algorithm, where the generative gradient would be approximated by samples generated from the current model.

ACKNOWLEDGEMENT

The work is supported by NSF DMS 1310391, ONR MURI N00014-10-1-0933, and DARPA MSEE FA8650-11-1-7149.
REFERENCES

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on, pp. 248-255. IEEE, 2009.

Erhan, Dumitru, Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep., 2009.

Erhan, Dumitru, Bengio, Yoshua, Courville, Aaron, Manzagol, Pierre-Antoine, Vincent, Pascal, and Bengio, Samy. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625-660, 2010.

Girshick, Ross, Donahue, Jeff, Darrell, Trevor, and Malik, Jitendra. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 580-587. IEEE, 2014.

Hinton, Geoffrey. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.

Hinton, Geoffrey, Osindero, Simon, and Teh, Yee-Whye. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006a.

Hinton, Geoffrey, Osindero, Simon, Welling, Max, and Teh, Yee-Whye. Unsupervised discovery of nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725-731, 2006b.

Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Ng, Andrew Y. and Jordan, Michael I. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems, 14:841, 2002.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Le, Quoc V., Monga, Rajat, Devin, Matthieu, Chen, Kai, Corrado, Greg S., Dean, Jeff, and Ng, Andrew Y. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.

LeCun, Yann, Boser, Bernhard, Denker, John S, Henderson, Donnie, Howard, Richard E, Hubbard, Wayne, and Jackel, Lawrence D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.

LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Liang, Percy and Jordan, Michael I. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the 25th International Conference on Machine Learning, pp. 584-591. ACM, 2008.

Long, Jonathan L, Zhang, Ning, and Darrell, Trevor. Do convnets learn correspondence? In Advances in Neural Information Processing Systems, pp. 1601-1609, 2014.

Neal, Radford M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.

Rifai, Salah, Vincent, Pascal, Muller, Xavier, Glorot, Xavier, and Bengio, Yoshua. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 833-840, 2011.

Roth, Stefan and Black, Michael J. Fields of experts. International Journal of Computer Vision, 82(2):205-229, 2009.

Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448-455, 2009.
Simonyan, Karen, Vedaldi, Andrea, and Zisserman, Andrew. Deep inside convolutional networks: Visualising image classification models and saliency maps. Workshop at International Conference on Learning Representations, 2014.

Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.

SUPPLEMENTARY MATERIALS

A. DISCRIMINATIVE VS GENERATIVE LOG-LIKELIHOOD AND GRADIENT FOR BATCH TRAINING

During training, on a batch of training examples $\{(x_i, y_i), i = 1, \ldots, n\}$, with the batch itself used as the sample from q(x), the generative log-likelihood is, up to an additive constant that does not depend on w,

$$l_G(w) = \sum_i \log p(x_i|y_i, w) \approx \sum_i \log \frac{\exp(f_{y_i}(x_i; w))}{\sum_k \exp(f_{y_i}(x_k; w))/n}.$$

The gradient with respect to w is

$$\frac{\partial l_G}{\partial w} \approx \sum_i \left[ \frac{\partial f_{y_i}(x_i; w)}{\partial w} - \sum_j \frac{\partial f_{y_i}(x_j; w)}{\partial w} \frac{\exp(f_{y_i}(x_j; w))}{\sum_k \exp(f_{y_i}(x_k; w))} \right].$$

The discriminative log-likelihood is

$$l_D(w) = \sum_i \log p(y_i|x_i, w) = \sum_i \log \frac{\exp(f_{y_i}(x_i; w))}{\sum_y \exp(f_y(x_i; w))}.$$

The gradient with respect to w is

$$\frac{\partial l_D}{\partial w} = \sum_i \left[ \frac{\partial f_{y_i}(x_i; w)}{\partial w} - \sum_y \frac{\partial f_y(x_i; w)}{\partial w} \frac{\exp(f_y(x_i; w))}{\sum_{y'} \exp(f_{y'}(x_i; w))} \right].$$

l_D and l_G are similar in form but differ in the summation operations. In l_D, the summation is over the category y while x_i is fixed, whereas in l_G, the summation is over the example x_j while y_i is fixed. In the generative gradient, we want f_{y_i} to assign high scores to x_i and to the other observations that belong to category y_i, and low scores to the observations that do not belong to y_i. This constraint is on the single f_{y_i}, regardless of what the other f_y do for y ≠ y_i. In the discriminative gradient, we want the f_y(x_i) to work together across all y, so that f_{y_i} assigns a higher score to x_i than the other f_y for y ≠ y_i. Intuitively, the discriminative constraint is weaker because it involves all the f_y jointly, and the generative constraint is stronger because it involves each single f_y. After generative learning, the f_y are well behaved, and we can then continue to refine them (including the intercepts for the different y) to satisfy the discriminative constraint.

B. MORE GENERATIVE VISUALIZATION EXAMPLES

More generative visualization examples for the nodes at the final fully-connected layer in the fully trained AlexNet model are shown in Fig. B1, Fig. B2 and Fig. B3.

Figure B1: More samples from the nodes at the final fully-connected layer (fc8) in the fully trained AlexNet model, which correspond to different object categories (part 1); the shown categories include (b) Peacock.

Figure B2: More samples from the nodes at the final fully-connected layer (fc8) in the fully trained AlexNet model, which correspond to different object categories (part 2); the shown categories include (a) Lotion bottle, (c) Lawn mower, and (d) Hourglass.

Figure B3: More samples from the nodes at the final fully-connected layer (fc8) in the fully trained AlexNet model, which correspond to different object categories (part 3); the shown categories include (c) Academic gown.