# Triangle Generative Adversarial Networks

Zhe Gan*, Liqun Chen*, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, Lawrence Carin
Duke University
zhe.gan@duke.edu
*Equal contribution.

Abstract

A Triangle Generative Adversarial Network (Δ-GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consist of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. Δ-GAN consists of four neural networks: two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered: image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.

1 Introduction

Generative adversarial networks (GANs) [1] have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. When trained on datasets of natural images, significant progress has been made on generating realistic and sharp-looking images [2, 3]. The original GAN formulation was designed to learn the data distribution in one domain. In practice, one may also be interested in matching two joint distributions. This is an important task, since mapping data samples from one domain to another has a wide range of applications. For instance, matching the joint distribution of image-text pairs allows simultaneous image captioning and text-conditional image generation [4], while image-to-image translation [5] is another challenging problem that requires matching the joint distribution of image-image pairs.

In this work, we are interested in designing a GAN framework to match joint distributions. If paired data are available, a simple approach to achieve this is to train a conditional GAN model [4, 6], from which a joint distribution is readily manifested and can be matched to the empirical joint distribution provided by the paired data. However, fully supervised data are often difficult to acquire. Several methods have been proposed to achieve unsupervised joint distribution matching without any paired data, including DiscoGAN [7], CycleGAN [8] and DualGAN [9]. Adversarially Learned Inference (ALI) [10] and Bidirectional GAN (BiGAN) [11] can be readily adapted to this case as well. Though these methods have achieved great empirical success, in principle there exist infinitely many mapping functions that can map a sample from one domain to another. In order to alleviate this non-identifiability issue, paired data are needed to provide proper supervision, informing the model which kind of joint distribution is desired.

This motivates the proposed Triangle Generative Adversarial Network (Δ-GAN), a GAN framework that allows semi-supervised joint distribution matching, where the supervision of domain correspondence is provided by only a few paired samples.
Figure 1: Illustration of the Triangle Generative Adversarial Network (Δ-GAN).

Δ-GAN consists of two generators and two discriminators. The generators are designed to learn the bidirectional mappings between domains, while the discriminators are trained to distinguish real data pairs and two kinds of fake data pairs. Both the generators and discriminators are trained together via adversarial learning.

Δ-GAN bears close resemblance to Triple GAN [12], a recently proposed method that can also be utilized for semi-supervised joint distribution mapping. However, there exist several key differences that make our work unique. First, Δ-GAN uses two discriminators in total, which implicitly define a ternary discriminative function, instead of the binary discriminator used in Triple GAN. Second, Δ-GAN can be considered as a combination of a conditional GAN and ALI, while Triple GAN consists of two conditional GANs. Third, in theory the distributions characterized by the two generators concentrate to the data distribution in both Δ-GAN and Triple GAN. However, when the discriminator is optimal, the objective of Δ-GAN becomes the Jensen-Shannon divergence (JSD) among three distributions, which is symmetric, whereas the objective of Triple GAN consists of a JSD term plus a Kullback-Leibler (KL) divergence term. The asymmetry of the KL term makes Triple GAN more prone to generating fake-looking samples [13]. Lastly, the calculation of the additional KL term in Triple GAN is equivalent to calculating a supervised loss, which requires the explicit density form of the conditional distributions; this may not be desirable. By contrast, Δ-GAN is a fully adversarial approach that does not require the conditional densities to be computable; Δ-GAN only requires that the conditional densities can be sampled from in a way that allows gradient backpropagation.

Δ-GAN is a general framework, and can be used to match any joint distributions. In experiments, in order to demonstrate the versatility of the proposed model, we consider three domain pairs — image-label, image-image and image-attribute pairs — and use them for semi-supervised classification, image-to-image translation and attribute-based image editing, respectively. In order to demonstrate the scalability of the model to large and complex datasets, we also present attribute-conditional image generation on the COCO dataset [14].

2 Model

2.1 Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) [1] consist of a generator $G$ and a discriminator $D$ that compete in a two-player minimax game, where the generator is learned to map samples from an arbitrary latent distribution to data, while the discriminator tries to distinguish between real and generated samples. The goal of the generator is to fool the discriminator by producing samples that are as close to real data as possible. Specifically, $D$ and $G$ are learned as
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \,, \qquad (1)$$
where $p(x)$ is the true data distribution, and $p_z(z)$ is usually defined to be a simple distribution, such as the standard normal distribution. The generator $G$ implicitly defines a probability distribution $p_g(x)$ as the distribution of the samples $G(z)$ obtained when $z \sim p_z(z)$. For any fixed generator $G$, the optimal discriminator is $D(x) = \frac{p(x)}{p_g(x) + p(x)}$. When the discriminator is optimal, solving this adversarial game is equivalent to minimizing the Jensen-Shannon divergence (JSD) between $p(x)$ and $p_g(x)$ [1]. The global equilibrium is achieved if and only if $p(x) = p_g(x)$.
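For concreteness, the following is a minimal sketch of one alternating update under objective (1), assuming PyTorch; the tiny MLP architectures, hyperparameters and data stand-ins below are illustrative assumptions, not the networks used in this paper.

```python
# A minimal sketch of one alternating GAN update for Eq. (1), assuming PyTorch.
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size, eps = 64, 784, 32, 1e-8
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

x_real = torch.randn(batch_size, data_dim)   # stand-in for a batch from p(x)
z = torch.randn(batch_size, latent_dim)      # z ~ p_z(z), standard normal

# Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
d_loss = -(torch.log(D(x_real) + eps) + torch.log(1 - D(G(z).detach()) + eps)).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: descend E[log(1 - D(G(z)))], the minimax form of Eq. (1)
g_loss = torch.log(1 - D(G(z)) + eps).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```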
2.2 Triangle Generative Adversarial Networks (Δ-GANs)

We now extend GAN to Δ-GAN for joint distribution matching. We first consider Δ-GAN in the supervised setting, and then discuss semi-supervised learning in Section 2.4. Consider two related domains, with $x$ and $y$ being the data samples for each domain. We have fully-paired data samples that are characterized by the joint distribution $p(x, y)$, which also implies that samples from both marginals $p(x)$ and $p(y)$ can be easily obtained.

Δ-GAN consists of two generators: (i) a generator $G_x(y)$ that defines the conditional distribution $p_x(x|y)$, and (ii) a generator $G_y(x)$ that characterizes the conditional distribution in the other direction, $p_y(y|x)$. $G_x$ and $G_y$ may also implicitly take a random latent variable $z$ as input, i.e., $G_x(y, z)$ and $G_y(x, z)$. In the Δ-GAN game, after a sample $x$ is drawn from $p(x)$, the generator $G_y$ produces a pseudo sample $\tilde{y}$ following the conditional distribution $p_y(y|x)$. Hence, the fake data pair $(x, \tilde{y})$ is a sample from the joint distribution $p_y(x, y) = p_y(y|x)p(x)$. Similarly, a fake data pair $(\tilde{x}, y)$ can be sampled from the generator $G_x$ by first drawing $y$ from $p(y)$ and then drawing $\tilde{x}$ from $p_x(x|y)$; hence $(\tilde{x}, y)$ is sampled from the joint distribution $p_x(x, y) = p_x(x|y)p(y)$. As such, the generative processes behind $p_x(x, y)$ and $p_y(x, y)$ are reversed.

The objective of Δ-GAN is to match the three joint distributions $p(x, y)$, $p_x(x, y)$ and $p_y(x, y)$. If this is achieved, we are ensured that we have learned bidirectional mappings $p_x(x|y)$ and $p_y(y|x)$ that guarantee the generated fake data pairs $(\tilde{x}, y)$ and $(x, \tilde{y})$ are indistinguishable from the true data pairs $(x, y)$. In order to match the joint distributions, an adversarial game is played. Joint pairs are drawn from one of the three distributions $p(x, y)$, $p_x(x, y)$ or $p_y(x, y)$, and two discriminator networks are learned to discriminate among the three, while the two conditional generator networks are trained to fool the discriminators. The value function describing the game is
$$\min_{G_x, G_y} \max_{D_1, D_2} V(G_x, G_y, D_1, D_2) = \mathbb{E}_{(x,y) \sim p(x,y)}[\log D_1(x, y)] + \mathbb{E}_{y \sim p(y),\, \tilde{x} \sim p_x(x|y)}\Big[\log\big((1 - D_1(\tilde{x}, y)) \cdot D_2(\tilde{x}, y)\big)\Big] + \mathbb{E}_{x \sim p(x),\, \tilde{y} \sim p_y(y|x)}\Big[\log\big((1 - D_1(x, \tilde{y})) \cdot (1 - D_2(x, \tilde{y}))\big)\Big] \,. \qquad (2)$$
The discriminator $D_1$ is used to distinguish whether a sample pair is from $p(x, y)$ or not; if the pair is not from $p(x, y)$, another discriminator $D_2$ is used to distinguish whether it is from $p_x(x, y)$ or $p_y(x, y)$. $D_1$ and $D_2$ work cooperatively, and the use of both implicitly defines a ternary discriminative function $D$ that distinguishes sample pairs in three ways. See Figure 1 for an illustration of the adversarial game and Appendix B for an algorithmic description of the training procedure.
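To make the three-way game concrete, below is a minimal sketch of the two sides of objective (2), assuming PyTorch; `D1` and `D2` are placeholder networks mapping a concatenated pair to a probability in (0, 1), and the three pair batches are hypothetical stand-ins for samples from $p(x, y)$, $p_x(x, y)$ and $p_y(x, y)$.

```python
# A sketch of the Δ-GAN losses from Eq. (2); not the paper's exact implementation.
import torch

def d_loss(D1, D2, real_pair, fake_x_pair, fake_y_pair, eps=1e-8):
    """Negated value function V: D1 and D2 jointly ascend V."""
    term_real = torch.log(D1(real_pair) + eps)                                # (x, y) ~ p(x, y)
    term_px = torch.log((1 - D1(fake_x_pair)) * D2(fake_x_pair) + eps)        # (x~, y) ~ p_x(x, y)
    term_py = torch.log((1 - D1(fake_y_pair)) * (1 - D2(fake_y_pair)) + eps)  # (x, y~) ~ p_y(x, y)
    return -(term_real.mean() + term_px.mean() + term_py.mean())

def g_loss(D1, D2, fake_x_pair, fake_y_pair, eps=1e-8):
    """G_x and G_y descend the fake-pair terms of V (minimax form)."""
    term_px = torch.log((1 - D1(fake_x_pair)) * D2(fake_x_pair) + eps)
    term_py = torch.log((1 - D1(fake_y_pair)) * (1 - D2(fake_y_pair)) + eps)
    return term_px.mean() + term_py.mean()
```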
2.3 Theoretical analysis

Δ-GAN shares many of the theoretical properties of GANs [1]. We first derive the optimal discriminators $D_1$ and $D_2$ for any given generators $G_x$ and $G_y$. These optimal discriminators then allow a reformulation of objective (2), which reduces to the Jensen-Shannon divergence among the joint distributions $p(x, y)$, $p_x(x, y)$ and $p_y(x, y)$.

Proposition 1. For any fixed generators $G_x$ and $G_y$, the optimal discriminators $D_1$ and $D_2$ of the game defined by $V(G_x, G_y, D_1, D_2)$ are
$$D_1^*(x, y) = \frac{p(x, y)}{p(x, y) + p_x(x, y) + p_y(x, y)}\,, \qquad D_2^*(x, y) = \frac{p_x(x, y)}{p_x(x, y) + p_y(x, y)}\,.$$

Proof. The proof is a straightforward extension of the proof in [1]. See Appendix A for details.

Proposition 2. The equilibrium of $V(G_x, G_y, D_1, D_2)$ is achieved if and only if $p(x, y) = p_x(x, y) = p_y(x, y)$, with $D_1^*(x, y) = \frac{1}{3}$ and $D_2^*(x, y) = \frac{1}{2}$, and the optimum value is $-3 \log 3$.

Proof. Given the optimal $D_1^*(x, y)$ and $D_2^*(x, y)$, the minimax game can be reformulated as
$$C(G_x, G_y) = \max_{D_1, D_2} V(G_x, G_y, D_1, D_2) \qquad (3)$$
$$= -3 \log 3 + 3\,\mathrm{JSD}\big(p(x, y), p_x(x, y), p_y(x, y)\big) \;\geq\; -3 \log 3 \,, \qquad (4)$$
where JSD denotes the Jensen-Shannon divergence among three distributions. See Appendix A for details.

Since $p(x, y) = p_x(x, y) = p_y(x, y)$ can be achieved in theory, it can be readily seen that the learned conditional generators reveal the true conditional distributions underlying the data, i.e., $p_x(x|y) = p(x|y)$ and $p_y(y|x) = p(y|x)$.

2.4 Semi-supervised learning

In order to further understand Δ-GAN, we rewrite (2) as
$$V = \underbrace{\mathbb{E}_{p(x,y)}[\log D_1(x, y)] + \mathbb{E}_{p_x(\tilde{x}, y)}[\log(1 - D_1(\tilde{x}, y))] + \mathbb{E}_{p_y(x, \tilde{y})}[\log(1 - D_1(x, \tilde{y}))]}_{\text{conditional GAN}} + \underbrace{\mathbb{E}_{p_x(\tilde{x}, y)}[\log D_2(\tilde{x}, y)] + \mathbb{E}_{p_y(x, \tilde{y})}[\log(1 - D_2(x, \tilde{y}))]}_{\text{BiGAN/ALI}} \,.$$
The objective of Δ-GAN is thus a combination of the objectives of a conditional GAN and BiGAN. The BiGAN part matches the two joint distributions $p_x(x, y)$ and $p_y(x, y)$, while the conditional GAN part provides the supervision signal that tells the BiGAN part which joint distribution to match. Therefore, Δ-GAN provides a natural way to perform semi-supervised learning, since the conditional GAN part and the BiGAN part can be used to account for paired and unpaired data, respectively.

However, when doing semi-supervised learning, there is one potential problem to be cautious about. The theoretical analysis in Section 2.3 is based on the assumption that the dataset is fully supervised, i.e., we have the ground-truth joint distribution $p(x, y)$ and the marginal distributions $p(x)$ and $p(y)$. In the semi-supervised setting, $p(x)$ and $p(y)$ are still available, but $p(x, y)$ is not; we can only obtain the joint distribution $p_l(x, y)$ characterized by the few paired data samples. Hence, in the semi-supervised setting, $p_x(x, y)$ and $p_y(x, y)$ will try to concentrate to the empirical distribution $p_l(x, y)$. We make the assumption that $p_l(x, y) \approx p(x, y)$, i.e., the paired data can roughly characterize the whole dataset. For example, in the semi-supervised classification problem, one usually strives to make sure that labels are equally distributed among the labeled dataset.
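In code, this decomposition makes the semi-supervised recipe direct: only the $D_1$ terms need paired samples, while the $D_2$ terms need only the marginals. A minimal sketch, assuming PyTorch and 2-D feature tensors; `cat_pair` and the batch names are hypothetical.

```python
# A sketch of the decomposition above as a semi-supervised objective.
# x_p, y_p: a (small) paired batch from p_l(x, y); x_u ~ p(x) and y_u ~ p(y)
# are unpaired batches. Not the paper's exact implementation.
import torch

def cat_pair(a, b):
    # hypothetical helper: concatenate a pair along the feature dimension
    return torch.cat([a, b], dim=1)

def semi_supervised_v(D1, D2, G_x, G_y, x_p, y_p, x_u, y_u, eps=1e-8):
    real = cat_pair(x_p, y_p)            # paired data -> conditional-GAN part
    fake_x = cat_pair(G_x(y_u), y_u)     # (x~, y) ~ p_x(x, y), from unpaired y
    fake_y = cat_pair(x_u, G_y(x_u))     # (x, y~) ~ p_y(x, y), from unpaired x

    cond_gan = (torch.log(D1(real) + eps).mean()
                + torch.log(1 - D1(fake_x) + eps).mean()
                + torch.log(1 - D1(fake_y) + eps).mean())
    bigan = (torch.log(D2(fake_x) + eps).mean()
             + torch.log(1 - D2(fake_y) + eps).mean())
    return cond_gan + bigan   # discriminators ascend V; generators descend it
```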
2.5 Relation to Triple GAN

Δ-GAN is closely related to Triple GAN [12]. Below we review Triple GAN and then discuss the main differences. The value function of Triple GAN is defined as
$$V = \mathbb{E}_{p(x,y)}[\log D(x, y)] + (1 - \alpha)\,\mathbb{E}_{p_x(\tilde{x}, y)}[\log(1 - D(\tilde{x}, y))] + \alpha\,\mathbb{E}_{p_y(x, \tilde{y})}[\log(1 - D(x, \tilde{y}))] + \mathbb{E}_{p(x,y)}[-\log p_y(y|x)] \,, \qquad (7)$$
where $\alpha \in (0, 1)$ is a constant that controls the relative importance of the two generators. Let Triple GAN-s denote a simplified Triple GAN model with only the first three terms. As can be seen, Triple GAN-s can be considered as a combination of two conditional GANs, with the importance of each conditional GAN weighted by $\alpha$. It can be proven that Triple GAN-s achieves equilibrium if and only if $p(x, y) = (1 - \alpha)p_x(x, y) + \alpha p_y(x, y)$, which is not desirable. To address this problem, Triple GAN adds a standard supervised loss $\mathcal{R}_{\mathcal{L}} = \mathbb{E}_{p(x,y)}[-\log p_y(y|x)]$. As a result, when the discriminator is optimal, the cost function of Triple GAN becomes
$$2\,\mathrm{JSD}\big(p(x, y) \,\|\, (1 - \alpha)p_x(x, y) + \alpha p_y(x, y)\big) + \mathrm{KL}\big(p(x, y) \,\|\, p_y(x, y)\big) + \text{const.} \qquad (8)$$
This cost function has the good property that it has a unique minimum at $p(x, y) = p_x(x, y) = p_y(x, y)$. However, the objective becomes asymmetric: the KL term pays low cost for generating fake-looking samples [13]. By contrast, Δ-GAN directly optimizes the symmetric Jensen-Shannon divergence among the three distributions. More importantly, the calculation of $\mathbb{E}_{p(x,y)}[-\log p_y(y|x)]$ in Triple GAN implies that the explicit density form of $p_y(y|x)$ must be provided, which may not be desirable. On the other hand, Δ-GAN only requires that $p_y(y|x)$ can be sampled from. For example, if we assume $p_y(y|x) = \int \delta(y - G_y(x, z))\, p(z)\, dz$, where $\delta(\cdot)$ is the Dirac delta function, we can sample $\tilde{y}$ by sampling $z$, even though the density function of $p_y(y|x)$ is not explicitly available.

2.6 Applications

Δ-GAN is a general framework that can be used for any joint distribution matching task. Besides the semi-supervised image classification task considered in [12], we also conduct experiments on image-to-image translation and attribute-conditional image generation. When modeling image pairs, both $p_x(x|y)$ and $p_y(y|x)$ are implemented without introducing additional latent variables, i.e., $p_x(x|y) = \delta(x - G_x(y))$ and $p_y(y|x) = \delta(y - G_y(x))$.

A different strategy is adopted when modeling image-label/attribute pairs. Specifically, let $x$ denote samples in the image domain and $y$ denote samples in the label/attribute domain, where $y$ is a one-hot vector or a binary vector when representing labels and attributes, respectively. When modeling $p_x(x|y)$, we assume that $x$ is transformed by latent style variables $z$ given the label or attribute vector $y$, i.e., $p_x(x|y) = \int \delta(x - G_x(y, z))\, p(z)\, dz$, where $p(z)$ is chosen to be a simple distribution (e.g., uniform or standard normal). When learning $p_y(y|x)$, $p_y(y|x)$ is assumed to be a standard multi-class or multi-label classifier without latent variables $z$. In order to allow the training signal to be backpropagated from $D_1$ and $D_2$ to $G_y$, we adopt the REINFORCE algorithm as in [12], and use the label with the maximum probability to approximate the expectation over $y$, or use the output of the sigmoid function as the predicted attribute vector; a sketch of these sampling strategies is given below.
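The following sketch illustrates the two sampling strategies just described, assuming PyTorch; `G_x` (a conditional image generator taking a latent $z$) and `C_y` (a classifier) are hypothetical modules, not the paper's exact architectures.

```python
# Sampling from the implicit conditionals of Section 2.6.
# p_x(x|y) = ∫ δ(x − G_x(y, z)) p(z) dz is sampled by drawing z and pushing
# it through G_x; p_y(y|x) is a plain classifier without latent variables.
import torch
import torch.nn.functional as F

def sample_x_given_y(G_x, y, latent_dim=100):
    z = torch.randn(y.size(0), latent_dim)   # z ~ p(z), standard normal
    return G_x(y, z)                         # x~ = G_x(y, z)

def sample_y_given_x(C_y, x, multi_label=False):
    logits = C_y(x)
    if multi_label:
        # attributes: use the sigmoid outputs as the predicted attribute vector
        return torch.sigmoid(logits)
    # labels: approximate the expectation over y with the one-hot vector of the
    # max-probability class; this step is non-differentiable, which is why the
    # paper passes the training signal to G_y via REINFORCE, as noted above.
    probs = F.softmax(logits, dim=1)
    return F.one_hot(probs.argmax(dim=1), num_classes=logits.size(1)).float()
```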
3 Related work

The proposed framework focuses on designing GANs for joint-distribution matching. A conditional GAN can be used for this task if supervised data are available. Various conditional GANs have been proposed to condition image generation on class labels [6], attributes [15], text [4, 16] and images [5, 17]. Unsupervised learning methods have also been developed for this task. BiGAN [11] and ALI [10] jointly learn a generation network and an inference network via adversarial learning. Though originally designed for learning the two-way transition between stochastic latent variables and real data samples, BiGAN and ALI can be directly adapted to learn the joint distribution of two real domains. Another method is DiscoGAN [7], in which two generators are used to model the bidirectional mapping between domains, and another two discriminators are used to decide whether a generated sample is fake in each individual domain. Further, additional reconstruction losses are introduced to make the two generators strongly coupled and to alleviate mode collapse. Similar work includes CycleGAN [8], DualGAN [9] and DTN [18]. Additional weight-sharing constraints are introduced in CoGAN [19] and UNIT [20]. Our work differs from the above in that we aim at semi-supervised joint distribution matching. The only work that we are aware of that also achieves this goal is Triple GAN. However, our model is distinct from Triple GAN in important ways (see Section 2.5). Further, Triple GAN focuses only on image classification, while Δ-GAN has been shown to be applicable to a wide range of applications.

Various methods and model architectures have been proposed to improve and stabilize the training of GANs, such as feature matching [21, 22, 23], Wasserstein GAN [24], energy-based GAN [25] and unrolled GAN [26], among many other related works. Our work is orthogonal to these methods, which could also be used to improve the training of Δ-GAN. Instead of using an adversarial loss, there also exists work that uses supervised learning [27] for joint-distribution matching, and variational autoencoders for semi-supervised learning [28, 29]. Lastly, our work is also closely related to the recent work of [30, 31, 32], which treats one of the two domains as latent variables.

4 Experiments

We present results on three tasks: (i) semi-supervised classification on CIFAR10 [33]; (ii) image-to-image translation on MNIST [34] and the edges2shoes dataset [5]; and (iii) attribute-to-image generation on CelebA [35] and COCO [14]. We also conduct a toy data experiment to further demonstrate the differences between Δ-GAN and Triple GAN. We implement Δ-GAN without introducing additional regularization unless explicitly stated. All the network architectures are provided in the Appendix.

Figure 2: Toy data experiment on Δ-GAN and Triple GAN. (a) The joint distribution $p(x, y)$ of real data. For (b) Δ-GAN and (c) Triple GAN, the left and right figures show the learned joint distributions $p_x(x, y)$ and $p_y(x, y)$, respectively.

Table 1: Error rates (%) on the partially labeled CIFAR10 dataset.

| Algorithm | n = 4000 |
|---|---|
| CatGAN [36] | 19.58 ± 0.58 |
| Improved GAN [21] | 18.63 ± 2.32 |
| ALI [10] | 17.99 ± 1.62 |
| Triple GAN [12] | 16.99 ± 0.36 |
| Δ-GAN (ours) | 16.80 ± 0.42 |

Table 2: Classification accuracy (%) on the MNIST-to-MNIST-transpose dataset.

| Algorithm | n = 100 | n = 1000 | All |
|---|---|---|---|
| DiscoGAN | – | – | 15.00 ± 0.20 |
| Triple GAN | 63.79 ± 0.85 | 84.93 ± 1.63 | 86.70 ± 1.52 |
| Δ-GAN | 83.20 ± 1.88 | 88.98 ± 1.50 | 93.34 ± 1.46 |

4.1 Toy data experiment

We first compare our method with Triple GAN on a toy dataset. We synthesize data by drawing $(x, y) \sim \frac{1}{4}\mathcal{N}(\mu_1, \Sigma_1) + \frac{1}{4}\mathcal{N}(\mu_2, \Sigma_2) + \frac{1}{4}\mathcal{N}(\mu_3, \Sigma_3) + \frac{1}{4}\mathcal{N}(\mu_4, \Sigma_4)$, where $\mu_1 = [0, 1.5]^\top$, $\mu_2 = [-1.5, 0]^\top$, $\mu_3 = [1.5, 0]^\top$, $\mu_4 = [0, -1.5]^\top$, $\Sigma_1 = \Sigma_4 = \begin{pmatrix} 3 & 0 \\ 0 & 0.025 \end{pmatrix}$ and $\Sigma_2 = \Sigma_3 = \begin{pmatrix} 0.025 & 0 \\ 0 & 3 \end{pmatrix}$. We generate 5000 $(x, y)$ pairs for each mixture component. In order to implement Δ-GAN and Triple GAN-s, we model $p_x(x|y)$ and $p_y(y|x)$ as $p_x(x|y) = \int \delta(x - G_x(y, z))\, p(z)\, dz$ and $p_y(y|x) = \int \delta(y - G_y(x, z))\, p(z)\, dz$, where both $G_x$ and $G_y$ are modeled as 4-hidden-layer multilayer perceptrons (MLPs) with 500 hidden units in each layer, and $p(z)$ is a bivariate standard Gaussian distribution. Triple GAN could be implemented by specifying both $p_x(x|y)$ and $p_y(y|x)$ to be distributions with explicit density forms, e.g., Gaussian distributions; however, the performance can be bad since such forms fail to capture the multi-modality of $p_x(x|y)$ and $p_y(y|x)$. Hence, only Triple GAN-s is implemented. Results are shown in Figure 2. The joint distributions $p_x(x, y)$ and $p_y(x, y)$ learned by Δ-GAN successfully match the true joint distribution $p(x, y)$. Triple GAN-s cannot achieve this, and can only guarantee that $\frac{1}{2}(p_x(x, y) + p_y(x, y))$ matches $p(x, y)$. Although this experiment is limited by its simplicity, the results clearly support the advantage of our proposed model over Triple GAN.
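For reference, the synthetic data above can be generated in a few lines of NumPy; the seed and variable names are arbitrary choices.

```python
# A sketch of the Section 4.1 toy data: an equal-weight mixture of four
# axis-aligned Gaussians, 5000 (x, y) pairs per component.
import numpy as np

rng = np.random.default_rng(0)
mus = [np.array([0.0, 1.5]), np.array([-1.5, 0.0]),
       np.array([1.5, 0.0]), np.array([0.0, -1.5])]
covs = [np.diag([3.0, 0.025]), np.diag([0.025, 3.0]),
        np.diag([0.025, 3.0]), np.diag([3.0, 0.025])]
pairs = np.concatenate(
    [rng.multivariate_normal(mu, cov, size=5000) for mu, cov in zip(mus, covs)])
x, y = pairs[:, 0], pairs[:, 1]   # 20000 samples from p(x, y)
```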
4.2 Semi-supervised classification

We evaluate semi-supervised classification on the CIFAR10 dataset with 4000 labels. The labeled data is distributed equally across classes, and the results are averaged over 10 runs with different random splits of the training data. For fair comparison, we follow the publicly available code of Triple GAN and use the same regularization terms and hyperparameter settings as theirs. Results are summarized in Table 1. Our Δ-GAN achieves the best performance among all the competing methods. We also show the ability of Δ-GAN to disentangle classes and styles in Figure 3: Δ-GAN can generate realistic data in a specific class, and the injected noise vector encodes meaningful style patterns such as background and color.

Figure 3: Generated CIFAR10 samples, where each row shares the same label and each column uses the same noise.

4.3 Image-to-image translation

We first evaluate image-to-image translation on the edges2shoes dataset. Results are shown in Figure 4 (bottom). Though DiscoGAN is an unsupervised learning method, it achieves impressive results. However, with supervision provided by 10% paired data, Δ-GAN generally generates more accurate edge details of the shoes. In order to provide quantitative evaluation of translating shoes to edges, we use mean squared error (MSE) as our metric. The MSE of DiscoGAN is 140.1; with 10%, 20% and 100% paired data, the MSE of Δ-GAN is 125.3, 113.0 and 66.4, respectively.

To further demonstrate the importance of providing supervision of domain correspondence, we created a new dataset based on MNIST [34], where the two image domains are the MNIST images and their corresponding transposed ones. As can be seen in Figure 4 (top), Δ-GAN matches images between domains well, while DiscoGAN fails in this task.

Figure 4: Image-to-image translation experiments on the MNIST-to-MNIST-transpose and edges2shoes datasets (Δ-GAN vs. DiscoGAN).

Figure 5: Results on the face-to-attribute-to-face experiment. The 1st row shows the input images; the 2nd row shows the predicted attributes given the input images; the 3rd row shows the generated images given the predicted attributes.

Table 3: Results of P@10 / nDCG@10 for attribute prediction on CelebA and COCO.

| Method | CelebA 1% | CelebA 10% | CelebA 100% | COCO 10% | COCO 50% | COCO 100% |
|---|---|---|---|---|---|---|
| Triple GAN | 40.97 / 50.74 | 62.13 / 73.56 | 70.12 / 79.37 | 32.64 / 35.91 | 34.00 / 37.76 | 35.35 / 39.60 |
| Δ-GAN | 53.21 / 58.39 | 63.68 / 75.22 | 70.37 / 81.47 | 34.38 / 37.91 | 36.72 / 40.39 | 39.05 / 42.86 |

For supporting quantitative evaluation, we trained a classifier on the MNIST dataset; the classification accuracy of this classifier on the test set approaches 99.4%, and it is therefore trustworthy as an evaluation metric. Given an input MNIST image $x$, we first generate a transposed image $\tilde{y}$ using the learned generator, then manually transpose it back to a normal digit $\tilde{y}^\top$, and finally send this new image $\tilde{y}^\top$ to the classifier.
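A minimal sketch of this evaluation pipeline, assuming PyTorch image tensors whose last two dimensions are height and width; `G_y` and `mnist_classifier` are hypothetical handles to the learned generator and the pretrained classifier.

```python
# Evaluation sketch for the MNIST-to-MNIST-transpose task.
import torch

def eval_accuracy(G_y, mnist_classifier, x, labels):
    y_tilde = G_y(x)                      # generated transposed image
    y_back = y_tilde.transpose(-2, -1)    # manually transpose back to a digit
    preds = mnist_classifier(y_back).argmax(dim=1)
    return (preds == labels).float().mean().item()
```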
Results are summarized in Table 2, averaged over 5 runs with different random splits of the training data. Δ-GAN achieves significantly better performance than Triple GAN and DiscoGAN.

4.4 Attribute-conditional image generation

We apply our method to face images from the CelebA dataset. This dataset consists of 202,599 images annotated with 40 binary attributes. We scale and crop the images to 64 × 64 pixels. In order to qualitatively evaluate the learned attribute-conditional image generator and the multi-label classifier, given an input face image, we first use the classifier to predict attributes, and then use the image generator to produce images based on the predicted attributes. Figure 5 shows example results. Both the learned attribute predictor and the image generator provide good results.

We further show another set of image editing experiments in Figure 6. For each subfigure, we use the same set of attributes with different noise vectors to generate images. For example, for the top-right subfigure, all the images in the 1st row were generated based on the following attributes: black hair, female, attractive; we then added the attribute of sunglasses when generating the images in the 2nd row. It is interesting to see that Δ-GAN has great flexibility to adjust the generated images by changing certain input attributes. For instance, by switching on the wearing hat attribute, one can edit the face image to have a hat on the head.

Figure 6: Results on the image editing experiment. In each subfigure, the 2nd row adds one attribute (e.g., pale skin, mouth slightly open, eyeglasses, or wearing hat) to the set used for the 1st row.

In order to demonstrate the scalability of our model to large and complex datasets, we also present results on the COCO dataset. Following [37], we first select a set of 1000 attributes from the caption text in the training set, which includes the most frequent nouns, verbs and adjectives. The images in COCO are scaled and cropped to 64 × 64 pixels. Unlike the case of CelebA face images, the networks need to learn how to handle multiple objects and diverse backgrounds. Results are provided in Figure 7. We can generate reasonably good images based on the predicted attributes. The input and generated images also clearly share the same set of attributes. We also observe diversity in the samples by simply drawing multiple noise vectors while using the same predicted attributes.

Figure 7: Results on the image-to-attribute-to-image experiment (input image, predicted attributes, and generated images).

Precision (P) and normalized Discounted Cumulative Gain (nDCG) are two popular evaluation metrics for multi-label classification problems. Table 3 provides the quantitative results of P@10 and nDCG@10 on CelebA and COCO, where @k means at rank k (see the Appendix for definitions).
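For reference, under the standard binary-relevance definitions of these metrics (the exact variants used here are given in the Appendix), P@k and nDCG@k can be computed as in the following sketch, where `scores` are the classifier's per-attribute scores and `relevant` is the ground-truth binary attribute vector.

```python
# A sketch of P@k and nDCG@k under standard binary-relevance definitions.
import numpy as np

def precision_at_k(scores, relevant, k=10):
    topk = np.argsort(-scores)[:k]            # indices of the k highest scores
    return relevant[topk].sum() / k

def ndcg_at_k(scores, relevant, k=10):
    discounts = np.log2(np.arange(2, k + 2))  # log2(i + 1) for ranks i = 1..k
    topk = np.argsort(-scores)[:k]
    dcg = (relevant[topk] / discounts).sum()
    idcg = (np.sort(relevant)[::-1][:k] / discounts).sum()  # ideal ordering
    return dcg / max(idcg, 1e-12)
```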
For fair comparison, we use the same network architectures for both Triple GAN and Δ-GAN. Δ-GAN consistently provides better results than Triple GAN. On the COCO dataset, our semi-supervised learning approach with 50% labeled data achieves better performance than Triple GAN trained on the full dataset, demonstrating the effectiveness of our approach for semi-supervised joint distribution matching. More results for the above experiments are provided in the Appendix.

5 Conclusion

We have presented the Triangle Generative Adversarial Network (Δ-GAN), a new GAN framework that can be used for semi-supervised joint distribution matching. Our approach learns the bidirectional mappings between two domains with only a few paired samples. We have demonstrated that Δ-GAN may be employed for a wide range of applications. One possible future direction is to combine Δ-GAN with sequence GAN [38] or textGAN [23] to model the joint distribution of image-caption pairs.

Acknowledgements

This research was supported in part by ARO, DARPA, DOE, NGA and ONR.

References

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[2] Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[3] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[4] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[6] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[7] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In ICML, 2017.
[8] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
[9] Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In ICCV, 2017.
[10] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In ICLR, 2017.
[11] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2017.
[12] Chongxuan Li, Kun Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. In NIPS, 2017.
[13] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[14] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[15] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Álvarez. Invertible conditional GANs for image editing. arXiv:1611.06355, 2016.
[16] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
[17] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
[18] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. In ICLR, 2017.
[19] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
[20] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
[21] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
[22] Yizhe Zhang, Zhe Gan, and Lawrence Carin. Generating text via adversarial training. In NIPS Workshop on Adversarial Training, 2016.
[23] Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In ICML, 2017.
[24] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
[25] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017.
[26] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In ICLR, 2017.
[27] Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. Dual supervised learning. In ICML, 2017.
[28] Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[29] Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, and Lawrence Carin. VAE learning via Stein variational gradient descent. In NIPS, 2017.
[30] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017.
[31] Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, and Lawrence Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017.
[32] Yunchen Pu, Liqun Chen, Shuyang Dai, Weiyao Wang, Chunyuan Li, and Lawrence Carin. Symmetric variational autoencoder and connections to adversarial learning. In NIPS, 2017.
[33] Alex Krizhevsky. Learning multiple layers of features from tiny images. Citeseer, 2009.
[34] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[35] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[36] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
[37] Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. Semantic compositional networks for visual captioning. In CVPR, 2017.
[38] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, 2017.