# Controllable Text-to-Image Generation

Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
University of Oxford
{bowen.li, thomas.lukasiewicz}@cs.ox.ac.uk, {xiaojuan.qi, philip.torr}@eng.ox.ac.uk

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes and allow the model to focus on generating and manipulating the subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating the training of an effective generator that is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, a perceptual loss is adopted to reduce the randomness involved in the image generation and to encourage the generator to manipulate only the attributes required by the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms the existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.

## 1 Introduction

Generating realistic images that semantically match given text descriptions is a challenging problem with tremendous potential applications, such as image editing, video games, and computer-aided design. Recently, thanks to the success of generative adversarial networks (GANs) [4, 6, 15] in generating realistic images, text-to-image generation has made remarkable progress [16, 25, 27] by implementing conditional GANs (cGANs) [5, 16, 17], which are able to generate realistic images conditioned on given text descriptions.

However, current generative networks are typically uncontrollable, which means that if users change some words of a sentence, the synthetic image will differ significantly from the one generated from the original text, as shown in Fig. 1. When the given text description is changed (e.g., a colour word), the corresponding visual attributes of the bird are modified, but other unrelated attributes (e.g., the pose and position) are changed as well. This is typically undesirable in real-world applications, where a user wants to further modify the synthetic image to satisfy her preferences.

*Figure 1: Examples of modifying synthetic images using a natural language description (original text: "This bird has a yellow back and rump, gray outer rectrices, and a light gray breast."; modified text: "This bird has a red back and rump, yellow outer rectrices, and a light white breast."). The current state-of-the-art methods [27, 25] generate realistic images, but fail to generate plausible images when we slightly change the text. In contrast, our method allows parts of the image to be manipulated in correspondence to the modified text description while preserving other unrelated content.*

The goal of this paper is to generate images from text, and also to allow the user to manipulate synthetic images using natural language descriptions, in one framework. In particular, we focus on modifying visual attributes (e.g., category, texture, and colour) of objects in the generated images by changing the given text descriptions. To achieve this, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can synthesise high-quality images and also allow the user to manipulate objects' attributes without affecting the generation of other content.

Our ControlGAN contains three novel components. The first component is the word-level spatial and channel-wise attention-driven generator, where an attention mechanism is exploited to allow the generator to synthesise subregions corresponding to the most relevant words.
Our generator follows a multi-stage architecture [25, 28] that synthesises images from coarse to fine and progressively improves their quality. The second component is a word-level discriminator, where the correlation between words and image subregions is explored to disentangle different visual attributes, and which provides the generator with fine-grained training signals related to visual attributes. The third component is the adoption of the perceptual loss [7] in text-to-image generation, which can reduce the randomness involved in the generation and enforce the generator to preserve the visual appearance related to the unmodified text. Furthermore, an extensive analysis is performed, which demonstrates that our method can effectively disentangle different attributes and accurately manipulate parts of the synthetic image without losing diversity. Also, experimental results on the CUB [23] and COCO [10] datasets show that our method outperforms the existing state of the art both qualitatively and quantitatively.

## 2 Related Work

**Text-to-image generation.** Recently, there has been a lot of work and interest in text-to-image generation. Mansimov et al. [11] proposed the AlignDRAW model, which uses an attention mechanism over the words of a caption to draw image patches in multiple stages. Nguyen et al. [13] introduced an approximate Langevin approach to synthesise images from text. Reed et al. [16] first applied the cGAN to generate plausible images conditioned on text descriptions. Zhang et al. [27] decomposed text-to-image generation into several stages, generating images from coarse to fine. However, all of the above approaches mainly focus on generating a new high-quality image from a given text, and do not allow the user to manipulate the generation of specific visual attributes using natural language descriptions.

**Image-to-image translation.** Our work is also closely related to conditional image manipulation methods. Cheng et al. [3] produced high-quality image parsing results from verbal commands. Zhu et al. [31] proposed to change the colour and shape of an object by manipulating latent vectors. Brock et al. [2] introduced a hybrid model using VAEs [9] and GANs, which achieves accurate reconstruction without loss of image quality. Recently, Nam et al. [12] built a model for multi-modal learning on both text descriptions and input images, and proposed a text-adaptive discriminator that utilises word-level text-image matching scores as supervision. However, they adopt a global pooling layer to extract image features, which may lose important fine-grained spatial information. Moreover, the above approaches focus only on image-to-image translation instead of text-to-image generation, which is probably more challenging.
**Attention.** The attention mechanism has shown its effectiveness in various research fields, including image captioning [24, 30], machine translation [1], object detection [14, 29], and visual question answering [26]. It can effectively capture task-relevant information and reduce the interference from less important information. Recently, Xu et al. [25] built the AttnGAN model, which uses word-level spatial attention to guide the generator to focus on the subregions corresponding to the most relevant words. However, spatial attention only correlates words with partial image regions, without taking channel information into account. Also, different channels of features in CNNs may serve different purposes, and it is crucial to avoid treating all channels without distinction, such that the most relevant channels in the visual features can be fully exploited.

*Figure 2: The architecture of our proposed ControlGAN: (a) the word-level spatial and channel-wise attention-driven generator, (b) the discriminator with the word-level discriminator, and (c) the perceptual loss network. In (b), $L_{corre}$ is the correlation loss discussed in Sec. 3.3. In (c), $L_{per}$ is the perceptual loss discussed in Sec. 3.4.*

## 3 Controllable Generative Adversarial Networks

Given a sentence $S$, we aim to synthesise a realistic image $I'$ that semantically aligns with $S$ (see Fig. 2), and also to make this generation process controllable: if $S$ is modified to $S_m$, the synthetic result should semantically match $S_m$ while preserving the text-irrelevant content of $I'$ (shown in Fig. 4). To achieve this, we propose three novel components: 1) a channel-wise attention module, 2) a word-level discriminator, and 3) the adoption of the perceptual loss in text-to-image generation. We elaborate our model as follows.

### 3.1 Architecture

We adopt the multi-stage AttnGAN [25] as our backbone architecture (see Fig. 2). Given a sentence $S$, the text encoder, a pre-trained bidirectional RNN [25], encodes the sentence into a sentence feature $s \in \mathbb{R}^{D}$ with dimension $D$ describing the whole sentence, and word features $w \in \mathbb{R}^{D \times L}$ with length $L$ (i.e., the number of words) and dimension $D$. Following [27], we also apply conditioning augmentation (CA) to $s$. The augmented sentence feature is further concatenated with a random vector $z$ to serve as the input to the first stage. The overall framework generates an image from coarse to fine scale in multiple stages, and, at each stage, the network produces a hidden visual feature $v_i$, which is the input to the corresponding generator $G_i$ to produce a synthetic image. Spatial attention [25] and our proposed channel-wise attention modules take $w$ and $v_i$ as inputs, and output attentive word-context features. These attentive features are further concatenated with the hidden feature $v_i$ and then serve as the input for the next stage.

The generator exploits the attention mechanism by incorporating a spatial attention module [25] and the proposed channel-wise attention module. The spatial attention module [25] can only correlate words with individual spatial locations without taking channel information into account. Thus, we introduce a channel-wise attention module (see Sec. 3.2) to exploit the connection between words and channels. We experimentally find that the channel-wise attention module highly correlates semantically meaningful parts with the corresponding words, while the spatial attention focuses on colour descriptions (see Fig. 6). Therefore, our proposed channel-wise attention module, together with the spatial attention, can help the generator disentangle different visual attributes, and allow it to focus only on the most relevant subregions and channels.
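To make the data flow of Sec. 3.1 concrete, the following is a minimal PyTorch-style sketch of the multi-stage pipeline. It is an illustration only, not the released code: the module names (`text_encoder`, `ca`, `stages`, `spatial_attn`, `channel_attn`, `g_heads`), their interfaces, and the assumption that the attention modules return feature maps shaped like the hidden feature are ours; only the overall wiring (conditioning augmentation, noise concatenation, and attention-guided refinement at stages 2 and 3) is taken from the paper.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Conditioning augmentation (CA) as in StackGAN [27]: map the sentence feature to a
    Gaussian and sample from it. Dimensions here are illustrative."""
    def __init__(self, d_text=256, d_cond=100):
        super().__init__()
        self.fc = nn.Linear(d_text, d_cond * 2)

    def forward(self, s):
        mu, logvar = self.fc(s).chunk(2, dim=1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def generate(text_encoder, ca, stages, spatial_attn, channel_attn, g_heads, tokens, z):
    """One forward pass through the coarse-to-fine pipeline of Sec. 3.1 (hypothetical interfaces)."""
    w, s = text_encoder(tokens)           # word features (B, D, L) and sentence feature (B, D)
    h = torch.cat([ca(s), z], dim=1)      # augmented sentence feature concatenated with noise z
    v = stages[0](h)                      # hidden visual feature v_0: (B, C, H0, W0)
    images = [g_heads[0](v)]              # 64x64 output of the first stage
    for k in (1, 2):                      # stages 2 and 3 use both attention modules
        a_sp = spatial_attn[k](w, v)      # word-context features from spatial attention [25]
        a_ch = channel_attn[k](w, v)      # word-context features from channel-wise attention (Sec. 3.2)
        v = stages[k](torch.cat([v, a_sp, a_ch], dim=1))   # refine the hidden feature
        images.append(g_heads[k](v))      # 128x128, then 256x256 outputs
    return images
```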
*Figure 3: The architecture of (a) the proposed channel-wise attention module and (b) the word-level discriminator.*

### 3.2 Channel-Wise Attention

At the $k$-th stage, the channel-wise attention module (see Fig. 3 (a)) takes two inputs: the word features $w$ and the hidden visual features $v_k \in \mathbb{R}^{C \times (H_k W_k)}$, where $H_k$ and $W_k$ denote the height and width of the feature map at stage $k$. The word features $w$ are first mapped into the same semantic space as the visual features $v_k$ via a perception layer $F_k$, producing $\tilde{w}^k = F_k w$, where $F_k \in \mathbb{R}^{(H_k W_k) \times D}$. Then, we calculate the channel-wise attention matrix $m^k \in \mathbb{R}^{C \times L}$ by multiplying the converted word features $\tilde{w}^k$ and the visual features $v_k$, denoted as $m^k = v_k \tilde{w}^k$. Thus, $m^k$ aggregates correlation values between channels and words across all spatial locations. Next, $m^k$ is normalised by the softmax function to generate the normalised channel-wise attention matrix $\alpha^k$ as

$$\alpha^k_{i,j} = \frac{\exp(m^k_{i,j})}{\sum_{l=0}^{L-1} \exp(m^k_{i,l})}. \tag{1}$$

The attention weight $\alpha^k_{i,j}$ represents the correlation between the $i$-th channel in the visual features $v_k$ and the $j$-th word in the sentence $S$, and a higher value means a larger correlation. Equipped with the channel-wise attention matrix $\alpha^k$, we obtain the final channel-wise attention features $f^k_{\alpha} \in \mathbb{R}^{C \times (H_k W_k)}$, denoted as $f^k_{\alpha} = \alpha^k (\tilde{w}^k)^T$. Each channel in $f^k_{\alpha}$ is a dynamic representation weighted by the correlation between words and the corresponding channels in the visual features. Thus, channels with high correlation values are enhanced, resulting in a high response to the corresponding words, which can facilitate disentangling word attributes into different channels, and also reduce the influence from irrelevant channels by assigning a lower correlation.
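As a concrete reference for Eq. (1) and the feature computation above, here is a minimal PyTorch-style sketch of the channel-wise attention module. The tensor shapes follow the paper, but the class and argument names are illustrative and not taken from the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelWiseAttention(nn.Module):
    """Channel-wise attention of Sec. 3.2 (Eq. 1); a sketch, not the released code."""
    def __init__(self, d_word, hk, wk):
        super().__init__()
        # Perception layer F_k in R^{(Hk*Wk) x D}: maps word features into the visual space.
        self.fk = nn.Linear(d_word, hk * wk, bias=False)

    def forward(self, w, v):
        # w: word features (B, D, L); v: hidden visual features (B, C, Hk*Wk)
        w_tilde = self.fk(w.transpose(1, 2)).transpose(1, 2)   # w~_k = F_k w: (B, Hk*Wk, L)
        m = torch.bmm(v, w_tilde)                              # m^k = v_k w~_k: (B, C, L)
        alpha = F.softmax(m, dim=-1)                           # Eq. (1): normalise over words
        f_alpha = torch.bmm(alpha, w_tilde.transpose(1, 2))    # f^k_alpha = alpha (w~_k)^T: (B, C, Hk*Wk)
        return f_alpha
```

In the generator, `f_alpha` would then be reshaped back to a $C \times H_k \times W_k$ map and concatenated with the hidden feature and the spatial-attention output before being passed to the next stage (Sec. 3.1).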
### 3.3 Word-Level Discriminator

To encourage the generator to modify only parts of the image according to the text, the discriminator should provide the generator with fine-grained training feedback, which can guide the generation of the subregions corresponding to the most relevant words. The text-adaptive discriminator [12] also exploits word-level information in the discriminator, but it adopts a global average pooling layer to output a 1D vector as the image feature, and then calculates the correlation between the image feature and each word. By doing this, the image feature may lose important spatial information, which provides crucial cues for disentangling different visual attributes.

To address this issue, we propose a novel word-level discriminator inspired by [12] to explore the correlation between image subregions and each word; see Fig. 3 (b). Our word-level discriminator takes two inputs: 1) word features $w$, $\bar{w}$ encoded by the text encoder, which follows the same architecture as the one (see Fig. 2 (a)) used in the generator, where $w$ and $\bar{w}$ denote word features encoded from the original text $S$ and from a randomly sampled mismatched text, respectively, and 2) visual features $n_{real}$, $n_{fake}$, both encoded by a GoogLeNet-based [22] image encoder from the real image $I$ and the generated image $I'$, respectively. For simplicity, in the following, we use $n \in \mathbb{R}^{C \times (H W)}$ to represent the visual features $n_{real}$ and $n_{fake}$, and use $w \in \mathbb{R}^{D \times L}$ for both the original and the mismatched word features.

The word-level discriminator contains a perception layer $F$ that is used to align the channel dimension of the visual features $n$ with the word features $w$, denoted as $\tilde{n} = F n$, where $F \in \mathbb{R}^{D \times C}$ is a weight matrix to learn. Then, the word-context correlation matrix $m \in \mathbb{R}^{L \times (H W)}$ can be derived via $m = w^T \tilde{n}$, and is further normalised by the softmax function to get the correlation matrix $\beta$:

$$\beta_{i,j} = \frac{\exp(m_{i,j})}{\sum_{l=0}^{(H W)-1} \exp(m_{i,l})}, \tag{2}$$

where $\beta_{i,j}$ represents the correlation value between the $i$-th word and the $j$-th subregion of the image. Then, the image subregion-aware word features $b \in \mathbb{R}^{D \times L}$ can be obtained by $b = \tilde{n} \beta^T$, which aggregates all spatial information weighted by the word-context correlation matrix $\beta$. Additionally, to further reduce the negative impact of less important words, we adopt the word-level self-attention of [12] to derive a 1D vector $\gamma$ with length $L$ reflecting the relative importance of each word. Then, we repeat $\gamma$ $D$ times to produce $\gamma'$, which has the same size as $b$. Next, $b$ is reweighted by $\gamma'$ to get $\bar{b} = b \circ \gamma'$, where $\circ$ represents element-wise multiplication. Finally, we derive the correlation between the $i$-th word and the whole image as

$$r_i = \sigma\!\left(\frac{(\bar{b}_i)^T w_i}{\lVert \bar{b}_i \rVert \, \lVert w_i \rVert}\right), \tag{3}$$

where $\sigma$ is the sigmoid function, $r_i$ evaluates the correlation between the $i$-th word and the image, and $\bar{b}_i$ and $w_i$ represent the $i$-th column of $\bar{b}$ and $w$, respectively. Therefore, the final correlation value $L_{corre}$ between the image $I$ and the sentence $S$ is calculated by summing all word-context correlations, denoted as $L_{corre}(I, S) = \sum_{i=0}^{L-1} r_i$. By doing so, the generator can receive fine-grained feedback from the word-level discriminator for each visual attribute, which further helps to supervise the generation and manipulation of each subregion independently.
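The word-region correlation of Eqs. (2)–(3) can be summarised by the following PyTorch-style sketch. The word-level self-attention of [12] is abstracted into a single placeholder scoring layer, and all names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordLevelCorrelation(nn.Module):
    """Word-region correlation of Eqs. (2)-(3); `word_score` stands in for the
    word-level self-attention of [12]. Sketch only."""
    def __init__(self, d_word, c_visual):
        super().__init__()
        self.f_layer = nn.Linear(c_visual, d_word, bias=False)  # perception layer F in R^{D x C}
        self.word_score = nn.Linear(d_word, 1)                  # placeholder importance scorer

    def forward(self, w, n):
        # w: word features (B, D, L); n: visual features (B, C, H*W)
        n_tilde = self.f_layer(n.transpose(1, 2)).transpose(1, 2)  # n~ = F n: (B, D, H*W)
        m = torch.bmm(w.transpose(1, 2), n_tilde)                  # m = w^T n~: (B, L, H*W)
        beta = F.softmax(m, dim=-1)                                # Eq. (2): normalise over subregions
        b = torch.bmm(n_tilde, beta.transpose(1, 2))               # b = n~ beta^T: (B, D, L)
        gamma = F.softmax(self.word_score(w.transpose(1, 2)).squeeze(-1), dim=-1)  # word weights (B, L)
        b_bar = b * gamma.unsqueeze(1)                             # broadcasting = "repeat gamma D times"
        r = torch.sigmoid(F.cosine_similarity(b_bar, w, dim=1))    # Eq. (3): (B, L)
        return r.sum(dim=1)                                        # L_corre(I, S) = sum_i r_i
```

In the discriminator, this correlation is computed for real and generated images against matched and mismatched sentences, and enters the objectives of Sec. 3.5.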
### 3.4 Perceptual Loss

Without adding any constraint on text-irrelevant regions (e.g., backgrounds), the generated results can be highly random, and may also fail to be semantically consistent with the other content. To mitigate this randomness, we adopt the perceptual loss [7] based on a 16-layer VGG network [21] pre-trained on the ImageNet dataset [18]. The network is used to extract semantic features from both the generated image $I'$ and the real image $I$, and the perceptual loss is defined as

$$L_{per}(I', I) = \frac{1}{C_i H_i W_i} \lVert \phi_i(I') - \phi_i(I) \rVert_2^2, \tag{4}$$

where $\phi_i(I)$ is the activation of the $i$-th layer of the VGG network, and $C_i$, $H_i$, and $W_i$ are the number of channels, the height, and the width of the feature map, respectively. To our knowledge, we are the first to apply the perceptual loss [7] in controllable text-to-image generation, which can reduce the randomness involved in the image generation by matching in feature space.

### 3.5 Objective Functions

The generator and discriminator are trained alternately by minimising the generator loss $L_G$ and the discriminator loss $L_D$, respectively.

**Generator objective.** The generator loss $L_G$ in Eq. (5) contains an adversarial loss $L_{G_k}$, a text-image correlation loss $L_{corre}$, a perceptual loss $L_{per}$, and a text-image matching loss $L_{DAMSM}$ [25]:

$$L_G = \sum_{k=1}^{K} \Big( L_{G_k} + \lambda_2 L_{per}(I'_k, I_k) + \lambda_3 \log\big(1 - L_{corre}(I'_k, S)\big) \Big) + \lambda_4 L_{DAMSM}, \tag{5}$$

where $K$ is the number of stages, $I_k$ is the real image sampled from the true image distribution $P_{data}$ at stage $k$, $I'_k$ is the generated image at the $k$-th stage sampled from the model distribution $P_{G_k}$, and $\lambda_2$, $\lambda_3$, $\lambda_4$ are hyper-parameters controlling the different losses. $L_{per}$ is the perceptual loss described in Sec. 3.4, which constrains the generation process to reduce the randomness, $L_{DAMSM}$ [25] measures the text-image matching score based on cosine similarity, and $L_{corre}$ reflects the correlation between the generated image and the given text description, taking spatial information into account.

The adversarial loss $L_{G_k}$ is composed of the unconditional and conditional adversarial losses shown in Eq. (6): the unconditional adversarial loss is applied to make the synthetic image look real, and the conditional adversarial loss is utilised to make the generated image match the given text $S$:

$$L_{G_k} = \underbrace{-\tfrac{1}{2}\, \mathbb{E}_{I'_k \sim P_{G_k}}\big[\log D_k(I'_k)\big]}_{\text{unconditional adversarial loss}} \; \underbrace{-\; \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim P_{G_k}}\big[\log D_k(I'_k, S)\big]}_{\text{conditional adversarial loss}}. \tag{6}$$

**Discriminator objective.** The final loss function for training the discriminator $D$ is defined as

$$L_D = \sum_{k=1}^{K} \Big( L_{D_k} + \lambda_1 \big( \log\big(1 - L_{corre}(I_k, S)\big) + \log L_{corre}(I_k, \bar{S}) \big) \Big), \tag{7}$$

where $L_{corre}$ is the correlation loss determining whether word-related visual attributes exist in the image (see Sec. 3.3), $\bar{S}$ is a mismatched text description that is randomly sampled from the dataset and is irrelevant to $I_k$, and $\lambda_1$ is a hyper-parameter controlling the importance of the additional losses. The adversarial loss $L_{D_k}$ contains two components: the unconditional adversarial loss determines whether the image is real, and the conditional adversarial loss determines whether the given image matches the text description $S$:

$$L_{D_k} = \underbrace{-\tfrac{1}{2}\, \mathbb{E}_{I_k \sim P_{data}}\big[\log D_k(I_k)\big] - \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim P_{G_k}}\big[\log\big(1 - D_k(I'_k)\big)\big]}_{\text{unconditional adversarial loss}} \; \underbrace{-\; \tfrac{1}{2}\, \mathbb{E}_{I_k \sim P_{data}}\big[\log D_k(I_k, S)\big] - \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim P_{G_k}}\big[\log\big(1 - D_k(I'_k, S)\big)\big]}_{\text{conditional adversarial loss}}. \tag{8}$$

## 4 Experiments

To evaluate the effectiveness of our approach, we conduct extensive experiments on the CUB bird [23] and the MS COCO [10] datasets. We compare with two state-of-the-art GAN methods for text-to-image generation, StackGAN++ [28] and AttnGAN [25]. Results for the state of the art are reproduced based on the code released by the authors.

### 4.1 Datasets

Our method is evaluated on the CUB bird [23] and the MS COCO [10] datasets. The CUB dataset contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. The COCO dataset contains 82,783 training images and 40,504 validation images, and each image has 5 corresponding text descriptions. We preprocess these two datasets based on the method introduced in [27].

### 4.2 Implementation

There are three stages ($K = 3$) in our ControlGAN generator, following [25]. The three scales are 64 × 64, 128 × 128, and 256 × 256, and spatial and channel-wise attention are applied at stages 2 and 3. The text encoder is a pre-trained bidirectional LSTM [20] that encodes the given text description into a sentence feature with dimension 256 and word features with length 18 and dimension 256. For the perceptual loss, we compute the content loss at layer relu2_2 of VGG-16 [21] pre-trained on ImageNet [18]. The whole network is trained using the Adam optimiser [8] with a learning rate of 0.0002. The hyper-parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are set to 0.5, 1, 1, and 5, respectively, for both datasets.
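As a sketch of how the losses of Eqs. (4)–(5) fit together with the hyper-parameters of Sec. 4.2, the snippet below shows the perceptual loss taken at relu2_2 of VGG-16 and the per-stage generator objective. The helper callables (`adversarial_losses`, `correlation`, `damsm_loss`) are placeholders for $L_{G_k}$, $L_{corre}$, and $L_{DAMSM}$ [25]; their interfaces, and the assumption that `correlation` returns a scalar tensor in (0, 1), are ours and not taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """Perceptual loss of Eq. (4) at relu2_2 of an ImageNet-pretrained VGG-16 (Sec. 4.2)."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:9].eval()  # indices 0-8 end at relu2_2
        for p in vgg.parameters():
            p.requires_grad = False                              # VGG is kept fixed
        self.vgg = vgg

    def forward(self, fake, real):
        # Batch mean of ||phi(I') - phi(I)||_2^2 / (C*H*W), i.e. Eq. (4) averaged over the batch.
        return F.mse_loss(self.vgg(fake), self.vgg(real))

def generator_objective(fakes, reals, sent, adversarial_losses, correlation, damsm_loss,
                        per_loss, lam2=1.0, lam3=1.0, lam4=5.0):
    """Eq. (5): sum over the K stages of L_Gk + lam2*L_per + lam3*log(1 - L_corre),
    plus lam4*L_DAMSM on the final-stage image (as in AttnGAN [25]). Sketch only."""
    loss = 0.0
    for k, (fake_k, real_k) in enumerate(zip(fakes, reals)):
        loss = loss + adversarial_losses[k](fake_k, sent)       # L_Gk, Eq. (6)
        loss = loss + lam2 * per_loss(fake_k, real_k)           # perceptual term, Eq. (4)
        corr = correlation(fake_k, sent)                        # assumed scalar tensor in (0, 1)
        loss = loss + lam3 * torch.log(torch.clamp(1.0 - corr, min=1e-8))
    return loss + lam4 * damsm_loss(fakes[-1], sent)            # text-image matching term
```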
*Table 1: Quantitative comparison: Inception Score (IS), R-precision (Top-1 Acc), and L2 reconstruction error of the state of the art and ControlGAN on the CUB and COCO datasets.*

| Method | CUB IS | CUB Top-1 Acc (%) | CUB L2 error | COCO IS | COCO Top-1 Acc (%) | COCO L2 error |
|---|---|---|---|---|---|---|
| StackGAN++ | 4.04 ± .05 | 45.28 ± 3.72 | 0.29 | 8.30 ± .10 | 72.83 ± 3.17 | 0.32 |
| AttnGAN | 4.36 ± .03 | 67.82 ± 4.43 | 0.26 | 25.89 ± .47 | 85.47 ± 3.69 | 0.40 |
| Ours | 4.58 ± .09 | 69.33 ± 3.23 | 0.18 | 24.06 ± .60 | 82.43 ± 2.43 | 0.17 |

### 4.3 Comparison with State of the Art

**Quantitative results.** We adopt the Inception Score [19] to evaluate the quality and diversity of the generated images. However, as the Inception Score cannot reflect the relevance between an image and a text description, we also utilise R-precision [25] to measure the correlation between a generated image and its corresponding text, and compare the top-1 text-to-image retrieval accuracy (Top-1 Acc) on the CUB and COCO datasets following [12]. Quantitative results are shown in Table 1: our method achieves better IS and R-precision values on the CUB dataset than the state of the art, and has competitive performance on the COCO dataset. This indicates that our method can generate higher-quality images with better diversity, which semantically align with the text descriptions. To further evaluate whether the model can generate controllable results, we compute the L2 reconstruction error [12] between the image generated from the original text and the one generated from the modified text, shown in Table 1. Compared with the other methods, ControlGAN achieves a significantly lower reconstruction error, which demonstrates that our method better preserves the content of the image generated from the original text.

**Qualitative results.** We show qualitative comparisons in Fig. 4. As can be seen, by modifying the given text descriptions, our approach can accurately manipulate specific visual attributes. Also, our method can even handle out-of-distribution queries, e.g., a red zebra on a river, shown in the last two columns of Fig. 4. All of the above indicates that our approach can manipulate different visual attributes independently, which demonstrates its effectiveness in disentangling visual attributes for text-to-image generation.

Fig. 5 shows the visual comparison between ControlGAN, AttnGAN [25], and StackGAN++ [28]. It can be observed that when the text is modified, the two compared approaches are more likely to generate new content, or to change visual attributes that are not relevant to the modified text. For instance, as shown in the first two columns, when we modify the colour attributes, StackGAN++ changes the pose of the bird, and AttnGAN generates a new background. In contrast, our approach is able to accurately manipulate the parts of the image corresponding to the modified text, while preserving the visual attributes related to the unchanged text.

On the COCO dataset, our model again achieves much better results than the compared methods, as shown in Fig. 5. For example, as shown in the last four columns, the compared approaches cannot preserve the shape of objects and even fail to generate reasonable images. Generally speaking, the results on COCO are not as good as on the CUB dataset. We attribute this to the small number of text-image pairs per category and the more abstract captions in the dataset. Although there are many categories in COCO, each category has only a small number of examples, and the captions focus mainly on the category of objects rather than on detailed descriptions, which makes text-to-image generation more challenging.
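For reference, the L2 reconstruction error reported in Table 1 compares the image generated from the original text with the one generated from the modified text. The exact normalisation used in [12] is not restated in this paper, so the per-pixel mean below is an assumption; the snippet is only meant to make the metric concrete.

```python
import torch

def l2_reconstruction_error(img_orig: torch.Tensor, img_mod: torch.Tensor) -> float:
    """Mean squared pixel difference between images generated from the original and the
    modified text. img_orig, img_mod: (N, 3, H, W) tensors scaled to [0, 1]; averaging
    per pixel and over the N test pairs is our assumption, not a detail from the paper."""
    per_image = ((img_orig - img_mod) ** 2).flatten(start_dim=1).mean(dim=1)
    return per_image.mean().item()
```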
*Figure 4: Qualitative results on the CUB and COCO datasets. Odd-numbered columns show the original text and even-numbered ones the modified text. The last two columns are an out-of-distribution case.*

*Figure 5: Qualitative comparison of StackGAN++ [28], AttnGAN [25], and our method on the CUB and COCO datasets. Odd-numbered columns show the original text and even-numbered ones the modified text.*

### 4.4 Component Analysis

**Effectiveness of channel-wise attention.** Our model implements channel-wise attention in the generator, together with the spatial attention, to generate realistic images. To better understand the effectiveness of the attention mechanisms, we visualise the intermediate results and the corresponding attention maps at different stages. We experimentally find that the channel-wise attention correlates closely with the semantic parts of objects, while the spatial attention focuses mainly on colour descriptions. Fig. 6 shows several channels of feature maps that correlate with different semantics: our channel-wise attention module assigns large correlation values to channels that are semantically related to words describing parts of a bird. This phenomenon is further verified by the ablation study shown in Fig. 7 (left side). Without channel-wise attention, our model fails to generate controllable results when we modify the text related to parts of a bird. In contrast, our model with channel-wise attention can generate better controllable results.

*Figure 6: Top: visualisation of feature channels at stage 3; the number at the top-right corner is the channel number, and the word with the highest correlation $\alpha_{i,j}$ in Eq. (1) with that channel is shown under each image. Bottom: spatial attention produced at stage 3.*

**Effectiveness of the word-level discriminator.** To verify the effectiveness of the word-level discriminator, we first conduct an ablation study in which our model is trained without the word-level discriminator, shown in Fig. 7 (right side), and then construct a baseline model by replacing our discriminator with
a text-adaptive discriminator [12], which also explores the correlation between image features and words. Visual comparisons are shown in Fig. 8 (right side). We can easily observe that the compared baseline fails to manipulate the synthetic images. For example, as shown in the first two columns, the bird generated from the modified text has a totally different shape, and the background has been changed as well. This is due to the fact that the text-adaptive discriminator [12] uses a global pooling layer to extract image features, which may lose important spatial information.

*Figure 7: Left: ablation study of the channel-wise attention ("Ours without channel-wise attention"); right: ablation study of the word-level discriminator ("Ours without word-level discriminator").*

*Figure 8: Left: ablation study of the perceptual loss [7] ("Ours without perceptual loss"); right: comparison between our word-level discriminator and the text-adaptive discriminator [12] ("Ours with text-adaptive discriminator").*

**Effectiveness of the perceptual loss.** Furthermore, we conduct an ablation study in which our model is trained without the perceptual loss, shown in Fig. 8 (left side). Without the perceptual loss, images generated from the modified text struggle to preserve content related to the unmodified text, which indicates that the perceptual loss can introduce a stricter semantic constraint on the image generation and help reduce the involved randomness.

## 5 Conclusion

We have proposed a controllable generative adversarial network (ControlGAN), which can generate and manipulate the generation of images based on natural language descriptions. Our ControlGAN can successfully disentangle different visual attributes and allow parts of the synthetic image to be manipulated accurately, while preserving the generation of other content. Three novel components are introduced in our model: 1) the word-level spatial and channel-wise attention-driven generator effectively disentangles different visual attributes, 2) the word-level discriminator provides the generator with fine-grained training signals related to each visual attribute, and 3) the adoption of the perceptual loss reduces the randomness involved in the generation and enforces the generator to reconstruct content related to the unmodified text. Extensive experimental results demonstrate the effectiveness and superiority of our method on two benchmark datasets.

**Acknowledgements.** This work was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, the AXA Research Fund, the ERC grant ERC-2012-AdG 321162-HELIOS, the EPSRC grant Seebibyte EP/M013774/1, and the EPSRC/MURI grant EP/N019474/1. We would also like to acknowledge the Royal Academy of Engineering and FiveAI.

## References

[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] A. Brock, T. Lim, J. M. Ritchie, and N. Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.

[3] M.-M. Cheng, S. Zheng, W.-Y. Lin, V. Vineet, P. Sturgess, N. Crook, N. J. Mitra, and P. Torr. ImageSpirit: Verbal guided image parsing. ACM Transactions on Graphics (TOG), 34(1):3, 2014.

[4] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486–1494, 2015.

[5] H. Dong, S. Yu, C. Wu, and Y. Guo. Semantic image synthesis via adversarial learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 5706–5714, 2017.

[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.

[7] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016.

[8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[9] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.

[10] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, pages 740–755. Springer, 2014.

[11] E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov. Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.

[12] S. Nam, Y. Kim, and S. J. Kim. Text-adaptive generative adversarial networks: Manipulating images with natural language. In Advances in Neural Information Processing Systems, pages 42–51, 2018.

[13] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4467–4477, 2017.

[14] A. Oliva, A. Torralba, M. S. Castelhano, and J. M. Henderson. Top-down control of visual attention in object detection. In Proceedings of the International Conference on Image Processing, volume 1, pages 253–256. IEEE, 2003.

[15] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

[16] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

[17] S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In Advances in Neural Information Processing Systems, pages 217–225, 2016.

[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

[19] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
[20] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.

[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.

[23] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.

[24] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.

[25] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1316–1324, 2018.

[26] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29, 2016.

[27] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5907–5915, 2017.

[28] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. StackGAN++: Realistic image synthesis with stacked generative adversarial networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1947–1962, 2018.

[29] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang. Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 714–722, 2018.

[30] Z. Zhang, Y. Xie, F. Xing, M. McGough, and L. Yang. MDNet: A semantically and visually interpretable medical image diagnosis network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6428–6436, 2017.

[31] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of the European Conference on Computer Vision, pages 597–613. Springer, 2016.