# FLIPOUT: EFFICIENT PSEUDO-INDEPENDENT WEIGHT PERTURBATIONS ON MINI-BATCHES

Published as a conference paper at ICLR 2018.

Yeming Wen, Paul Vicol, Jimmy Ba (University of Toronto, Vector Institute) {wenyemin,pvicol,jba}@cs.toronto.edu
Dustin Tran (Columbia University, Google) trandustin@google.com
Roger Grosse (University of Toronto, Vector Institute) rgrosse@cs.toronto.ca

## ABSTRACT

Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.

## 1 INTRODUCTION

Stochasticity is a key component of many modern neural net architectures and training algorithms. The most widely used regularization methods are based on randomly perturbing a network's computations (Srivastava et al., 2014; Ioffe & Szegedy, 2015). Bayesian neural nets can be trained with variational inference by perturbing the weights (Graves, 2011; Blundell et al., 2015). Weight noise was found to aid exploration in reinforcement learning (Plappert et al., 2017; Fortunato et al., 2017). Evolution strategies (ES) minimizes a black-box objective by evaluating many weight perturbations in parallel, with impressive performance on robotic control tasks (Salimans et al., 2017).

Some methods perturb a network's activations (Srivastava et al., 2014; Ioffe & Szegedy, 2015), while others perturb its weights (Graves, 2011; Blundell et al., 2015; Plappert et al., 2017; Fortunato et al., 2017; Salimans et al., 2017). Stochastic weights are appealing in the context of regularization or exploration because they can be viewed as a form of posterior uncertainty about the parameters. However, compared with stochastic activations, they have a serious drawback: because a network typically has many more weights than units, it is very expensive to compute and store separate weight perturbations for every example in a mini-batch. Therefore, stochastic weight methods are typically done with a single sample per mini-batch. In contrast, activations are easy to sample independently for different training examples within a mini-batch. This allows the training algorithm to see orders of magnitude more perturbations in a given amount of time, and the variance of the stochastic gradients decays as 1/N, where N is the mini-batch size. We believe this is the main reason stochastic activations are far more prevalent than stochastic weights for neural net regularization.
In other settings, such as Bayesian neural nets and evolution strategies, one is forced to use weight perturbations and live with the resulting inefficiency.

In order to achieve the ideal 1/N variance reduction, the gradients within a mini-batch need not be independent, but merely uncorrelated. In this paper, we present flipout, an efficient method for decorrelating the gradients between different examples without biasing the gradient estimates. Flipout applies to any perturbation distribution that factorizes by weight and is symmetric around 0 (including DropConnect, multiplicative Gaussian perturbations, evolution strategies, and variational Bayesian neural nets), and to many architectures, including fully connected nets, convolutional nets, and RNNs.

In Section 3, we show that flipout gives unbiased stochastic gradients, and discuss its efficient vectorized implementation, which incurs only a factor-of-2 computational overhead compared with shared perturbations. We then analyze the asymptotics of gradient variance with and without flipout, demonstrating strictly reduced variance. In Section 4, we measure the variance reduction effects on a variety of architectures. Empirically, flipout gives the ideal 1/N variance reduction in all architectures we have investigated, just as if the perturbations were done fully independently for each training example. We demonstrate speedups in training time in a large batch regime. We also use flipout to regularize the recurrent connections in LSTMs, and show that it outperforms methods based on dropout. Finally, we use flipout to vectorize evolution strategies (Salimans et al., 2017), allowing a single GPU to handle the same throughput as 40 CPU cores using existing approaches; this corresponds to a factor-of-4 cost reduction on Amazon Web Services.

## 2 BACKGROUND

### 2.1 WEIGHT PERTURBATIONS

We use the term weight perturbation to refer to a class of methods which sample the weights of a neural network stochastically at training time. More precisely, let $f(x, W)$ denote the output of a network with weights $W$ on input $x$. The weights are sampled from a distribution $q_\theta$ parameterized by $\theta$. We aim to minimize the expected loss $\mathbb{E}_{(x,y)\sim\mathcal{D},\,W\sim q_\theta}[\mathcal{L}(f(x, W), y)]$, where $\mathcal{L}$ is a loss function and $\mathcal{D}$ denotes the data distribution. The distribution $q_\theta$ can often be described in terms of perturbations: $W = \overline{W} + \Delta W$, where $\overline{W}$ are the mean weights (typically represented explicitly as part of $\theta$) and $\Delta W$ is a stochastic perturbation. We now give some specific examples of weight perturbations.

Gaussian perturbations. If the entries $\Delta W_{ij}$ are sampled independently from Gaussian distributions with variance $\sigma^2_{ij}$, this corresponds to the distribution $W_{ij} \sim \mathcal{N}(\overline{W}_{ij}, \sigma^2_{ij})$. Using the reparameterization trick (Kingma & Welling, 2014), this can be rewritten as $W_{ij} = \overline{W}_{ij} + \sigma_{ij}\epsilon_{ij}$, where $\epsilon_{ij} \sim \mathcal{N}(0, 1)$; this representation allows the gradients to be computed using backprop. A variant of this is multiplicative Gaussian perturbation, where the perturbations are scaled according to the weights: $W_{ij} \sim \mathcal{N}(\overline{W}_{ij}, \sigma^2_{ij}\overline{W}^2_{ij})$, or $W_{ij} = \overline{W}_{ij}(1 + \sigma_{ij}\epsilon_{ij})$, where again $\epsilon_{ij} \sim \mathcal{N}(0, 1)$. Multiplicative perturbations can be more effective than additive ones because the information content of the weights is the same regardless of their scale.

DropConnect. DropConnect (Wan et al., 2013) is a regularization method inspired by dropout (Srivastava et al., 2014) which randomly zeros out a random subset of the weights. In the case of a 50% drop rate, this can be thought of as a weight perturbation where $\overline{W} = W/2$ and each entry $\Delta W_{ij}$ is sampled uniformly from $\pm\overline{W}_{ij}$.
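To make these perturbation distributions concrete, the following is a minimal NumPy sketch (our own variable names, not code from the paper) that draws additive Gaussian, multiplicative Gaussian, and 50%-drop DropConnect perturbations for a single weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_gaussian(W_mean, sigma):
    # W_ij = Wbar_ij + sigma_ij * eps_ij, with eps_ij ~ N(0, 1)
    return W_mean + sigma * rng.standard_normal(W_mean.shape)

def multiplicative_gaussian(W_mean, sigma):
    # W_ij = Wbar_ij * (1 + sigma_ij * eps_ij): the perturbation scales with the weight
    return W_mean * (1.0 + sigma * rng.standard_normal(W_mean.shape))

def dropconnect(W, drop_prob=0.5):
    # Zero out a random subset of the weights. With a 50% drop rate this is
    # the symmetric perturbation Wbar +/- Wbar with Wbar = W / 2.
    mask = (rng.random(W.shape) >= drop_prob).astype(W.dtype)
    return W * mask

W = rng.standard_normal((4, 3))        # unperturbed weights
sigma = 0.1 * np.ones_like(W)          # per-weight noise scale
samples = [additive_gaussian(W, sigma),
           multiplicative_gaussian(W, sigma),
           dropconnect(W)]
```

In all three cases the perturbation of each weight is independent of the others and symmetric around zero, which is exactly the setting flipout requires in Section 3.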
Variational Bayesian neural nets. Rather than fitting a point estimate of a neural net's weights, one can adopt the Bayesian approach of putting a prior distribution $p(W)$ over the weights and approximating the posterior distribution $p(W\,|\,\mathcal{D}) \propto p(W)\,p(\mathcal{D}\,|\,W)$, where $\mathcal{D}$ denotes the observed data. Graves (2011) observed that one could fit an approximation $q_\theta(W) \approx p(W\,|\,\mathcal{D})$ using variational inference; in particular, one could maximize the evidence lower bound (ELBO) with respect to $\theta$:

$$\mathcal{F}(\theta) = \mathbb{E}_{W \sim q_\theta}\left[\log p(\mathcal{D}\,|\,W)\right] - D_{\mathrm{KL}}(q_\theta \,\|\, p).$$

The negation of the first term can be viewed as the description length of the data, and the second term can be viewed as the description length of the weights (Hinton & Van Camp, 1993). Graves (2011) observed that if $q$ is chosen to be a factorial Gaussian, sampling from $q_\theta$ can be thought of as Gaussian weight perturbation where the variance is adapted to maximize $\mathcal{F}$. Blundell et al. (2015) later combined this insight with the reparameterization trick (Kingma & Welling, 2014) to derive unbiased stochastic estimates of the gradient of $\mathcal{F}$.

Evolution strategies. ES (Rechenberg & Eigen, 1973) is a family of black-box optimization algorithms which use weight perturbations to search for model parameters. ES was recently proposed as an alternative reinforcement learning algorithm (Schmidhuber et al., 2007; Salimans et al., 2017). In each iteration, ES generates a collection of weight perturbations as candidates and evaluates each according to a fitness function $F$. The gradient of the parameters can be estimated from the fitness function evaluations. ES is highly parallelizable, because perturbations can be generated and evaluated independently by different workers. Suppose $M$ is the number of workers, $W$ is the model parameter, $\sigma$ is the standard deviation of the perturbations, $\alpha$ is the learning rate, $F$ is the objective function, and $\Delta W_m$ is the Gaussian noise generated at worker $m$. The ES algorithm tries to maximize $\mathbb{E}_{\Delta W}[F(W + \Delta W)]$, where $\Delta W \sim \mathcal{N}(0, \sigma^2 I)$. The gradient of the objective function and the update rule are given by:

$$\nabla_W \mathbb{E}_{\Delta W}\left[F(W + \Delta W)\right] = \frac{1}{\sigma^2}\,\mathbb{E}_{\Delta W}\left[F(W + \Delta W)\,\Delta W\right],
\qquad
W^{t+1} = W^{t} + \alpha \frac{1}{M\sigma^2} \sum_{m=1}^{M} F(W^{t} + \Delta W_m)\,\Delta W_m. \tag{1}$$

### 2.2 LOCAL REPARAMETERIZATION TRICK

In some cases, it is possible to reformulate weight perturbations as activation perturbations, thereby allowing them to be efficiently computed fully independently for different examples in a mini-batch. In particular, Kingma et al. (2015) showed that for fully connected networks with no weight sharing, unbiased stochastic gradients could be computed without explicit weight perturbations using the local reparameterization trick (LRT). For example, suppose $X$ is the input mini-batch, $W$ is the weight matrix, and $B = XW$ is the matrix of activations. The LRT samples the activations $B$ rather than the weights $W$. In the case of a Gaussian posterior, the LRT is given by:

$$q_\theta(W_{i,j}) = \mathcal{N}(\mu_{i,j}, \sigma^2_{i,j}) \;\;\forall\, W_{i,j} \in W
\;\;\Longrightarrow\;\;
q_\theta(b_{m,j}\,|\,X) = \mathcal{N}(\gamma_{m,j}, \delta_{m,j}),
\quad
\gamma_{m,j} = \sum_i x_{m,i}\,\mu_{i,j},
\quad
\delta_{m,j} = \sum_i x^2_{m,i}\,\sigma^2_{i,j}, \tag{2}$$

where $b_{m,j}$ denotes the perturbed activations. While the exact LRT applies only to fully connected networks with no weight sharing, Kingma et al. (2015) also introduced variational dropout, a regularization method inspired by the LRT which performs well empirically even for architectures the LRT does not apply to.
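As a concrete illustration of Eqn. 2, here is a minimal NumPy sketch of the LRT for a single fully connected layer with a factorial Gaussian posterior; the variable names (X, mu, sigma) are our own, and this is a sketch of the idea rather than a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lrt_dense(X, mu, sigma):
    """Local reparameterization trick for one fully connected layer.

    X:     (N, d_in) mini-batch of inputs
    mu:    (d_in, d_out) posterior means of the weights
    sigma: (d_in, d_out) posterior standard deviations of the weights
    Returns pre-activations B of shape (N, d_out), sampled independently per
    example: b_{m,j} ~ N(gamma_{m,j}, delta_{m,j}) as in Eqn. 2.
    """
    gamma = X @ mu                    # per-activation mean
    delta = (X ** 2) @ (sigma ** 2)   # per-activation variance
    eps = rng.standard_normal(gamma.shape)
    return gamma + np.sqrt(delta) * eps

X = rng.standard_normal((8, 5))
mu = rng.standard_normal((5, 3))
sigma = 0.1 * np.ones((5, 3))
B = lrt_dense(X, mu, sigma)
```

Because the noise is sampled per row of the mini-batch, the activation perturbations are fully independent across examples, which is what gives the LRT its 1/N variance reduction.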
### 2.3 OTHER RELATED WORK

Control variates are another general class of strategies for variance reduction, both for black-box optimization (Williams, 1992; Ranganath et al., 2014; Mnih & Gregor, 2014) and for gradient-based optimization (Roeder et al., 2016; Miller et al., 2017; Louizos et al., 2017). Control variates are complementary to flipout, so one could potentially combine these techniques to achieve a larger variance reduction. We also note that the fastfood transform (Le et al., 2013) is based on similar mathematical techniques. However, whereas fastfood is used to approximately multiply by a large Gaussian matrix, flipout preserves the random matrix's distribution and instead decorrelates the gradients between different samples.

## 3 METHODS

As described above, weight perturbation algorithms suffer from high variance of the gradient estimates because all training examples in a mini-batch share the same perturbation. More precisely, sharing the perturbation induces correlations between the gradients, implying that the variance cannot be eliminated by averaging. In this section, we introduce flipout, an efficient way to perturb the weights quasi-independently within a mini-batch.

### 3.1 FLIPOUT

We make two assumptions about the weight distribution $q_\theta$: (1) the perturbations of different weights are independent; and (2) the perturbation distribution is symmetric around zero. These are nontrivial constraints, but they encompass important use cases: independent Gaussian perturbations (e.g. as used in variational BNNs and ES) and DropConnect with drop probability 0.5. We observe that, under these assumptions, the perturbation distribution is invariant to elementwise multiplication by a random sign matrix (i.e. a matrix whose entries are $\pm 1$). In the following, we denote elementwise multiplication by $\circ$.

Observation 1. Let $q_\theta$ be a perturbation distribution that satisfies the above assumptions, and let $\widehat{\Delta W} \sim q_\theta$. Let $E$ be a random sign matrix that is independent of $\widehat{\Delta W}$. Then $\Delta W = \widehat{\Delta W} \circ E$ is identically distributed to $\widehat{\Delta W}$. Furthermore, the loss gradients computed using $\Delta W$ are identically distributed to those computed using $\widehat{\Delta W}$.

Flipout exploits this fact by using a base perturbation $\widehat{\Delta W}$ shared by all examples in the mini-batch, and multiplying it by a different rank-one sign matrix for each example:

$$\Delta W_n = \widehat{\Delta W} \circ r_n s_n^\top, \tag{3}$$

where the subscript denotes the index within the mini-batch, and $r_n$ and $s_n$ are random vectors whose entries are sampled uniformly from $\pm 1$. According to Observation 1, the marginal distribution over gradients computed for individual training examples will be identical to the distribution computed using shared weight perturbations. Consequently, flipout yields an unbiased estimator for the loss gradients. However, by decorrelating the gradients between different training examples, we can achieve much lower variance updates when averaging over a mini-batch.

Vectorization. The advantage of flipout over explicit perturbations is that computations on a mini-batch can be written in terms of matrix multiplications. This enables efficient implementations on GPUs and modern accelerators such as the Tensor Processing Unit (TPU) (Jouppi et al., 2017). Let $x_n$ denote the activations in one layer of a neural net for the $n$th example. The next layer's activations are given by:

$$y_n = \phi\!\left(W_n x_n\right)
= \phi\!\left(\left(\overline{W} + \widehat{\Delta W} \circ r_n s_n^\top\right) x_n\right)
= \phi\!\left(\overline{W} x_n + \left(\widehat{\Delta W}\,(x_n \circ s_n)\right) \circ r_n\right),$$

where $\phi$ denotes the activation function.
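As a concrete illustration of Eqn. 3, the sketch below applies per-example sign modulations to a shared base perturbation in an explicit loop, and checks that it matches the batched two-matrix-multiplication form derived in the next paragraph (Eqn. 4). It is a minimal NumPy sketch with our own variable names (X, W_mean, W_hat, S, R), written in the row-vector convention of the vectorized equation; it is not reference code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d_in, d_out = 8, 5, 3
X = rng.standard_normal((N, d_in))                  # mini-batch of activations
W_mean = rng.standard_normal((d_in, d_out))         # mean weights
W_hat = 0.1 * rng.standard_normal((d_in, d_out))    # base perturbation, shared by the batch

# Random +/-1 sign vectors: one row of S and R per example.
S = rng.choice([-1.0, 1.0], size=(N, d_in))
R = rng.choice([-1.0, 1.0], size=(N, d_out))

phi = np.tanh

# Explicit per-example loop: Delta_W_n = W_hat * outer(s_n, r_n)  (Eqn. 3)
Y_loop = np.empty((N, d_out))
for n in range(N):
    delta_W_n = W_hat * np.outer(S[n], R[n])        # rank-one sign modulation
    Y_loop[n] = phi(X[n] @ (W_mean + delta_W_n))

# Batched form: two matrix multiplications for the whole mini-batch (Eqn. 4)
Y_vec = phi(X @ W_mean + ((X * S) @ W_hat) * R)

assert np.allclose(Y_loop, Y_vec)
```

The equivalence check makes the key point explicit: although every example sees a different weight perturbation, only one extra matrix multiplication is needed for the whole mini-batch.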
To vectorize these computations, we define matrices $R$ and $S$ whose rows correspond to the random sign vectors $r_n$ and $s_n$ for all examples in the mini-batch. The above equation is vectorized as:

$$Y = \phi\!\left(X\overline{W} + \left((X \circ S)\,\widehat{\Delta W}\right) \circ R\right). \tag{4}$$

This defines the forward pass. Because $R$ and $S$ are sampled independently of $\overline{W}$ and $\widehat{\Delta W}$, we can backpropagate through Eqn. 4 to obtain derivatives with respect to $\overline{W}$, $\widehat{\Delta W}$, and $X$.

Computational cost. In general, the most expensive operation in the forward pass is matrix multiplication. Flipout's forward pass requires two matrix multiplications instead of one, and therefore should be roughly twice as expensive as a forward pass with a single shared perturbation when the multiplications are done in sequence.¹ However, note that the two matrix multiplications are independent and can be done in parallel; this incurs the same overhead as the local reparameterization trick (Kingma et al., 2015). A general rule of thumb for neural nets is that the backward pass requires roughly twice as many FLOPs as the forward pass. This suggests that each update using flipout ought to be about twice as expensive as an update with a single shared perturbation (if the matrix multiplications are done sequentially); this is consistent with our experience.

¹ Depending on the efficiency of the underlying libraries, the overhead of sampling $R$ and $S$ may be non-negligible. If this is an issue, these matrices may be reused between all mini-batches. In our experience, this does not cause any drop in performance.

Evolution strategies. ES is a highly parallelizable algorithm; however, most ES systems are engineered to run on multi-core CPU machines and are not able to take full advantage of GPU parallelism. Flipout enables ES to run more efficiently on a GPU because it allows each worker to evaluate a batch of quasi-independent perturbations rather than only a single perturbation. To apply flipout to ES, we simply replicate the starting state $N$ times at each worker, where $N$ is the number of flipout perturbations. Instead of Eqn. 1, the update rule using $M$ workers becomes:

$$W^{t+1} = W^{t} + \alpha \frac{1}{MN\sigma^2} \sum_{m=1}^{M} \sum_{n=1}^{N} F_{mn}\left(\widehat{\Delta W}_m \circ r_{mn} s_{mn}^\top\right), \tag{5}$$

where $m$ indexes workers, $n$ indexes the examples in a worker's batch, and $F_{mn}$ is the reward evaluated with the $n$th perturbation at worker $m$. Hence, each worker is able to evaluate multiple perturbations as a batch, allowing for parallelism on a GPU architecture.
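Before turning to the variance analysis, here is a minimal single-process sketch of the update in Eqn. 5, with our own function and parameter names; a real implementation would distribute the outer loop over $M$ workers and evaluate each worker's $N$ perturbations as one batched GPU forward pass rather than a Python loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_flipout_update(W, fitness_fn, alpha=0.01, sigma=0.1, M=4, N=8):
    """One ES update with flipout (Eqn. 5), simulated sequentially.

    W:          current parameters (a single weight matrix here)
    fitness_fn: maps a perturbed weight matrix to a scalar fitness
    Each of the M "workers" shares one base perturbation W_hat and evaluates
    N rank-one sign modulations of it.
    """
    grad = np.zeros_like(W)
    for m in range(M):
        W_hat = sigma * rng.standard_normal(W.shape)       # base perturbation for worker m
        for n in range(N):
            r = rng.choice([-1.0, 1.0], size=(W.shape[0], 1))
            s = rng.choice([-1.0, 1.0], size=(1, W.shape[1]))
            delta_W = W_hat * (r @ s)                       # pseudo-independent perturbation
            F = fitness_fn(W + delta_W)
            grad += F * delta_W
    return W + alpha * grad / (M * N * sigma ** 2)

# Toy usage: one update on a quadratic fitness. This is only illustrative;
# as noted in Section 4.4, thousands of samples per update are needed in practice.
target = rng.standard_normal((3, 5))
fitness = lambda Wp: -float(np.sum((Wp - target) ** 2))
W = np.zeros((3, 5))
W = es_flipout_update(W, fitness)
```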
### 3.2 VARIANCE ANALYSIS

In this section, we analyze the variance of stochastic gradients with and without flipout. We show that flipout is guaranteed to reduce the variance of the gradient estimates compared to using naïve shared perturbations. Let $G_x = G(x, \Delta W) = \frac{\partial}{\partial \theta_i}\mathcal{L}(y, f(x, \overline{W}, \Delta W))$ denote one entry of the stochastic gradient $\nabla_\theta\,\mathcal{L}(y, f(x, \overline{W}, \Delta W))$ under the perturbation $\Delta W$ for a single training example $x$. (Note that $G_x$ is a random variable which depends on both $x$ and $\Delta W$. We analyze a single entry of the gradient so that we can work with scalar-valued variances.) We denote the gradient averaged over a mini-batch as the random variable $G_{\mathcal{B}} = \frac{1}{N}\sum_{n=1}^{N} G(x_n, \Delta W_n)$, where $\mathcal{B} = \{x_n\}_{n=1}^{N}$ denotes a mini-batch of size $N$, and $\Delta W_n$ denotes the perturbation for the $n$th example. (The randomness comes from both the choice of $\mathcal{B}$ and the random perturbations.) For simplicity, we assume that the $x_n$ are sampled i.i.d. from the data distribution.

Using the Law of Total Variance, we decompose $\mathrm{Var}(G_{\mathcal{B}})$ into a data term (the variance of the exact mini-batch gradients) and an estimation term (the estimation variance for a fixed mini-batch):

$$\mathrm{Var}(G_{\mathcal{B}}) = \underbrace{\mathrm{Var}_{\mathcal{B}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right]\right)}_{\text{data}} + \underbrace{\mathbb{E}_{\mathcal{B}}\!\left[\mathrm{Var}_{\Delta W}\!\left(G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right)\right]}_{\text{estimation}}. \tag{6}$$

Notice that the data term decays with $N$ while the estimation term may not, due to its dependence on the shared perturbation. But we can break the estimation term into two parts for which we can analyze the dependence on $N$. To do this, we reformulate the standard shared perturbation scheme as follows: $\Delta W$ is generated by first sampling $\widehat{\Delta W}$ and then multiplying it by a random sign matrix $rs^\top$ as in Eqn. 3, exactly like flipout, except that the sign matrix is shared by the whole mini-batch. According to Observation 1, this yields a distribution for $\Delta W$ identical to the standard shared perturbation scheme. Based on this, we obtain the following decomposition:

Theorem 2 (Variance Decomposition Theorem). Define $\alpha$, $\beta$, and $\gamma$ to be

$$\alpha = \mathrm{Var}_x\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x\right]\right) + \mathbb{E}_x\!\left[\mathrm{Var}_{\Delta W}\!\left(G_x \,\middle|\, x\right)\right] \tag{7}$$

$$\beta = \mathbb{E}_{x, x', \widehat{\Delta W}}\!\left[\mathrm{Cov}_{\Delta W}\!\left(G_x, G_{x'} \,\middle|\, x, x', \widehat{\Delta W}\right)\right] \tag{8}$$

$$\gamma = \mathbb{E}_{x, x'}\!\left[\mathrm{Cov}_{\widehat{\Delta W}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x, \widehat{\Delta W}\right],\; \mathbb{E}_{\Delta W}\!\left[G_{x'} \,\middle|\, x', \widehat{\Delta W}\right] \,\middle|\, x, x'\right)\right] \tag{9}$$

Under the assumptions of Observation 1, the variance of the gradients under shared perturbations and flipout perturbations can be written in terms of $\alpha$, $\beta$, and $\gamma$ as follows:

Fully independent perturbations: $\mathrm{Var}(G_{\mathcal{B}}) = \dfrac{\alpha}{N}$ (10)

Shared perturbation: $\mathrm{Var}(G_{\mathcal{B}}) = \dfrac{\alpha}{N} + \dfrac{N-1}{N}(\beta + \gamma)$ (11)

Flipout: $\mathrm{Var}(G_{\mathcal{B}}) = \dfrac{\alpha}{N} + \dfrac{N-1}{N}\gamma$ (12)

Proof. Details of the proof are provided in Appendix A.

We can interpret $\alpha$, $\beta$, and $\gamma$ as follows. First, $\alpha$ combines the data term from Eqn. 6 with the expected estimation variance for individual data points. This corresponds to the variance of the gradients on individual training examples, so fully independent perturbations yield a total variance of $\alpha/N$. The other terms, $\beta$ and $\gamma$, reflect the covariance between the estimation errors on different training examples as a result of the shared perturbations. The term $\beta$ reflects the covariance that results from sampling $r$ and $s$, so it is eliminated by flipout, which samples these vectors independently. Finally, $\gamma$ reflects the covariance that results from sampling $\widehat{\Delta W}$, which flipout does not eliminate.

Empirically, for all the neural networks we investigated, we found that $\alpha \gg \beta \gg \gamma$. This implies the following behavior for $\mathrm{Var}(G_{\mathcal{B}})$ as a function of $N$: for small $N$, the data term $\alpha/N$ dominates, giving a $1/N$ variance reduction; with shared perturbations, once $N$ is large enough that $\alpha/N < \beta$, the variance $\mathrm{Var}(G_{\mathcal{B}})$ levels off to $\beta$. However, flipout continues to enjoy a $1/N$ variance reduction in this regime. In principle, flipout's variance should level off at the point where $\alpha/N < \gamma$, but in all of our experiments, $\gamma$ was small enough that this never occurred: flipout's variance was approximately $\alpha/N$ throughout the entire range of $N$ values we explored, just as if the perturbations were sampled fully independently for every training example.

## 4 EXPERIMENTS

We first verified empirically the variance reduction effect of flipout predicted by Theorem 2; we measured the variance of the gradients under different perturbations for a wide variety of neural network architectures and batch sizes. In Section 4.2, we show that flipout applied to Gaussian perturbations and DropConnect is effective at regularizing LSTM networks. In Section 4.3, we demonstrate that flipout converges faster than shared perturbations when training with large mini-batches.
Finally, in Section 4.4 we present experiments combining evolution strategies with flipout in both supervised learning and reinforcement learning tasks. In our experiments, we consider the four architectures shown in Table 1 (details in Appendix B).

### 4.1 VARIANCE REDUCTION

Since the main effect of flipout is intended to be variance reduction of the gradients, we first estimated the gradient variances of several architectures with mini-batch sizes ranging from 1 to 8192 (Fig. 1). We experimented with three perturbation methods: a single shared perturbation per mini-batch, the local reparameterization trick (LRT) of Kingma et al. (2015), and flipout. For each of the FC, ConvVGG, and LSTM architectures, we froze a partially trained network to use for all variance estimates, and we used multiplicative Gaussian perturbations with $\sigma^2 = 1$. We computed Monte Carlo estimates of the gradient variance, including both the data and estimation terms in Eqn. 6. Confidence intervals are based on 50 independent runs of the estimator. Details are given in Appendix C.

The analysis in Section 3.2 makes strong predictions about the shapes of the curves in Fig. 1. By Theorem 2, the variance curves for flipout and shared perturbations each have the form $a + b/N$, where $N$ is the mini-batch size. On a log-log plot, this functional form appears as a linear regime with slope -1, a constant regime, and a smooth phase transition in between. Also, because the distribution of individual gradients is identical with and without flipout, the curves must agree for $N = 1$.

| Name | Network Type | Data Set |
|------|--------------|----------|
| ConvLe (Shallow) | Convolutional | MNIST (LeCun et al., 1998) |
| ConvVGG (Deep) | Convolutional | CIFAR-10 (Krizhevsky & Hinton, 2009) |
| FC | Fully Connected | MNIST |
| LSTM | LSTM Network | Penn Treebank (Marcus et al., 1993) |

Table 1: Network configurations.

Figure 1: Empirical variance of gradients with respect to mini-batch size for several architectures. (a) FC on MNIST; FC1 denotes the first layer of the FC network. (b) ConvVGG on CIFAR-10; Conv1 denotes the first convolutional layer. (c) LSTM on Penn Treebank; the variance is shown for the hidden-to-hidden weight matrices in the first LSTM layer: Wf, Wi, Wo, and Wc are the weights for the forget, input, and output gates, and the candidate cell update, respectively. Dotted: shared perturbations. Solid: flipout. Dashed: LRT.

Our plots are consistent with both of these predictions. We observe that for shared perturbations, the phase transition consistently occurs for mini-batch sizes somewhere between 100 and 1000. In contrast, flipout gives the ideal linear variance reduction throughout the range of mini-batch sizes we investigated, i.e., its behavior is indistinguishable from fully independent perturbations. As analyzed by Kingma et al. (2015), the LRT gradients are fully independent within a mini-batch, and are therefore guaranteed to achieve the ideal 1/N variance reduction. Furthermore, they reduce the variance below that of explicit weight perturbations, so we would expect them to achieve smaller variance than flipout, as shown in Fig. 1a. However, flipout is applicable to a wider variety of architectures, including convolutional nets and RNNs.
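For readers who want to reproduce this kind of measurement, the following sketch estimates a per-layer gradient variance by repeated sampling, in the spirit of the procedure summarized in Appendix C; the `grad_fn` callable and the toy example below are placeholders of our own, standing in for the model-specific sampling of a mini-batch and a weight perturbation.

```python
import numpy as np

def estimate_gradient_variance(grad_fn, num_samples=200):
    """Monte Carlo estimate of stochastic-gradient variance for one layer.

    grad_fn: callable returning a flat gradient vector for one freshly
             sampled mini-batch and weight perturbation (placeholder for
             the model-specific sampling described in Appendix C).
    Returns the per-weight variance averaged over the weights in the layer.
    """
    grads = np.stack([grad_fn() for _ in range(num_samples)])
    per_weight_var = grads.var(axis=0)    # variance of each weight's gradient
    return float(per_weight_var.mean())   # average over the layer

# Toy placeholder: a "gradient" whose noise shrinks as 1/batch_size,
# mimicking the ideal curves in Fig. 1.
rng = np.random.default_rng(0)
def toy_grad_fn(batch_size=64, dim=100):
    return rng.standard_normal((batch_size, dim)).mean(axis=0)

for bs in [1, 16, 256]:
    v = estimate_gradient_variance(lambda: toy_grad_fn(batch_size=bs))
    print(bs, v)   # variance scales roughly as 1/batch_size
```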
### 4.2 REGULARIZATION FOR LANGUAGE MODELING

We evaluated the regularization effect of flipout on the character-level and word-level language modeling tasks with the Penn Treebank corpus (PTB) (Marcus et al., 1993). We compared flipout to several other methods for regularizing RNNs: naïve dropout (Zaremba et al., 2014), variational dropout (Gal & Ghahramani, 2016), recurrent dropout (Semeniuta et al., 2016), zoneout (Krueger et al., 2016), and DropConnect (Merity et al., 2017). Zaremba et al. (2014) apply dropout only to the feed-forward connections of an RNN (to the input, output, and connections between layers). The other methods regularize the recurrent connections as well: Semeniuta et al. (2016) apply dropout to the cell update vector, with masks sampled either per step or per sequence; Gal & Ghahramani (2016) apply dropout to the forward and recurrent connections, with all dropout masks sampled per sequence. Merity et al. (2017) use DropConnect to regularize the hidden-to-hidden weight matrices, with a single DropConnect mask shared between examples in a mini-batch. We denote their model WD (for weight-dropped LSTM).

Character-Level. For our character-level experiments, we used a single-layer LSTM with 1000 hidden units. We trained each model on non-overlapping sequences of 100 characters in batches of size 32, using the AMSGrad variant of Adam (Reddi et al., 2018) with learning rate 0.002. We performed early stopping based on validation performance. Here, we applied flipout to the hidden-to-hidden weight matrix. More hyperparameter details are given in Appendix D. The results, measured in bits-per-character (BPC) for the validation and test sequences of PTB, are shown in Table 2. In the table, shared perturbations and flipout (with Gaussian noise sampling) are denoted by Mult. Gauss and Mult. Gauss + Flipout, respectively. We also compare to RBN (recurrent batchnorm) (Cooijmans et al., 2017) and H-LSTM+LN (HyperLSTM + Layer Norm) (Ha et al., 2016). Mult. Gauss + Flipout outperforms the other methods, and achieves the best reported results for this architecture.

| Model | Valid | Test |
|-------|-------|------|
| Unregularized LSTM | 1.468 | 1.423 |
| Semeniuta (2016) | 1.337 | 1.300 |
| Zoneout (2016) | 1.306 | 1.270 |
| Gal (2016) | 1.277 | 1.245 |
| Mult. Gauss (σ = 1) (ours) | 1.257 | 1.230 |
| Mult. Gauss + Flipout (ours) | 1.256 | 1.227 |
| RBN (2017) | — | 1.32 |
| H-LSTM + LN (2016) | 1.281 | 1.250 |

Table 2: Bits-per-character (BPC) for the character-level PTB task. The RBN and H-LSTM+LN results are from the respective papers. All other results are from our own experiments.

Word-Level. For our word-level experiments, we used a 2-layer LSTM with 650 hidden units per layer and 650-dimensional word embeddings. We trained on sequences of length 35 in batches of size 40, for 100 epochs. We used SGD with initial learning rate 30, and decayed the learning rate by a factor of 4 based on the nonmonotonic criterion introduced by Merity et al. (2017). We used flipout to implement DropConnect, as described in Section 2.1, and call this WD+Flipout. We applied WD+Flipout to the hidden-to-hidden weight matrices for recurrent regularization, and used the same hyperparameters as Merity et al. (2017). We used embedding dropout (setting rows of the embedding matrix to 0) with probability 0.1 for all regularized models except Gal, where we used probability 0.2 as specified in their paper. More hyperparameter details are given in Appendix D.
| Model | Valid | Test |
|-------|-------|------|
| Unregularized LSTM | 132.23 | 128.97 |
| Zaremba (2014) | 80.40 | 76.81 |
| Semeniuta (2016) | 81.91 | 77.88 |
| Gal (2016) | 78.24 | 75.39 |
| Zoneout (2016) | 78.66 | 75.45 |
| WD (2017) | 78.82 | 75.71 |
| WD + Flipout (ours) | 76.88 | 73.20 |

Table 3: Perplexity on the PTB word-level validation and test sets. All results are from our own experiments.

We show in Table 3 that WD+Flipout outperforms the other methods with respect to both validation and test perplexity. In Appendix E.4, we show that WD+Flipout yields significant variance reduction for large mini-batches, and that when training with batches of size 8192, it converges faster than WD.

### 4.3 LARGE BATCH TRAINING WITH FLIPOUT

Theorem 2 and Fig. 1 suggest that the variance reduction effect of flipout is more pronounced in the large mini-batch regime. In this section, we train a Bayesian neural network with mini-batches of size 8192 and show that flipout speeds up training in terms of the number of iterations. We trained the FC and ConvLe networks from Section 4.1 using Bayes by Backprop (Blundell et al., 2015). Since our primary focus is optimization, we focus on the training loss, shown in Fig. 2a: for FC, we compare flipout with shared perturbations and the LRT; for ConvLe, we compare only to shared perturbations, since the LRT does not give an unbiased gradient estimator. We found that flipout converged in about 3 times fewer iterations than shared perturbations for both models, while achieving comparable performance to the LRT for the FC model. Because flipout is roughly twice as expensive as shared perturbations (see Section 3.1), this corresponds to a 1.5x speedup overall. Curves for the training and test error are given in Appendix E.2.

### 4.4 EVOLUTION STRATEGIES

ES typically runs on multiple CPU cores. The challenge in making ES GPU-friendly is that each sample requires computing a separate weight perturbation, so traditionally each worker can only generate one sample at a time. In Section 3.1, we showed that ES with flipout allows each worker to evaluate a batch of perturbations, which can be done efficiently on a GPU. However, flipout induces correlations between the samples, so we investigated whether these correlations cause a slowdown in training relative to fully independent perturbations (which we term IdealES). In this section, we show empirically that FlipES is just as sample-efficient as IdealES, and consequently one can obtain significantly higher throughput per unit cost using FlipES on a GPU.

The ES gradient defined in Eqn. 1 has high variance, so a large number of samples is generally needed before applying an update. We found that 5,000 samples are needed to achieve stable performance in the supervised learning tasks. Standard ES runs the forward pass 5,000 times with independent weight perturbations, which sees little benefit from using a GPU over a CPU. FlipES allows the same number of samples to be evaluated using a much smaller number of explicit perturbations. Throughout the experiments, we ran flipout with mini-batches of size 40 (i.e. N = 40 in Eqn. 5).

We compared IdealES and FlipES with a fully connected network (FC) on the MNIST dataset. Fig. 2b shows that we incur no loss in performance when using pseudo-independent noise. Next, we compared FlipES and cpuES (using 40 CPU cores) in terms of the per-update time with respect to the model size.
The result (in Appendix E.3) shows that FlipES scales better because it runs on the GPU. Finally, we compared FlipES and the backpropagation algorithm on both FC and ConvLe. Fig. 2c and Fig. 2d show that FlipES achieves data efficiency comparable with the backpropagation algorithm. IdealES has a much higher computational cost than backpropagation, due to the large number of forward passes. FlipES narrows the computational gap between them. Although ES is more expensive than backpropagation, it can be applied to models which are not fully differentiable, such as models with a discrete loss (e.g., accuracy or BLEU score) or with stochastic units.

Figure 2: Large batch training and ES. (a) Training loss per iteration using Bayes by Backprop with batch size 8192 on the FC and ConvLe networks. (b) Error rate of the FC network on MNIST using ES with 1,600 samples per update; there is no drop in performance compared to IdealES. (c) Error rate of FC on MNIST, comparing FlipES (with either 5,000 or 1,600 samples per update) with backpropagation. (This figure does not imply that FlipES is more efficient than backprop; FlipES was around 60 times more expensive than backprop per update.) (d) The same as (c), except run on ConvLe.

## 5 CONCLUSIONS

We have introduced flipout, an efficient method for decorrelating the weight gradients between different examples in a mini-batch. We showed that flipout is guaranteed to reduce the variance compared with shared perturbations. Empirically, we demonstrated significant variance reduction in the large batch setting for a variety of network architectures, as well as significant speedups in training time. We showed that flipout outperforms dropout-based methods for regularizing LSTMs. Flipout also makes it practical to apply GPUs to evolution strategies, resulting in substantially increased throughput for a given computational cost. We believe flipout will make weight perturbations practical in the large batch setting favored by modern accelerators such as Tensor Processing Units (Jouppi et al., 2017).

## ACKNOWLEDGMENTS

YW was supported by an NSERC USRA award, and PV was supported by a Connaught New Researcher Award. We thank David Duvenaud, Alex Graves, Geoffrey Hinton, and Matthew D. Hoffman for helpful discussions.

## REFERENCES

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 1613–1622, 2015.
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. In International Conference on Learning Representations (ICLR), 2017.

Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, et al. Noisy networks for exploration. arXiv preprint arXiv:1706.10295, 2017.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1019–1027, 2016.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 2348–2356, 2011.

David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the 6th Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456, 2015.

Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, and Jonathan Ross. In-datacenter performance analysis of a tensor processing unit. 2017. URL https://arxiv.org/pdf/1704.04760.pdf.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems (NIPS), 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. CoRR, abs/1606.01305, 2016.

Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood: Approximating kernel expansions in loglinear time. In Proceedings of the International Conference on Machine Learning (ICML), 2013.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. arXiv preprint arXiv:1705.08665, 2017.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

Stephen Merity, Nitish S Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. arXiv preprint arXiv:1708.02182, 2017.

Andrew C Miller, Nicholas J Foti, Alexander D'Amour, and Ryan P Adams. Reducing reparameterization gradient variance. arXiv preprint arXiv:1705.07880, 2017.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.

Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In Artificial Intelligence and Statistics (AISTATS), pp. 814–822, 2014.

Ingo Rechenberg and Manfred Eigen. Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution. Friedrich Frommann Verlag, Stuttgart-Bad Cannstatt, 1973.

Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations (ICLR), 2018.

Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: A simple, reduced-variance gradient estimator for variational inference. In Advances in Approximate Bayesian Inference Workshop (NIPS), 2016.

Tim Salimans, Jonathan Ho, Xi Chen, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.

Jürgen Schmidhuber, Daan Wierstra, Matteo Gagliolo, and Faustino Gomez. Training recurrent networks by Evolino. Neural Computation, 19(3):757–779, 2007.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pp. 1757–1766, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using DropConnect. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1058–1066, 2013.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

## A PROOF OF THEOREM 2

In this section, we provide the proof of Theorem 2 (Variance Decomposition Theorem).

Proof. We use the notation from Section 3.2. Let $x, x'$ denote two training examples from the mini-batch $\mathcal{B}$, and $\Delta W, \Delta W'$ the weight perturbations they received. We begin with the decomposition into data and estimation terms (Eqn. 6), which we repeat here for convenience:

$$\mathrm{Var}(G_{\mathcal{B}}) = \underbrace{\mathrm{Var}_{\mathcal{B}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right]\right)}_{\text{data}} + \underbrace{\mathbb{E}_{\mathcal{B}}\!\left[\mathrm{Var}_{\Delta W}\!\left(G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right)\right]}_{\text{estimation}}. \tag{13}$$
The data term from Eqn. 13 can be simplified:

$$\mathrm{Var}_{\mathcal{B}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right]\right)
= \mathrm{Var}_{\mathcal{B}}\!\left(\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{\Delta W}\!\left[G_{x_n} \,\middle|\, x_n\right]\right)
= \frac{1}{N}\,\mathrm{Var}_x\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x\right]\right). \tag{14}$$

We break the estimation term from Eqn. 13 into variance and covariance terms:

$$\begin{aligned}
\mathbb{E}_{\mathcal{B}}\!\left[\mathrm{Var}_{\Delta W}\!\left(G_{\mathcal{B}} \,\middle|\, \mathcal{B}\right)\right]
&= \frac{1}{N^2}\,\mathbb{E}_{\mathcal{B}}\!\left[\sum_{n=1}^{N}\sum_{n'=1}^{N}\mathrm{Cov}_{\Delta W_n, \Delta W_{n'}}\!\left(G_{x_n}, G_{x_{n'}} \,\middle|\, x_n, x_{n'}\right)\right] \\
&= \frac{1}{N^2}\,\mathbb{E}_{\mathcal{B}}\!\left[\sum_{n=1}^{N}\mathrm{Var}_{\Delta W_n}\!\left(G_{x_n} \,\middle|\, x_n\right) + \sum_{n' \neq n}\mathrm{Cov}_{\Delta W_n, \Delta W_{n'}}\!\left(G_{x_n}, G_{x_{n'}} \,\middle|\, x_n, x_{n'}\right)\right] \\
&= \frac{1}{N}\,\mathbb{E}_x\!\left[\mathrm{Var}_{\Delta W}\!\left(G_x \,\middle|\, x\right)\right] + \frac{N-1}{N}\,\mathbb{E}_{x,x'}\!\left[\mathrm{Cov}_{\Delta W, \Delta W'}\!\left(G_x, G_{x'} \,\middle|\, x, x'\right)\right]. \tag{15}
\end{aligned}$$

We now separately analyze the cases of fully independent perturbations, shared perturbations, and flipout.

Fully independent perturbations. If the perturbations are fully independent, the second term in Eqn. 15 disappears. Hence, combining Eqns. 13, 14, and 15, we are left with

$$\mathrm{Var}(G_{\mathcal{B}}) = \frac{1}{N}\left[\mathrm{Var}_x\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x\right]\right) + \mathbb{E}_x\!\left[\mathrm{Var}_{\Delta W}\!\left(G_x \,\middle|\, x\right)\right]\right], \tag{16}$$

which is just $\alpha/N$.

Shared perturbations. Recall that we reformulate the shared perturbations in terms of first sampling $\widehat{\Delta W}$, and then letting $\Delta W = \widehat{\Delta W} \circ rs^\top$, where $r$ and $s$ are random sign vectors shared by the whole batch. Using the Law of Total Variance, we break the second term in Eqn. 15 into a part that comes from sampling $\widehat{\Delta W}$ and a part that comes from sampling $r$ and $s$:

$$\mathrm{Cov}_{\Delta W, \Delta W'}\!\left(G_x, G_{x'} \,\middle|\, x, x'\right)
= \mathbb{E}_{\widehat{\Delta W}}\!\left[\mathrm{Cov}_{\Delta W, \Delta W'}\!\left(G_x, G_{x'} \,\middle|\, x, x', \widehat{\Delta W}\right) \,\middle|\, x, x'\right]
+ \mathrm{Cov}_{\widehat{\Delta W}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x, \widehat{\Delta W}\right],\; \mathbb{E}_{\Delta W'}\!\left[G_{x'} \,\middle|\, x', \widehat{\Delta W}\right] \,\middle|\, x, x'\right). \tag{17}$$

Since the perturbations are shared, $\Delta W = \Delta W'$, so this can be simplified slightly to:

$$\mathbb{E}_{\widehat{\Delta W}}\!\left[\mathrm{Cov}_{\Delta W}\!\left(G_x, G_{x'} \,\middle|\, x, x', \widehat{\Delta W}\right) \,\middle|\, x, x'\right]
+ \mathrm{Cov}_{\widehat{\Delta W}}\!\left(\mathbb{E}_{\Delta W}\!\left[G_x \,\middle|\, x, \widehat{\Delta W}\right],\; \mathbb{E}_{\Delta W}\!\left[G_{x'} \,\middle|\, x', \widehat{\Delta W}\right] \,\middle|\, x, x'\right). \tag{18}$$

Plugging these two terms into the second term of Eqn. 15 yields $\frac{N-1}{N}(\beta + \gamma)$, so putting this all together we get

$$\mathrm{Var}(G_{\mathcal{B}}) = \frac{\alpha}{N} + \frac{N-1}{N}(\beta + \gamma).$$

Flipout. Since the perturbations for different examples are independent conditioned on $\widehat{\Delta W}$, the first term of Eqn. 17 vanishes. However, the second term remains. Therefore, plugging this into Eqn. 15 and combining the result with Eqns. 13 and 14, we are left with

$$\mathrm{Var}(G_{\mathcal{B}}) = \frac{\alpha}{N} + \frac{N-1}{N}\gamma.$$

## B NETWORK CONFIGURATIONS

Here, we provide details of the network configurations used for our experiments (Section 4). The FC network is a 3-layer fully-connected network with 512-512-10 hidden units. ConvLe is a LeNet-like network (LeCun et al., 1998) where the first two layers are convolutional with 32 and 64 filters of size [5, 5], and use ReLU non-linearities. A 2×2 max pooling layer follows each convolutional layer. Dimensionality reduction only takes place in the pooling layers; the stride for pooling is two, and padding is used in the convolutional layers to preserve the spatial dimensions. Two fully-connected layers with 1024 and 10 hidden units are used to produce the classification result. ConvVGG is based on the VGG16 network (Simonyan & Zisserman, 2014). We modified the last fully connected layer to have 10 output dimensions for our experiments on CIFAR-10. We did not use batch normalization for the variance reduction experiment, since it introduces extra stochasticity. The architectures used for the LSTM experiments are described in Section 4.2. The hyperparameters used for the language modelling experiments are provided in Appendix D.

## C VARIANCE REDUCTION EXPERIMENT DETAILS

Given a network architecture, we compute the empirical stochastic gradient update variance as follows. We start with a moderately pre-trained model, such as a network with 85% training accuracy on MNIST. Without updating the parameters, we obtain the gradients of all the weights by performing a feed-forward pass that includes sampling $\widehat{\Delta W}$, $R$, and $S$, followed by backpropagation. The gradient variance of each weight is estimated by repeating this procedure 200 times. Let $\widehat{\mathrm{Var}}_{lj}$ denote the estimate of the gradient variance of weight $j$ in layer $l$.
We compute the gradient variance as follows:

$$\widehat{\mathrm{Var}}_{lj} = \frac{1}{200}\sum_{i=1}^{200}\left(g^{i}_{lj} - \bar{g}_{lj}\right)^2,
\qquad \text{where} \quad \bar{g}_{lj} = \frac{1}{200}\sum_{i=1}^{200} g^{i}_{lj}$$

and $g^{i}_{lj}$ is the gradient received by weight $j$ in layer $l$ in the $i$th repetition. We estimate the variance of the gradients in layer $l$ by averaging the variances of the weights in that layer, $\bar{V}_l = \frac{1}{|J|}\sum_{j} \widehat{\mathrm{Var}}_{lj}$. In order to compute a confidence interval on the gradient variance estimate, we repeat the above procedure 50 times, yielding a sequence of average variance estimates $\tilde{V}_1, \ldots, \tilde{V}_{50}$. For Fig. 1, we compute the 90% confidence intervals of the variance estimates with a t-test.

For ConvVGG, multiple GPUs were needed to run the variance reduction experiment with large mini-batch sizes (such as 4096 and 8192). In such cases, it is computationally efficient to generate independent weight perturbations on different GPUs. However, since our aim was to understand the effects of variance reduction independent of implementation, we shared the base perturbation among all GPUs to produce the plot shown in Fig. 1. We show in Appendix E that flipout yields lower variance even when we sample independent perturbations on different GPUs.

For the LSTM variance reduction experiments, we used the two-layer LSTM described in Section 4.2, trained for 3 epochs on the word-level Penn Treebank dataset. For Fig. 1, we split large mini-batches (size 128 and higher) into sub-batches of size 64; we sampled one base perturbation $\widehat{\Delta W}$ that was shared among all sub-batches, and we sampled independent $R$ and $S$ matrices for each sub-batch.

## D LSTM REGULARIZATION EXPERIMENT DETAILS

Long Short-Term Memory networks (LSTMs) are defined by the following equations:

$$i_t, f_t, o_t = \sigma(W_h h_{t-1} + W_x x_t + b) \tag{19}$$
$$g_t = \tanh(W_g h_{t-1} + U_g x_t + b_g) \tag{20}$$
$$c_t = f_t \circ c_{t-1} + i_t \circ g_t \tag{21}$$
$$h_t = o_t \circ \tanh(c_t) \tag{22}$$

where $i_t$, $f_t$, and $o_t$ are the input, forget, and output gates, respectively, $g_t$ is the candidate update, and $\circ$ denotes elementwise multiplication.

Naïve application of dropout on the hidden state of an LSTM is not effective, because it leads to significant memory loss over long sequences. Several approaches have been proposed to regularize the recurrent connections, based on applying dropout to specific terms in the LSTM equations. Semeniuta et al. (2016) propose to drop the cell update vector, with a dropout mask $d_t$ sampled either per-step or per-sequence: $c_t = f_t \circ c_{t-1} + i_t \circ (d_t \circ g_t)$. Gal & Ghahramani (2016) apply dropout to the input and hidden state at each time step, $x_t \circ d_x$ and $h_{t-1} \circ d_h$, with dropout masks $d_x$ and $d_h$ sampled once per sequence (and repeated at each time step). Krueger et al. (2016) propose to zone out units rather than dropping them; the hidden state and cell values are either stochastically updated or maintain their previous value:

$$c_t = d^{c}_t \circ c_{t-1} + (1 - d^{c}_t) \circ \left(f_t \circ c_{t-1} + i_t \circ g_t\right),
\qquad
h_t = d^{h}_t \circ h_{t-1} + (1 - d^{h}_t) \circ \left(o_t \circ \tanh(f_t \circ c_{t-1} + i_t \circ g_t)\right),$$

with zoneout masks $d^{h}_t$ and $d^{c}_t$ sampled per step.

### D.1 HYPERPARAMETER DETAILS

For the word-level models (Table 3), we used gradient clipping threshold 0.25 and the following hyperparameters:

- For Gal & Ghahramani (2016), we used variational dropout with the parameters given in their paper: 0.35 dropout probability on inputs and outputs, 0.2 hidden state dropout, and 0.2 embedding dropout.
- For Semeniuta et al. (2016), we used 0.1 embedding dropout, 0.5 dropout on inputs and outputs, and 0.3 dropout on cell updates, with per-step mask sampling.
- For Krueger et al. (2016), we used 0.1 embedding dropout, 0.5 dropout on inputs and outputs, and cell and hidden state zoneout probabilities of 0.25 and 0.025, respectively.
- For WD (Merity et al., 2017), we used the parameters given in their paper: 0.1 embedding dropout, 0.4 dropout probability on inputs and outputs, and 0.3 dropout probability on the output between layers (the same masks are used for each step of a sequence). We use 0.5 probability for DropConnect applied to the hidden-to-hidden weight matrices.
- For WD+Flipout, we used the same parameters as Merity et al. (2017), given above, but we regularized the hidden-to-hidden weight matrices with the variant of flipout described in Section 2.1, which implements DropConnect with probability 0.5.

For the character-level models (Table 2), we used orthogonal initialization for the LSTM weight matrices, gradient clipping threshold 1, and did not use input or output dropout. The input characters were represented as one-hot vectors. We used the following hyperparameters for each model:

- For recurrent dropout (Semeniuta et al., 2016), we used 0.25 dropout probability on the cell state, and per-step mask sampling.
- For Zoneout (Krueger et al., 2016), we used 0.5 and 0.05 for the cell and hidden state zoneout probabilities, respectively.
- For the variational LSTM (Gal & Ghahramani, 2016), we used 0.25 hidden state dropout.
- For the flipout and shared-perturbation LSTMs, we sampled Gaussian noise with $\sigma = 1$ for the hidden-to-hidden weight matrix.

## E ADDITIONAL EXPERIMENTS

### E.1 VARIANCE REDUCTION

As discussed in Appendix C, training on multiple GPUs naturally induces independent noise for each sub-batch. Fig. 3 shows that flipout still achieves lower variance than shared perturbations in such cases. When estimating the variance with mini-batch size 8192, running on four GPUs naturally induces four independent noise samples, one for each sub-batch of size 2048; this yields lower variance than using a single noise sample. Similarly, for mini-batch size 4096, two independent noise samples are generated on separate GPUs.

Figure 3: Empirical variance of the gradients when training on multiple GPUs. Solid: flipout. Dotted: shared perturbations.

### E.2 LARGE BATCH TRAINING WITH FLIPOUT

Fig. 4 shows the training and test error for the large mini-batch experiments described in Section 4.3. For both the FC and ConvLe networks, we used the Adam optimizer with learning rate 0.003. We downscaled the KL term by a factor of 10 to achieve higher accuracy. While Fig. 2a shows that flipout converges faster than shared perturbations, Fig. 4 shows that flipout has the same generalization ability as shared perturbations (the faster convergence does not result in overfitting).

Figure 4: Left: the training and test errors obtained by training the FC network on large mini-batches (size 8192) with Bayes by Backprop. Right: the training and test errors obtained with ConvLe in the same setting, with mini-batch size 8192.

### E.3 FLIPES VS. CPUES

Fig. 5 shows that the computational cost of cpuES increases as the model size increases, while FlipES scales better because it runs on the GPU.
| # Hidden Units (FC) | FlipES | cpuES |
|---|---|---|
| 32 | 0.12s | 0.51s |
| 128 | 0.13s | 1.22s |
| 512 | 0.18s | 5.10s |
| 2048 | 1.86s | 38.0s |

| Model Scale (Conv) | FlipES | cpuES |
|---|---|---|
| 0.25 | 2.3s | 16s |
| 0.75 | 5.48s | 46s |
| 1.0 | 7.12s | 67s |
| 1.5 | 11.77s | 132s |

Figure 5: Per-update time comparison between FlipES and 40-core cpuES (5,000 samples) with respect to the model size. We scale the FC network by modifying the number of hidden units, and we scale the Conv network by modifying the number of filters (1.0 stands for 32 filters in the first convolutional layer and 64 filters in the second one).

### E.4 LARGE BATCH LSTM TRAINING

The variance reduction offered by flipout allows us to use DropConnect (Wan et al., 2013) efficiently in a large mini-batch setting. Here, we use flipout to implement DropConnect as described in Section 2.1, and use it to regularize an LSTM word-level language model. We used the LSTM architecture proposed by Merity et al. (2017), which has 400-dimensional word embeddings and three layers with hidden dimension 1150. Following Merity et al. (2017), we tied the weights of the embedding layer and the decoder layer. Merity et al. (2017) use DropConnect to regularize the hidden-to-hidden weight matrices, with a single mask shared for all examples in a batch. We used flipout to achieve a different DropConnect mask per example. We applied WD+Flipout to both the hidden-to-hidden (h2h) and input-to-hidden (i2h) weight matrices, and compared to the model from Merity et al. (2017), which we call WD (for weight-dropped LSTM), with DropConnect applied to both h2h and i2h. Both models use embedding dropout 0.1, output dropout 0.4, and DropConnect probability 0.5 for the i2h and h2h weights. Both models were trained using Adam with learning rate 0.001.

Fig. 6 compares the variance of the gradients of the first-layer hidden-to-hidden weights between WD and WD+Flipout, and shows that flipout achieves significant variance reduction for mini-batch sizes larger than 256. Fig. 7 shows the training curves of both models with batch size 8192. We see that WD+Flipout converges faster than WD and achieves a lower training perplexity, showcasing the optimization benefits of flipout in large mini-batch settings.

Figure 6: The variance reduction offered by flipout compared to the WD model (Merity et al., 2017). Solid lines represent WD+Flipout; dotted lines represent WD. The variance is shown for the hidden-to-hidden weight matrices in the first layer: Wf, Wi, Wo, and Wc are the weights for the forget, input, and output gates, and the candidate cell update, respectively.

Figure 7: Training curves for WD and WD+Flipout, with batch size 8192.