# Clone MCMC: Parallel High-Dimensional Gaussian Gibbs Sampling

Andrei-Cristian Bărbos, IMS Laboratory, Univ. Bordeaux - CNRS - BINP, andbarbos@u-bordeaux.fr
François Caron, Department of Statistics, University of Oxford, caron@stats.ox.ac.uk
Jean-François Giovannelli, IMS Laboratory, Univ. Bordeaux - CNRS - BINP, giova@ims-bordeaux.fr
Arnaud Doucet, Department of Statistics, University of Oxford, doucet@stats.ox.ac.uk

We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution. Similarly to Hogwild methods, our approach does not target the original Gaussian distribution of interest, but an approximation to it. Contrary to Hogwild methods, a single parameter allows us to trade bias for variance. We show empirically that our method is very flexible and performs well compared to Hogwild-type algorithms.

## 1 Introduction

Sampling high-dimensional distributions is notoriously difficult in the presence of strong dependence between the different components. The Gibbs sampler proposes a simple and generic approach, but it may be slow to converge due to its sequential nature. A number of recent papers have advocated the use of so-called "Hogwild Gibbs samplers", which perform conditional updates in parallel without synchronizing the outputs. Although the corresponding algorithms do not target the correct distribution, this class of methods has been shown to give interesting empirical results, in particular for Latent Dirichlet Allocation models [1, 2] and Gaussian distributions [3].

In this paper, we focus on the simulation of high-dimensional Gaussian distributions. In numerous applications, such as computer vision, satellite imagery, medical imaging, tomography or weather forecasting, simulation of high-dimensional Gaussians is needed for prediction, or as part of a Markov chain Monte Carlo (MCMC) algorithm. For example, [4] simulate high-dimensional Gaussian random fields for the prediction of hydrological and meteorological quantities. For posterior inference via MCMC in a hierarchical Bayesian model, elementary blocks of a Gibbs sampler often require the simulation of high-dimensional Gaussian variables. In image processing, the typical number of variables (pixels/voxels) is of the order of $10^6$ to $10^9$. Due to this large size, Cholesky factorization is not applicable; see for example [5] or [6]. In [7, 8] the sampling problem is recast as an optimisation one: a sample is obtained by minimising a perturbed quadratic criterion. The cost of the algorithm depends on the choice of the optimisation technique. Exact resolution is prohibitively expensive, so an iterative solver with a truncated number of iterations is typically used [5], and the distribution of the samples one then obtains is unknown.

In this paper, we propose an alternative class of iterative algorithms for approximately sampling high-dimensional Gaussian distributions. The class of algorithms we propose borrows ideas from optimization and linear solvers. Similarly to Hogwild algorithms, our sampler does not target the distribution of interest but an approximation to this distribution. A single scalar parameter allows us to tune both the error and the convergence rate of the Markov chain, allowing us to trade variance for bias. We show empirically that the method is very flexible and performs well compared to Hogwild algorithms.
Its performance is illustrated on a large-scale image inpainting-deconvolution application.

The rest of the article is organized as follows. In Section 2, we review the matrix splitting techniques that have been used to propose novel algorithms to sample high-dimensional normals. In Section 3, we present our novel methodology. Section 4 provides the intuition for such a scheme, which we refer to as clone MCMC, and discusses some generalization of the idea to non-Gaussian target distributions. We compare empirically Hogwild and our methodology on a variety of simulated examples in Section 5. The application to image inpainting-deconvolution is developed in Section 6.

## 2 Background on matrix splitting and Hogwild Gaussian sampling

We consider a d-dimensional Gaussian random variable X with mean $\mu$ and positive definite covariance matrix $\Sigma$. The probability density function of X, evaluated at $x = (x_1, \ldots, x_d)^T$, is

$$\pi(x) \propto \exp\left(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right) \propto \exp\left(-\tfrac{1}{2}x^T J x + h^T x\right)$$

where $J = \Sigma^{-1}$ is the precision matrix and $h = J\mu$ the potential vector. Typically, the pair (h, J) is available, and the objective is to estimate $(\mu, \Sigma)$ or to simulate from $\pi$. For moderate-size or sparse precision matrices, the standard method for exact simulation from $\pi$ is based on the Cholesky decomposition of $\Sigma$, which has computational complexity $O(d^3)$ in the most general case [9]. If d is very large, the cost of the Cholesky decomposition becomes prohibitive and iterative methods are favoured due to their smaller cost per iteration and low memory requirements.

A principled iterative approach to draw samples approximately distributed from $\pi$ is the single-site Gibbs sampler, which simulates a Markov chain $(X^{(i)})_{i=1,2,\ldots}$ with stationary distribution $\pi$ by updating each variable in turn from its conditional distribution. A complete update of the d variables can be written in matrix form as

$$X^{(i+1)} = -(D + L)^{-1}L^T X^{(i)} + (D + L)^{-1}Z^{(i+1)}, \quad Z^{(i+1)} \sim \mathcal{N}(h, D) \tag{1}$$

where D is the diagonal part of J and L is the strictly lower triangular part of J. Equation (1) highlights the connection between the Gibbs sampler and linear iterative solvers, as

$$\mathbb{E}\left[X^{(i+1)} \mid X^{(i)} = x\right] = -(D + L)^{-1}L^T x + (D + L)^{-1}h$$

is the expression of the Gauss-Seidel linear iterative solver update to solve the system $J\mu = h$ for a given pair (h, J). The single-site Gaussian Gibbs sampler can therefore be interpreted as a stochastic version of the Gauss-Seidel linear solver. This connection has been noted by [10] and [11], and later exploited by [3] to analyse the Hogwild Gibbs sampler and by [6] to derive a family of Gaussian Gibbs samplers.

The Gauss-Seidel iterative solver is just a particular example of a larger class of matrix splitting solvers [12]. In general, consider the linear system $J\mu = h$ and the matrix splitting $J = M - N$, where M is invertible. Gauss-Seidel corresponds to setting $M = D + L$ and $N = -L^T$. More generally, [6] established that the Markov chain with transition

$$X^{(i+1)} = M^{-1}N X^{(i)} + M^{-1}Z^{(i+1)}, \quad Z^{(i+1)} \sim \mathcal{N}(h, M^T + N) \tag{2}$$

admits $\pi$ as stationary distribution if and only if the associated iterative solver with update $x^{(i+1)} = M^{-1}N x^{(i)} + M^{-1}h$ is convergent; that is, if and only if $\rho(M^{-1}N) < 1$, where $\rho$ denotes the spectral radius. Using this result, [6] built on the large literature on linear iterative solvers in order to derive generalized Gibbs samplers with the correct Gaussian target distribution, extending the approaches proposed by [10, 11, 13].
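To make the matrix splitting view concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) of the generic sampler (2); with the Gauss-Seidel splitting $M = D + L$, $N = -L^T$ it performs exactly the single-site Gibbs sweep (1). The function and variable names are ours, and the dense Cholesky factorization of $M^T + N$ is for small illustrative problems only.

```python
import numpy as np

def matrix_splitting_sampler(J, h, M, N, n_iter, rng, x0=None):
    """Sampler of Eq. (2): X <- M^{-1}(N X + Z), Z ~ N(h, M^T + N).
    If rho(M^{-1} N) < 1, the chain targets N(mu, Sigma) with
    Sigma = J^{-1} and mu = Sigma h."""
    d = J.shape[0]
    chol = np.linalg.cholesky(M.T + N)   # noise covariance of Eq. (2)
    x = np.zeros(d) if x0 is None else x0.copy()
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        z = h + chol @ rng.standard_normal(d)
        x = np.linalg.solve(M, N @ x + z)
        samples[i] = x
    return samples

# Gauss-Seidel splitting: M = D + L, N = -L^T, hence M^T + N = D and the
# update above is the single-site Gibbs sweep of Eq. (1).
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
J = A @ A.T + d * np.eye(d)              # toy precision matrix
h = rng.standard_normal(d)
D, L = np.diag(np.diag(J)), np.tril(J, k=-1)
samples = matrix_splitting_sampler(J, h, D + L, -L.T, 5000, rng)
print(samples.mean(axis=0))              # close to mu = J^{-1} h
print(np.linalg.solve(J, h))
```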
The practicality of the iterative samplers with transition (2) and matrix splitting (M, N) depends on:

- how easy it is to solve the system $Mx = r$ for any r;
- how easy it is to sample from $\mathcal{N}(0, M^T + N)$.

As noted by [6], there is a necessary trade-off here. The Jacobi splitting $M = D$ would lead to a simple solution to the linear system, but sampling from a Gaussian distribution with covariance matrix $M^T + N$ would be as complicated as solving the original sampling problem. The Gauss-Seidel splitting $M = D + L$ provides an interesting trade-off, as $Mx = r$ can be solved by substitution and $M^T + N = D$ is diagonal. The method of successive over-relaxation (SOR) uses the splitting $M = \omega^{-1}D + L$ with an additional tuning parameter $\omega > 0$. In both the SOR and Gauss-Seidel cases, the system $Mx = r$ can be solved by substitution in $O(d^2)$ operations, but the resolution of the linear system cannot be parallelized.

All the methods discussed so far asymptotically sample from the correct target distribution. The Hogwild Gaussian Gibbs sampler does not, but its properties can also be analyzed using techniques from the linear iterative solver literature, as demonstrated by [3]. For simplicity of exposition, we focus here on the Hogwild sampler with blocks of size 1. In this case, the Hogwild algorithm simulates a Markov chain with transition

$$X^{(i+1)} = M_{\text{Hog}}^{-1} N_{\text{Hog}} X^{(i)} + M_{\text{Hog}}^{-1} Z^{(i+1)}, \quad Z^{(i+1)} \sim \mathcal{N}(h, M_{\text{Hog}})$$

where $M_{\text{Hog}} = D$ and $N_{\text{Hog}} = -(L + L^T)$. This update is highly amenable to parallelization as $M_{\text{Hog}}$ is diagonal, thus one can easily solve the system $M_{\text{Hog}}x = r$ and sample from $\mathcal{N}(0, M_{\text{Hog}})$. [3] showed that if $\rho(M_{\text{Hog}}^{-1} N_{\text{Hog}}) < 1$, the Markov chain admits $\mathcal{N}(\mu, \widetilde{\Sigma})$ as stationary distribution, where $\widetilde{\Sigma} = (I + M_{\text{Hog}}^{-1} N_{\text{Hog}})^{-1}\Sigma$.

The above approach can be generalized to blocks of larger sizes. However, beyond the block size, the Hogwild sampler does not have any tunable parameter allowing us to modify its incorrect stationary distribution. Depending on the computational budget, we may want to trade bias for variance. In the next section, we describe our approach, which offers such flexibility.

## 3 High-dimensional Gaussian sampling

Let $J = M - N$ be a matrix splitting, with M positive semi-definite. Consider the Markov chain $(X^{(i)})_{i=1,2,\ldots}$ with initial state $X^{(0)}$ and transition

$$X^{(i+1)} = M^{-1}N X^{(i)} + M^{-1}Z^{(i+1)}, \quad Z^{(i+1)} \sim \mathcal{N}(h, 2M). \tag{3}$$

The following theorem shows that, if the corresponding iterative solver converges, the Markov chain converges to a Gaussian distribution with the correct mean and an approximate covariance matrix.

**Theorem 1.** If $\rho(M^{-1}N) < 1$, the Markov chain $(X^{(i)})_{i=1,2,\ldots}$ defined by (3) has stationary distribution $\mathcal{N}(\mu, \widetilde{\Sigma})$ where

$$\widetilde{\Sigma} = 2\left(I + M^{-1}N\right)^{-1}\Sigma = \left(I - \tfrac{1}{2}M^{-1}\Sigma^{-1}\right)^{-1}\Sigma.$$

*Proof.* The equivalence between the convergence of the iterative linear solvers and their stochastic counterparts was established in [6, Theorem 1]. The mean $\widetilde{\mu}$ of the stationary distribution verifies the recurrence

$$\widetilde{\mu} = M^{-1}N\widetilde{\mu} + M^{-1}\Sigma^{-1}\mu \;\Leftrightarrow\; (I - M^{-1}N)\widetilde{\mu} = M^{-1}\Sigma^{-1}\mu \;\Leftrightarrow\; \widetilde{\mu} = \mu$$

as $\Sigma^{-1} = M - N$. For the covariance matrix, consider the 2d-dimensional Gaussian random variable

$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix} \mu \\ \mu \end{pmatrix}, \begin{pmatrix} M/2 & -N/2 \\ -N/2 & M/2 \end{pmatrix}^{-1}\right). \tag{4}$$

Then, using standard manipulations of multivariate Gaussians and the inversion lemma on block matrices, we obtain

$$Y_1 \mid Y_2 \sim \mathcal{N}\left(M^{-1}N Y_2 + M^{-1}h,\; 2M^{-1}\right), \quad Y_2 \mid Y_1 \sim \mathcal{N}\left(M^{-1}N Y_1 + M^{-1}h,\; 2M^{-1}\right),$$
$$Y_1 \sim \mathcal{N}(\mu, \widetilde{\Sigma}), \quad Y_2 \sim \mathcal{N}(\mu, \widetilde{\Sigma}).$$

Since the transition kernel in (3) is precisely the conditional distribution of $Y_1$ given $Y_2$, and the marginals of $Y_1$ and $Y_2$ coincide, $\mathcal{N}(\mu, \widetilde{\Sigma})$ is stationary for (3); the expression of $\widetilde{\Sigma}$ follows from the block matrix inversion.

The above proof is not constructive, and we give in Section 4 the intuition behind the choice of the transition and the name clone MCMC. We will focus here on the following matrix splitting

$$M = D + 2\eta I, \quad N = 2\eta I - L - L^T \tag{5}$$

where $\eta \ge 0$.
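For concreteness, here is a minimal NumPy sketch (our own illustration, not code from the paper) of one way to implement the update (3) with the splitting (5). Since M is diagonal, the noise is drawn elementwise and the only $O(d^2)$ operation is the product $Nx$; the function and variable names are ours.

```python
import numpy as np

def clone_mcmc_gaussian(J, h, eta, n_iter, rng, x0=None):
    """Update (3) with splitting (5): M = D + 2*eta*I, N = M - J.
    Each iteration costs one matrix-vector product plus O(d) work."""
    d = J.shape[0]
    m = np.diag(J) + 2.0 * eta               # diagonal of M
    N = np.diag(m) - J                       # N = 2*eta*I - L - L^T
    x = np.zeros(d) if x0 is None else x0.copy()
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        z = h + np.sqrt(2.0 * m) * rng.standard_normal(d)   # Z ~ N(h, 2M)
        x = (N @ x + z) / m                  # M^{-1}(N x + Z), M diagonal
        samples[i] = x
    return samples

# Small illustration: the chain targets N(mu, Sigma_tilde), not N(mu, Sigma).
rng = np.random.default_rng(1)
J = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
Sigma = np.linalg.inv(J)
X = clone_mcmc_gaussian(J, np.zeros(3), eta=5.0, n_iter=200000, rng=rng)
M = np.diag(np.diag(J) + 2.0 * 5.0)
Sigma_tilde = 2.0 * np.linalg.inv(np.eye(3) + np.linalg.solve(M, M - J)) @ Sigma
print(np.cov(X[10000:].T))                   # close to Sigma_tilde
print(Sigma_tilde)
```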
Under this matrix splitting, M is a diagonal matrix and an iteration only involves a matrix-vector multiplication of computational cost $O(d^2)$. This operation can be easily parallelized. Each update thus has the same computational complexity as the Hogwild algorithm. We have

$$\widetilde{\Sigma} = \left(I - \tfrac{1}{2}(D + 2\eta I)^{-1}\Sigma^{-1}\right)^{-1}\Sigma.$$

Since $M^{-1} \to 0$ and $M^{-1}N \to I$ as $\eta \to \infty$, we have

$$\lim_{\eta \to \infty} \widetilde{\Sigma} = \Sigma, \quad \lim_{\eta \to \infty} \rho(M^{-1}N) = 1.$$

The parameter $\eta$ is an easily interpretable tuning parameter for the method: as $\eta$ increases, the stationary distribution of the Markov chain becomes closer to the target distribution, but the samples become more correlated.

For example, consider the target precision matrix $J = \Sigma^{-1}$ with $J_{ii} = 1$, $J_{ij} = 1/(d+1)$ for $i \neq j$, and $d = 1000$. The proposed sampler is run for different values of $\eta$ in order to estimate the covariance matrix $\Sigma$. Let $\hat{\Sigma} = \frac{1}{n_s}\sum_{i=1}^{n_s}(X^{(i)} - \hat{\mu})(X^{(i)} - \hat{\mu})^T$ be the estimated covariance matrix, where $\hat{\mu} = \frac{1}{n_s}\sum_{i=1}^{n_s} X^{(i)}$ is the estimated mean. Figure 1(a) reports the bias term $\|\Sigma - \widetilde{\Sigma}\|$, the variance term $\|\hat{\Sigma} - \widetilde{\Sigma}\|$ and the overall error $\|\Sigma - \hat{\Sigma}\|$ as a function of $\eta$, using $n_s = 10000$ samples and 100 replications, with $\|\cdot\|$ the $\ell_2$ (Frobenius) norm. As $\eta$ increases, the bias term decreases while the variance term increases, yielding an optimal value at $\eta \approx 10$. Figures 1(b-c) show the estimation error for the mean and covariance matrix as a function of $\eta$, for different sample sizes. Figure 2 shows the estimation error as a function of the sample size for different values of $\eta$.

The following theorem gives a sufficient condition for the Markov chain to converge for any value of $\eta$.

**Theorem 2.** Let $M = D + 2\eta I$, $N = 2\eta I - L - L^T$. A sufficient condition for $\rho(M^{-1}N) < 1$ for all $\eta \ge 0$ is that $\Sigma^{-1}$ is strictly diagonally dominant.

*Proof.* M is nonsingular, hence $\det(M^{-1}N - \lambda I) = 0 \Leftrightarrow \det(N - \lambda M) = 0$. $\Sigma^{-1} = M - N$ is strictly diagonally dominant, hence $\lambda M - N = (\lambda - 1)M + M - N$ is also strictly diagonally dominant for any $\lambda \ge 1$. From Gershgorin's theorem, a strictly diagonally dominant matrix is nonsingular, so $\det(N - \lambda M) \neq 0$ for all $\lambda \ge 1$. We conclude that $\rho(M^{-1}N) < 1$.

Figure 1: Influence of the tuning parameter $\eta$ on the estimation error. Panels: (a) $n_s = 20000$; (b) $\|\Sigma - \hat{\Sigma}\|$; (c) $\|\mu - \hat{\mu}\|$.

Figure 2: Influence of the sample size on the estimation error. Panels: (a) $\|\Sigma - \hat{\Sigma}\|$; (b) $\|\mu - \hat{\mu}\|$.

## 4 Clone MCMC

We now provide some intuition on the construction given in Section 3, and justify the name given to the method. The joint pdf of $(Y_1, Y_2)$ on $\mathbb{R}^{2d}$ defined in (4), with the matrix splitting (5), can be expressed as

$$\widetilde{\pi}_\eta(y_1, y_2) \propto \exp\Big\{-\tfrac{\eta}{2}(y_1 - y_2)^T(y_1 - y_2) - \tfrac{1}{4}(y_1 - \mu)^T D (y_1 - \mu) - \tfrac{1}{4}(y_1 - \mu)^T(L + L^T)(y_2 - \mu) - \tfrac{1}{4}(y_2 - \mu)^T D (y_2 - \mu) - \tfrac{1}{4}(y_2 - \mu)^T(L + L^T)(y_1 - \mu)\Big\}.$$

We can interpret the joint pdf above as having cloned the original random variable X into two dependent random variables $Y_1$ and $Y_2$. The parameter $\eta$ tunes the correlation between the two clones, and $\widetilde{\pi}_\eta(y_1 \mid y_2) = \prod_{k=1}^{d} \widetilde{\pi}_\eta(y_{1k} \mid y_2)$, which allows for straightforward parallelization of the method. As $\eta \to \infty$, the clones become more and more correlated, with $\operatorname{corr}(Y_1, Y_2) \to 1$ and $\widetilde{\pi}_\eta(y_1) \to \pi(y_1)$.

The idea can be generalized further to pairwise Markov random fields. Consider the target distribution

$$\pi(x) \propto \prod_{1 \le i \le j \le d} \psi_{ij}(x_i, x_j)$$

for some potential functions $\psi_{ij}$, $1 \le i \le j \le d$. The clone pdf is

$$\widetilde{\pi}(y_1, y_2) \propto \exp\Big\{-\tfrac{\eta}{2}(y_1 - y_2)^T(y_1 - y_2)\Big\} \prod_{1 \le i \le j \le d} \big(\psi_{ij}(y_{1i}, y_{2j})\,\psi_{ij}(y_{2i}, y_{1j})\big)^{1/2}$$

and $\widetilde{\pi}(y_1 \mid y_2) = \prod_{k=1}^{d} \widetilde{\pi}(y_{1k} \mid y_2)$. Assuming $\widetilde{\pi}$ is a proper pdf, we have $\widetilde{\pi}(y_1) \to \pi(y_1)$ as $\eta \to \infty$.

Figure 3: Estimation error for the covariance matrix $\Sigma_1$ for fixed computation time, d = 1000.

Figure 4: Estimation error for the covariance matrix $\Sigma_2$ for fixed computation time, d = 1000.
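As a small numerical sanity check of the clone construction (our own, not from the paper), the following verifies that the marginal covariance of $Y_1$ under the joint Gaussian in (4), with the splitting (5), coincides with the $\widetilde{\Sigma}$ of Theorem 1, using the toy precision matrix of Section 3 with a smaller dimension.

```python
import numpy as np

d, eta = 50, 3.0
J = np.full((d, d), 1.0 / (d + 1))        # J_ij = 1/(d+1) for i != j
np.fill_diagonal(J, 1.0)                  # J_ii = 1 (strictly diag. dominant)
Sigma = np.linalg.inv(J)

M = np.diag(np.diag(J)) + 2.0 * eta * np.eye(d)   # splitting (5)
N = M - J                                         # N = 2*eta*I - L - L^T

# Joint precision of the clones (Y1, Y2) as in (4)
P = np.block([[ M / 2.0, -N / 2.0],
              [-N / 2.0,  M / 2.0]])
cov_Y1 = np.linalg.inv(P)[:d, :d]                 # marginal covariance of Y1

Sigma_tilde = 2.0 * np.linalg.inv(np.eye(d) + np.linalg.solve(M, N)) @ Sigma
print(np.allclose(cov_Y1, Sigma_tilde))           # True (Theorem 1)
print(np.linalg.norm(Sigma - Sigma_tilde))        # bias; decreases as eta grows
```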
## 5 Comparison with Hogwild and Gibbs sampling

In this section, we provide an empirical comparison of the proposed approach with the Gibbs sampler and the Hogwild algorithm, using the splitting (5). Note that, in order to provide a fair comparison between the algorithms, we only consider the single-site Gibbs sampler and the block-1 Hogwild algorithm, whose updates are respectively given in Equation (1) and in Section 2. Versions of all three algorithms could also be developed with blocks of larger sizes. We consider the following two precision matrices:

$$\Sigma_1^{-1} = \begin{pmatrix} 1 & -\alpha & & & \\ -\alpha & 1+\alpha^2 & -\alpha & & \\ & \ddots & \ddots & \ddots & \\ & & -\alpha & 1+\alpha^2 & -\alpha \\ & & & -\alpha & 1 \end{pmatrix}, \qquad \Sigma_2^{-1} = \begin{pmatrix} 1 & 0.3 & 0.15 & & \\ 0.3 & 1 & 0.3 & 0.15 & \\ 0.15 & 0.3 & 1 & 0.3 & \ddots \\ & 0.15 & 0.3 & 1 & \ddots \\ & & \ddots & \ddots & \ddots \end{pmatrix}$$

where for the first precision matrix we have $\alpha = 0.95$.

Experiments are run on a GPU with 2688 CUDA cores. In order to compare the algorithms, we run each algorithm for a fixed execution time (10s, 80s and 120s). The computation time per iteration is similar for Hogwild and clone MCMC, and the two algorithms return a similar number of samples. The computation time per iteration of the Gibbs sampler is much higher, due to the lack of parallelization, and it returns fewer samples. For Hogwild and clone MCMC, we report both the approximation error $\|\Sigma - \widetilde{\Sigma}\|$ and the estimation error $\|\Sigma - \hat{\Sigma}\|$. For Gibbs, only the estimation error is reported. Figures 3 and 4 show that, for a range of values of $\eta$, our method outperforms both Hogwild and Gibbs, whatever the execution time. As the computational budget increases, the optimal value for $\eta$ increases.

## 6 Application to image inpainting-deconvolution

In order to demonstrate the usefulness of the approach, we consider an application to image inpainting-deconvolution. Let

$$Y = THX + B, \quad B \sim \mathcal{N}(0, \Sigma_b) \tag{6}$$

be the observation model, where $Y \in \mathbb{R}^n$ is the observed image, $X \in \mathbb{R}^d$ is the true image, $B \in \mathbb{R}^n$ is the noise component, $H \in \mathbb{R}^{d \times d}$ is the convolution matrix and $T \in \mathbb{R}^{n \times d}$ is the truncation matrix. The observation noise is assumed to be independent of X, with $\Sigma_b^{-1} = \gamma_b I$ and $\gamma_b = 10^{-2}$. The prior distribution on X is assumed to be Gaussian with precision matrix $\Sigma_x^{-1} = \gamma_0 \mathbf{1}_d \mathbf{1}_d^T + \gamma_1 C C^T$, wherein $\mathbf{1}_d$ is a column vector of size d having all elements equal to 1/d, C is the block-Toeplitz convolution matrix corresponding to the 2D Laplacian filter and $\gamma_0 = \gamma_1 = 10^{-2}$. The objective is to sample from the posterior distribution

$$X \mid Y = y \sim \mathcal{N}(\mu_{x|y}, \Sigma_{x|y}), \quad \Sigma_{x|y}^{-1} = H^T T^T \Sigma_b^{-1} T H + \Sigma_x^{-1}, \quad \mu_{x|y} = \Sigma_{x|y} H^T T^T \Sigma_b^{-1} y.$$

Figure 5: Deconvolution-interpolation results. (a) True image; (b) observed image; (c) posterior mean (optimization); (d) posterior mean (clone MCMC).

The true unobserved image is of size 1000 x 1000, hence the posterior distribution corresponds to a random variable of size $d = 10^6$. We have considered that 20% of the pixels are not observed. The true image is given in Figure 5(a); the observed image is given in Figure 5(b). In this high-dimensional setting with $d = 10^6$, direct sampling via Cholesky decomposition and the standard single-site Gibbs sampler are not applicable. We have implemented the block-1 Hogwild algorithm; however, in this scenario the algorithm diverges, which is certainly due to the fact that the spectral radius of $M_{\text{Hog}}^{-1} N_{\text{Hog}}$ is greater than 1. We run our clone MCMC algorithm for $n_s = 19000$ samples, out of which the first 4000 were discarded as burn-in samples, using the observed image as initialization, with missing entries set to zero. The tuning parameter $\eta$ is set to 1. Figure 5(c) contains the reconstructed image obtained by numerically maximizing the posterior distribution using gradient ascent.
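At this scale the posterior precision $J = H^T T^T \Sigma_b^{-1} T H + \Sigma_x^{-1}$ is never formed explicitly: the clone MCMC update (3) with splitting (5) only requires products $Jx$ (computable, for instance, with FFT-based convolutions) and the diagonal of J. The sketch below is our own matrix-free formulation under these assumptions; `apply_J` and `diag_J` are hypothetical user-supplied quantities, not code from the paper.

```python
import numpy as np

def clone_mcmc_matrix_free(apply_J, diag_J, h, eta, n_iter, rng, x0):
    """Matrix-free clone MCMC: update (3) with the splitting (5).

    apply_J(x) must return J @ x (e.g. via FFT-based convolutions for the
    inpainting-deconvolution posterior); diag_J is the diagonal of J.
    Memory use is O(d) on top of whatever apply_J requires."""
    m = diag_J + 2.0 * eta                   # diagonal of M = D + 2*eta*I
    x = x0.copy()
    for _ in range(n_iter):
        z = h + np.sqrt(2.0 * m) * rng.standard_normal(x.shape)  # Z ~ N(h, 2M)
        # N x = (M - J) x = m * x - J x, so one J-product per iteration
        x = (m * x - apply_J(x) + z) / m
        yield x

# Hypothetical usage: posterior-mean estimate via a running average, which
# avoids storing all draws of size d = 10^6.
# chain = clone_mcmc_matrix_free(apply_J, diag_J, h, eta=1.0, n_iter=19000,
#                                rng=np.random.default_rng(4), x0=y_padded)
# mean, kept = np.zeros_like(y_padded), 0
# for i, x in enumerate(chain):
#     if i >= 4000:                          # burn-in
#         kept += 1
#         mean += (x - mean) / kept
```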
We take this optimization-based reconstruction as the reference when evaluating the reconstructed image computed as the posterior mean of the drawn samples. The reconstructed image is given in Figure 5(d). If we compare the restored image with the one obtained by the optimization approach, we can immediately see that the two images are visually very similar. This observation is further reinforced by the top plot of Figure 6, where we have depicted the same line of pixels from both images. The line of pixels that is displayed is indicated by the blue line segments in Figure 5(d). The traces in grey represent the 99% credible intervals. We can see that for most of the pixels, if not all of them, the estimated value lies well within the 99% credible intervals. The bottom plot of Figure 6 displays the estimated image together with the true image for the same line of pixels, showing an accurate estimation of the true image.

Figure 7 shows traces of the Markov chains for 4 selected pixels. Their exact positions are indicated in Figure 5(b). The red marker corresponds to an observed pixel from a region having a mid-grey tone. The green marker corresponds to an observed pixel from a white-tone region. The dark blue marker corresponds to an observed pixel from a dark-tone region. The cyan marker corresponds to an observed pixel from a region having a tone between mid-grey and white.

Figure 6: Line of pixels from the restored image.

Figure 7: Markov chains for selected pixels, clone MCMC.

The choice of $\eta$ can be a sensitive issue for the practical implementation of the algorithm. We empirically observed convergence of our algorithm for any value of $\eta$ greater than 0.075. This is a clear advantage over Hogwild, as our approach is applicable in settings where Hogwild diverges, and it offers an interesting way of controlling the bias/variance trade-off. We plan to investigate methods to automatically choose the tuning parameter $\eta$ in future work.

## References

[1] D. Newman, P. Smyth, M. Welling, and A. Asuncion. Distributed inference for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 1081-1088, 2008.

[2] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, 2011.

[3] M. Johnson, J. Saunderson, and A. Willsky. Analyzing Hogwild parallel Gaussian Gibbs sampling. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2715-2723. Curran Associates, Inc., 2013.

[4] Y. Gel, A. E. Raftery, T. Gneiting, C. Tebaldi, D. Nychka, W. Briggs, M. S. Roulston, and V. J. Berrocal. Calibrated probabilistic mesoscale weather field forecasting: The geostatistical output perturbation method. Journal of the American Statistical Association, 99(467):575-590, 2004.

[5] C. Gilavert, S. Moussaoui, and J. Idier. Efficient Gaussian sampling for solving large-scale inverse problems using MCMC. IEEE Transactions on Signal Processing, 63(1):70-80, January 2015.

[6] C. Fox and A. Parker. Accelerated Gibbs sampling of normal distributions using matrix splittings and polynomials. Bernoulli, 23(4B):3711-3743, 2017.

[7] G. Papandreou and A. L. Yuille. Gaussian sampling by local perturbations. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1858-1866. Curran Associates, Inc., 2010.

[8] F. Orieux, O. Féron, and J. F. Giovannelli.
Sampling high-dimensional Gaussian distributions for general linear inverse problems. IEEE Signal Processing Letters, 19(5):251-254, 2012.

[9] H. Rue. Fast sampling of Gaussian Markov random fields. Journal of the Royal Statistical Society: Series B, 63(2):325-338, 2001.

[10] S. L. Adler. Over-relaxation method for the Monte Carlo evaluation of the partition function for multiquadratic actions. Physical Review D, 23(12):2901, 1981.

[11] P. Barone and A. Frigessi. Improving stochastic relaxation for Gaussian random fields. Probability in the Engineering and Informational Sciences, 4(3):369-389, 1990.

[12] G. Golub and C. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, MD, fourth edition, 2013.

[13] G. O. Roberts and S. K. Sahu. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society: Series B, 59(2):291-317, 1997.