Journal of Machine Learning Research 19 (2018) 1-86. Submitted 3/18; Published 9/18.

Fast MCMC Sampling Algorithms on Polytopes

Yuansi Chen*, yuansi.chen@berkeley.edu
Raaz Dwivedi*, raaz.rsk@berkeley.edu
Martin J. Wainwright, wainwrig@berkeley.edu
Bin Yu, binyu@berkeley.edu

Department of Statistics and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; Voleon Group, Berkeley

Editor: Alexander Rakhlin

*Yuansi Chen and Raaz Dwivedi contributed equally to this work.

© 2018 Chen, Dwivedi, Wainwright and Yu. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v19/18-158.html.

Abstract

We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope. Both random walks are sampling algorithms derived from interior point methods. The former is based on the volumetric-logarithmic barrier introduced by Vaidya, whereas the latter uses John's ellipsoids. We show that the Vaidya walk mixes in significantly fewer steps than the logarithmic-barrier-based Dikin walk studied in past work. For a polytope in $\mathbb{R}^d$ defined by $n > d$ linear constraints, we show that the mixing time from a warm start is bounded as $\mathcal{O}(n^{0.5} d^{1.5})$, compared to the $\mathcal{O}(nd)$ mixing time bound for the Dikin walk. The cost of each step of the Vaidya walk is of the same order as that of the Dikin walk, and at most twice as large in terms of constant pre-factors. For the John walk, we prove an $\mathcal{O}\left(d^{2.5} \log^4(n/d)\right)$ bound on its mixing time and conjecture that an improved variant of it could achieve a mixing time of $\mathcal{O}\left(d^2\, \text{poly-log}(n/d)\right)$. Additionally, we propose variants of the Vaidya and John walks that mix in polynomial time from a deterministic starting point. The speed-up of the Vaidya walk over the Dikin walk is illustrated in numerical examples.

Keywords: MCMC methods, interior point methods, polytopes, sampling from convex sets

1. Introduction

Sampling from distributions is a core problem in statistics, probability, operations research, and other areas involving stochastic models (Geman and Geman, 1984; Brémaud, 1991; Ripley, 2009; Hastings, 1970). Sampling algorithms are a prerequisite for applying Monte Carlo methods in order to approximate expectations and other integrals. Recent decades have witnessed great success of Markov chain Monte Carlo (MCMC) algorithms; for instance, see the handbook by Brooks et al. (2011) and references therein. These methods are based on constructing a Markov chain whose stationary distribution is equal to the target distribution, and then drawing samples by simulating the chain for a certain number of steps. An advantage of MCMC algorithms is that they only require knowledge of the target density up to a proportionality constant. However, the theoretical understanding of MCMC algorithms used in practice is far from complete. In particular, a general challenge is to bound the mixing time of a given MCMC algorithm, meaning the number of iterations, as a function of the error tolerance $\delta$, problem dimension $d$, and other parameters, required for the chain to arrive at a distribution within distance $\delta$ of the target. In this paper, we study a certain class of MCMC algorithms designed for the problem of drawing samples from the uniform distribution over a polytope.
The polytope is specified in the form $\mathcal{K} := \{x \in \mathbb{R}^d \mid Ax \leq b\}$, parameterized by the matrix-vector pair $(A, b) \in \mathbb{R}^{n \times d} \times \mathbb{R}^n$. Our goal is to understand the mixing time for obtaining $\delta$-accurate samples, and how it grows as a function of the pair $(n, d)$.

The problem of sampling uniformly from a polytope is important in various applications and methodologies. For instance, it underlies various methods for computing randomized approximations to polytope volumes. There is a long line of work on sampling methods being used to obtain randomized approximations to the volumes of polytopes and other convex bodies (see, e.g., Lovász and Simonovits, 1990; Lawrence, 1991; Bélisle et al., 1993; Lovász, 1999; Cousins and Vempala, 2014). Polytope sampling is also useful in developing fast randomized algorithms for convex optimization (Bertsimas and Vempala, 2004) and sampling contingency tables (Kannan and Narayanan, 2012), as well as in randomized methods for approximately solving mixed integer convex programs (Huang and Mehrotra, 2013, 2015). Sampling from polytopes is also related to simulations of the hard-disk model in statistical physics (Kapfer and Krauth, 2013), as well as to simulations of error events for linear programming in communication (Feldman et al., 2005).

Many MCMC algorithms have been studied for sampling from polytopes, and more generally, from convex bodies. Some early examples include the ball walk (Lovász and Simonovits, 1990) and the hit-and-run algorithm (Bélisle et al., 1993; Lovász, 1999), which apply to sampling from general convex bodies. Although these algorithms can be applied to polytopes, they do not exploit any special structure of the problem. In contrast, the Dikin walk introduced by Kannan and Narayanan (2012) is specialized to polytopes, and thus can achieve faster convergence rates than generic algorithms. The Dikin walk was the first sampling algorithm based on a connection to interior point methods for solving linear programs. More specifically, as we discuss in detail below, it constructs proposal distributions based on the standard logarithmic barrier for a polytope. In a later paper, Narayanan (2016) extended the Dikin walk to general convex sets equipped with self-concordant barriers.

For a polytope defined by $n$ constraints, Kannan and Narayanan (2012) proved an upper bound on the mixing time of the Dikin walk that scales linearly with $n$. In many applications, the number of constraints $n$ can be much larger than the number of variables $d$. For example, one could use many hyperplane constraints to approximate a complicated convex set such as a sphere or an ellipsoid. For such problems, linear dependence on the number of constraints is not desirable. Consequently, it is natural to ask whether it is possible to design a sampling algorithm whose mixing time scales in a sub-linear manner with the number of constraints. Our main contribution is to investigate and answer this question in the affirmative, in particular by designing and analyzing two sampling algorithms with provably faster convergence rates than the Dikin walk, while retaining its advantages over the ball walk and hit-and-run methods.

Our contributions: We introduce and analyze a new random walk, which we refer to as the Vaidya walk since it is based on the volumetric-logarithmic barrier introduced by Vaidya (1989).
We show that for a polytope in $\mathbb{R}^d$ defined by $n$ constraints, the Vaidya walk mixes in $\mathcal{O}(n^{1/2} d^{3/2})$ steps, whereas the Dikin walk (Kannan and Narayanan, 2012) has mixing time bounded as $\mathcal{O}(nd)$. So the Vaidya walk is better in the regime $n \gg d$. We also propose the John walk, which is based on the John ellipsoid algorithm in optimization. We show that the John walk has a mixing time of $\mathcal{O}\left(d^{2.5}\log^4(n/d)\right)$ and conjecture that a variant of it could achieve $\mathcal{O}\left(d^2\,\text{poly-log}(n/d)\right)$ mixing time. We show that, when compared to the Dikin walk, the per-iteration computational complexities of the Vaidya walk and the John walk are larger only by a constant factor and by a factor poly-logarithmic in $n/d$, respectively. Thus, in the regime $n \gg d$, the overall upper bound on the complexity of generating an approximately uniform sample decreases in the order Dikin walk, Vaidya walk, John walk.

The remainder of the paper is organized as follows. In Section 2, we discuss several polynomial-time random walks on convex sets and polytopes, and motivate the starting point for the new random walks. In Section 3, we introduce the new random walks, state bounds on their rates of convergence, and provide a sketch of the proof in Section 3.5. We discuss the computational complexity of the different random walks and demonstrate the contrast between them on several illustrative examples in Section 4. We present the proof of the mixing time bound for the Vaidya walk in Section 5 and defer the analysis of the John walk to the appendix. We conclude with possible extensions of our work in Section 6.

Notation: For two sequences $a_\delta$ and $b_\delta$ indexed by $\delta \in I \subseteq \mathbb{R}$, we say that $a_\delta = \mathcal{O}(b_\delta)$ if there exists a universal constant $C > 0$ such that $a_\delta \leq C\, b_\delta$ for all $\delta \in I$. For a set $\mathcal{K} \subseteq \mathbb{R}^d$, the sets $\text{int}(\mathcal{K})$ and $\mathcal{K}^c$ denote the interior and complement of $\mathcal{K}$, respectively. We denote the boundary of the set $\mathcal{K}$ by $\partial\mathcal{K}$. The Euclidean norm of a vector $x \in \mathbb{R}^d$ is denoted by $\|x\|_2$. For any square matrix $M$, we use $\det(M)$ and $\text{trace}(M)$ to denote the determinant and the trace of $M$, respectively. For two distributions $P_1$ and $P_2$ defined on the same probability space $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$, their total-variation (TV) distance is denoted by $\|P_1 - P_2\|_{TV}$ and is defined as

$$\|P_1 - P_2\|_{TV} = \sup_{A \in \mathcal{B}(\mathcal{X})} |P_1(A) - P_2(A)|.$$

Furthermore, if $P_1$ is absolutely continuous with respect to $P_2$, then the Kullback-Leibler (KL) divergence from $P_2$ to $P_1$ is defined as $\text{KL}(P_1 \,\|\, P_2) = \int_{\mathcal{X}} \log\left(\frac{dP_1}{dP_2}\right) dP_1$.

2. Background and problem set-up

In this section, we describe general MCMC algorithms and review the rates of convergence of existing random walks on convex sets. After introducing several random walks studied in past work, we introduce the Vaidya and John walks studied in this paper.

2.1 Markov chains and mixing

Suppose that we are interested in drawing samples from a target distribution $\pi^*$ supported on a subset $\mathcal{X}$ of $\mathbb{R}^d$. A broad class of methods is based on first constructing a discrete-time Markov chain that is irreducible and aperiodic, and whose stationary distribution is equal to $\pi^*$, and then simulating this Markov chain for a certain number of steps $k$. As we describe below, the number of steps $k$ to be taken is determined by a mixing time analysis. In this paper, we consider the class of Markov chains of Metropolis-Hastings type (Metropolis et al., 1953; Hastings, 1970); see the books by Robert (2004) and Brooks et al. (2011), as well as references therein, for further background.
Any such chain is specified by an initial density $\pi^0$ over the set $\mathcal{X}$, and a proposal function $p : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$, where $p(x, \cdot)$ is a density function for each $x \in \mathcal{X}$. At each time, given a current state $x \in \mathcal{X}$ of the chain, the algorithm first proposes a new vector $z \in \mathcal{X}$ by sampling from the proposal density $p(x, \cdot)$. It then accepts $z \in \mathcal{X}$ as the new state of the Markov chain with probability

$$\alpha(x, z) := \min\left\{1, \frac{\pi^*(z)\, p(z, x)}{\pi^*(x)\, p(x, z)}\right\}. \tag{1}$$

Otherwise, with probability equal to $1 - \alpha(x, z)$, the chain stays at $x$. Thus, the overall transition kernel of the Markov chain is defined by the function $q(x, z) := p(x, z)\,\alpha(x, z)$ for $z \neq x$, and a probability mass at $x$ with weight $1 - \int_{\mathcal{X}} q(x, z)\, dz$. The purpose of the Metropolis-Hastings correction (1) is to ensure that the target distribution $\pi^*$ satisfies the detailed balance condition, meaning that

$$q(y, x)\,\pi^*(x) = q(x, y)\,\pi^*(y) \quad \text{for all } x, y \in \mathcal{X}. \tag{2}$$

It is straightforward to verify that the detailed balance condition (2) implies that the target density $\pi^*$ is stationary for the Markov chain. Throughout this paper, we analyze the lazy version of the Markov chain, defined as follows: when at state $x$, with probability $1/2$ the walk stays at $x$, and with probability $1/2$ it makes a transition as per the original random walk. Given that the Markov chains discussed in this paper are also irreducible, the laziness ensures uniqueness of the stationary distribution.

Overall, this set-up defines an operator $\mathcal{T}_p$ on the space of probability distributions: given an initial distribution $\mu_0$ with $\text{supp}(\mu_0) \subseteq \text{supp}(\pi^*)$, it generates a new distribution $\mathcal{T}_p(\mu_0)$, corresponding to the distribution of the chain at the next step. Moreover, for any positive integer $k = 1, 2, \ldots$, the distribution $\mu_k$ of the chain at time $k$ is given by $\mathcal{T}_p^k(\mu_0)$, where $\mathcal{T}_p^k$ denotes the composition of $\mathcal{T}_p$ with itself $k$ times. Furthermore, the transition distribution at any state $x$ is given by $\mathcal{T}_p(\delta_x)$, where $\delta_x$ denotes the Dirac delta distribution with unit mass at $x$.

Given our assumptions and set-up, we are guaranteed that $\lim_{k \to \infty} \mathcal{T}_p^k(\mu_0) = \pi^*$; that is, if we were to run the chain for an infinite number of steps, then we would draw a sample from the target distribution $\pi^*$. In practice, however, any algorithm will be run only for a finite number of steps, which suffices to ensure only that the distribution from which the sample has been drawn is close to the target $\pi^*$. In order to quantify this closeness, for a given tolerance parameter $\delta \in (0, 1)$, we define the $\delta$-mixing time as

$$k_{\text{mix}}(\delta; \mu_0) := \min\left\{k \;\middle|\; \left\|\mathcal{T}_p^k(\mu_0) - \pi^*\right\|_{TV} \leq \delta\right\}, \tag{3}$$

corresponding to the first time that the chain's distribution is within $\delta$ in TV norm of the target distribution, given that it starts with distribution $\mu_0$.

In the analysis of Markov chains, it is convenient to have a rough measure of the distance between the initial distribution $\mu_0$ and the stationary distribution. Warmness is one such measure: for a finite scalar $M$, the initial distribution $\mu_0$ is said to be $M$-warm with respect to the stationary distribution $\pi^*$ if

$$\sup_{S} \frac{\mu_0(S)}{\pi^*(S)} \leq M, \tag{Warm-Start}$$

where the supremum is taken over all measurable sets $S$. A number of mixing time guarantees from past work (Lovász, 1999; Vempala, 2005) are stated in terms of this notion of $M$-warmness, and our results make use of it as well. In particular, we provide bounds on the quantity $\sup_{\mu_0 \in \mathcal{P}_M(\pi^*)} k_{\text{mix}}(\delta; \mu_0)$, where $\mathcal{P}_M(\pi^*)$ denotes the set of all distributions that are $M$-warm with respect to $\pi^*$.
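To make the Metropolis-Hastings recipe above concrete, here is a minimal Python sketch of one lazy Metropolis-Hastings transition of the form (1). The callables `propose`, `proposal_density`, and `target_density`, and all names, are illustrative placeholders of ours, not part of the paper's released code.

```python
import numpy as np

def mh_step(x, propose, proposal_density, target_density, rng):
    """One Metropolis-Hastings transition from state x.

    propose(x, rng) draws z ~ p(x, .); proposal_density(x, z) evaluates p(x, z);
    target_density evaluates the (possibly unnormalized) target density pi.
    """
    z = propose(x, rng)
    # Acceptance probability alpha(x, z) = min{1, pi(z) p(z, x) / (pi(x) p(x, z))},
    # as in equation (1); the normalization constant of pi cancels in the ratio.
    ratio = (target_density(z) * proposal_density(z, x)) / (
        target_density(x) * proposal_density(x, z)
    )
    if rng.uniform() < min(1.0, ratio):
        return z   # accept the proposal
    return x       # reject: stay at the current state

def lazy_step(x, step, rng):
    # Lazy version: with probability 1/2 stay put, else take an MH step.
    return x if rng.uniform() < 0.5 else step(x, rng)
```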
Naturally, as the value of $M$ decreases, the task of generating samples from the target distribution gets easier. However, access to a warm start may not be feasible in many applications, and thus deriving bounds on the mixing time of the Markov chain from a non-warm start is also desirable. Consequently, we provide modifications of our random walks that mix in polynomial time even from deterministic starting points.

2.2 Sampling from polytopes

In this paper, we consider the problem of drawing a sample uniformly from a polytope. Given a full-rank matrix $A \in \mathbb{R}^{n \times d}$ with $n \geq d$, we consider a polytope $\mathcal{K}$ in $\mathbb{R}^d$ of the form

$$\mathcal{K} := \left\{x \in \mathbb{R}^d \mid Ax \leq b\right\}, \tag{4}$$

where $b \in \mathbb{R}^n$ is a fixed vector. Since the uniform distribution on the polytope $\mathcal{K}$ is the primary target distribution considered in this paper, in the sequel we use $\pi^*$ exclusively to denote the uniform distribution on $\mathcal{K}$. There are various algorithms to sample a vector from the uniform distribution over $\mathcal{K}$, including the ball walk (Lovász and Simonovits, 1990) and hit-and-run algorithms (Lovász, 1999). To be clear, these two algorithms apply to the more general problem of sampling from a convex set; Table 1 shows their complexity, when applied to the polytope $\mathcal{K}$, relative to the Vaidya walk analyzed in this paper. Most closely related to our paper is the Dikin walk proposed by Kannan and Narayanan (2012), and a more general random walk on a Riemannian manifold studied by Narayanan (2016). Both of these random walks, as with the Vaidya and John walks, can be viewed as randomized versions of the interior point methods used to solve linear programs and, more generally, convex programs equipped with suitable barrier functions. In order to motivate the form of the Vaidya and John walks proposed in this paper, we begin by discussing the ball walk and then the Dikin walk. For the sake of completeness, we end the section with a brief description of another popular sampling algorithm, hit-and-run.

Ball walk: The ball walk of Lovász and Simonovits (1990) is simple to describe: when at a point $x \in \mathcal{K}$, it draws a new point $u$ from a Euclidean ball of radius $r > 0$ centered at $x$. Here the radius $r$ is a step-size parameter of the algorithm. If the proposed point $u$ belongs to the polytope $\mathcal{K}$, then the walk moves to $u$; otherwise, the walk stays at $x$. On the one hand, unlike the walks analyzed in this paper, the ball walk applies to any convex set; on the other hand, its mixing time depends on the condition number $\gamma_{\mathcal{K}}$ of the set $\mathcal{K}$, given by

$$\gamma_{\mathcal{K}} = \inf_{R_{\text{in}}, R_{\text{out}} > 0} \left\{\frac{R_{\text{out}}}{R_{\text{in}}} \;\middle|\; \mathbb{B}(x, R_{\text{in}}) \subseteq \mathcal{K} \subseteq \mathbb{B}(y, R_{\text{out}}) \text{ for some } x, y \in \mathcal{K}\right\}. \tag{5}$$

The mixing time of the ball walk has been improved greatly since it was introduced (Kannan et al., 1997, 2006; Lee and Vempala, 2018b). Nonetheless, as shown in Table 1, the mixing time of the ball walk gets slower when the condition number of the set is large; for instance, it scales¹ as $d^6$ for a set with condition number $\gamma_{\mathcal{K}} = d^2$. One approach to tackle bad conditioning is to use rounding as a pre-processing step, where the set is rounded to bring it into a near-isotropic position, i.e., the condition number $\gamma_{\mathcal{K}}$ is reduced before sampling from the set. Nonetheless, these rounding algorithms are themselves based on several rounds of sampling algorithms, and the current best algorithm by Lovász and Vempala (2006b) puts a convex body into approximately isotropic position, i.e., achieves $\mathcal{O}(\sqrt{d})$ rounding, with a running time of $\mathcal{O}(d^4)$, where we have omitted the dependence on log-factors.
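As an illustration of the ball walk just described, here is a short sketch assuming the polytope representation (4) as a NumPy matrix-vector pair `(A, b)`; the function name is ours.

```python
import numpy as np

def ball_walk_step(x, A, b, radius, rng):
    """One step of the ball walk on K = {x : A x <= b}.

    Proposes u uniformly from the Euclidean ball of the given radius centered
    at x; moves there if u lies in K, and otherwise stays at x.
    """
    d = x.shape[0]
    # Uniform point in the ball: random direction scaled by radius * U^(1/d).
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    u = x + radius * rng.uniform() ** (1.0 / d) * direction
    return u if np.all(A @ u <= b) else x
```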
If one has more information about the structure of the convex set (and not just the oracle access required by the ball walk), one can potentially exploit it to design fast sampling algorithms that are unaffected by the conditioning of the set, thereby reducing the need for the (expensive) pre-processing step. One such algorithm is the Dikin walk for polytopes, which we describe next.

Dikin walk: The Dikin walk (Kannan and Narayanan, 2012) is similar in spirit to the ball walk, except that it proposes a point drawn uniformly from a state-dependent ellipsoid known as the Dikin ellipsoid (Dikin, 1967; Nesterov and Nemirovskii, 1994). It then applies an accept-reject step to adjust for the difference in the volumes of these ellipsoids at different states. The state-dependent choice of the ellipsoid allows the Dikin walk to adapt to the boundary structure. A key property of the Dikin ellipsoid of unit radius, in contrast to the Euclidean ball that underlies the ball walk, is that it is always contained within $\mathcal{K}$, as is known from classical results on interior point methods (Nesterov and Nemirovskii, 1994). Furthermore, the Dikin walk is affine invariant, meaning that its behavior does not change under linear transformations of the problem. As a consequence, the Dikin mixing time does not depend on the condition number $\gamma_{\mathcal{K}}$. In a variant of this random walk (Narayanan, 2016), uniform proposals in the ellipsoid are replaced by Gaussian proposals with covariance specified by the ellipsoid, and it is shown that, with high probability, the proposal falls within the polytope.

The Dikin walk is closely related to interior point methods for solving linear programs. In order to understand the Vaidya and John walks, it is useful to understand this connection in more detail. Suppose that our goal is to optimize a convex function over the polytope $\mathcal{K}$. A barrier method is based on converting this constrained optimization problem into a sequence of unconstrained ones, in particular by using a barrier to enforce the linear constraints defining the polytope. Letting $a_i^\top$ denote the $i$-th row vector of the matrix $A$, the logarithmic barrier for the polytope $\mathcal{K}$ is given by the function

$$\mathcal{F}(x) := -\sum_{i=1}^n \log\left(b_i - a_i^\top x\right). \tag{6}$$

For each $i \in [n]$, we define the scalar $s_{x,i} := b_i - a_i^\top x$, and we refer to the vector $s_x := (s_{x,1}, \ldots, s_{x,n})$ as the slackness at $x$. Each step of an interior point algorithm (Boyd and Vandenberghe, 2004) involves (approximately) solving a linear system involving the Hessian of the barrier function, which is given by

$$\nabla^2 \mathcal{F}(x) = \sum_{i=1}^n \frac{a_i a_i^\top}{s_{x,i}^2}. \tag{7}$$

In the Dikin walk (Kannan and Narayanan, 2012), given a current iterate $x$, the algorithm chooses a point uniformly at random from the ellipsoid

$$\left\{u \in \mathbb{R}^d \mid (u - x)^\top D_x (u - x) \leq R\right\}, \tag{8}$$

where $D_x := \nabla^2 \mathcal{F}(x)$ is the Hessian of the log-barrier function, and $R > 0$ is a user-defined radius. In an alternative form of the Dikin walk (Narayanan, 2016; Sachdeva and Vishnoi, 2016), the proposal vector $u \in \mathbb{R}^d$ is drawn randomly from a Gaussian centered at $x$, with covariance equal to a scaled copy of $D_x^{-1}$. Note that in contrast to the ball walk, the proposal distribution now depends on the current state.

1. Very recently, Lee and Vempala (2018b) improved the mixing time of the ball walk for isotropic sets, which have $\gamma_{\mathcal{K}} = \mathcal{O}(\sqrt{d})$, from $\mathcal{O}(d^3)$ to $\mathcal{O}(d^{2.5})$.
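The log-barrier Hessian (7) and the Gaussian variant of the Dikin proposal can be sketched in a few lines. The covariance scaling $r^2/d$ below is one common convention standing in for the "scaled copy of $D_x^{-1}$" mentioned above; it is our assumption, not necessarily the exact constant used by Narayanan (2016).

```python
import numpy as np

def log_barrier_hessian(x, A, b):
    """Hessian of the log-barrier, D_x = sum_i a_i a_i^T / s_{x,i}^2 (equation (7))."""
    s = b - A @ x              # slackness vector s_x; positive for x in int(K)
    A_scaled = A / s[:, None]  # rows a_i / s_{x,i}
    return A_scaled.T @ A_scaled

def dikin_gaussian_proposal(x, A, b, r, rng):
    """Gaussian-type Dikin proposal: z ~ N(x, (r^2 / d) D_x^{-1})."""
    d = x.shape[0]
    Dx = log_barrier_hessian(x, A, b)
    # Sample via the Cholesky factor: if Dx = L L^T, then z = x + (r/sqrt(d)) L^{-T} g
    # has covariance (r^2/d) Dx^{-1} for g ~ N(0, I).
    L = np.linalg.cholesky(Dx)
    g = rng.standard_normal(d)
    return x + (r / np.sqrt(d)) * np.linalg.solve(L.T, g)
```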
Vaidya walk: For the Vaidya walk analyzed in this paper, we instead generate proposals from the ellipsoids defined, for each $x \in \text{int}(\mathcal{K})$, by the positive definite matrix

$$V_x := \sum_{i=1}^n \left(\sigma_{x,i} + \beta_V\right) \frac{a_i a_i^\top}{s_{x,i}^2}, \quad \text{where} \tag{9a}$$

$$\beta_V := \frac{d}{n} \quad \text{and} \quad \sigma_x := \left(\frac{a_1^\top (\nabla^2 \mathcal{F}_x)^{-1} a_1}{s_{x,1}^2}, \ldots, \frac{a_n^\top (\nabla^2 \mathcal{F}_x)^{-1} a_n}{s_{x,n}^2}\right). \tag{9b}$$

The entries of the vector $\sigma_x$ are known as the leverage scores associated with the matrix $\nabla^2 \mathcal{F}_x$ from equation (7), and are commonly used to measure the importance of rows in a linear system (Mahoney, 2011). The matrix $V_x$ is related to the Hessian of the function $x \mapsto \mathcal{V}_x$ given by

$$\mathcal{V}_x := \log \det\left(\nabla^2 \mathcal{F}_x\right) + \beta_V\, \mathcal{F}_x. \tag{10}$$

This particular combination of the volumetric barrier and the logarithmic barrier was introduced by Vaidya (1989) and Vaidya and Atkinson (1993) in the context of interior point methods, hence our name for the resulting random walk.

John walk: We now describe the John walk. For any vector $w \in \mathbb{R}^n$, let $W := \text{diag}(w)$ denote the diagonal matrix with $W_{ii} = w_i$ for each $i \in [n]$. Let $S_x := \text{diag}(s_x)$ denote the slackness matrix at $x$. It is easy to see that $S_x$ is positive semidefinite for all $x \in \mathcal{K}$, and strictly positive definite for all $x \in \text{int}(\mathcal{K})$. The (scaled) inverse covariance matrix underlying the John walk is given by

$$J_x := \sum_{i=1}^n \zeta_{x,i}\, \frac{a_i a_i^\top}{s_{x,i}^2}, \tag{11}$$

where for each $x \in \text{int}(\mathcal{K})$, the weight vector $\zeta_x \in \mathbb{R}^n$ is obtained by solving the convex program

$$\zeta_x := \arg\min_{w \in \mathbb{R}^n} \left\{\sum_{i=1}^n w_i - \frac{1}{\alpha_J} \log \det\left(A^\top S_x^{-1} W^{\alpha_J} S_x^{-1} A\right) - \beta_J \sum_{i=1}^n \log w_i\right\}, \tag{12}$$

with $\beta_J := d/(2n)$ and $\alpha_J := 1 - 1/\log_2(1/\beta_J)$. Lee and Sidford (2014) proposed the convex program (12) associated with the approximate John weights $\zeta_x$, with the aim of searching for the best member of a family of volumetric barrier functions. They analyzed the use of the John weights in the context of speeding up interior point methods for solving linear programs; here we consider them for improving the mixing time of a sampling algorithm. The convex program (12) is closely related to the problem of finding, at any interior point of the polytope, the largest ellipsoid contained within the polytope. This problem of finding the largest ellipsoid was first studied by John (1948), who showed that each convex body in $\mathbb{R}^d$ contains a unique ellipsoid of maximal volume. The convex program (12) was used by Lee and Sidford (2014) to compute approximate John ellipsoids for solving linear programs. In recent work, Gustafson and Narayanan (2018) make use of the exact John ellipsoids to design a polynomial-time sampling algorithm for polytopes. See Table 1 for the associated guarantees.

Hit-and-run: We conclude with a brief discussion of another popular sampling algorithm: hit-and-run. It was introduced by Smith (1984) as a sampling algorithm for general distributions, and it was later shown to have polynomial mixing time for sampling from convex sets (Lovász, 1999; Lovász and Vempala, 2003, 2006a). The algorithm proceeds as follows: when at a point $x$, it first draws a random line through $x$ and then samples from the one-dimensional marginal of the target distribution restricted to this line. For uniform sampling from convex sets, the second step simplifies to drawing a uniform point from the segment of the line inside the convex set. Mixing time bounds for this random walk are summarized in Table 1.

2.3 Mixing time comparisons of the walks

Table 1 provides a summary of the mixing time bounds, the per-step complexity, and the effective per-sample complexity for various random walks, including the Vaidya and John walks analyzed in this paper.
In addition to the ball walk, hit-and-run, and the Dikin, Vaidya and John walks, we also show scalings for the recently introduced Riemannian Hamiltonian Monte Carlo (RHMC) on polytopes by Lee and Vempala (2016), and for the John's walk based on exact John ellipsoids studied by Gustafson and Narayanan (2018). The details of the per-iteration cost for the new random walks are discussed in Section 4.1.

| Random walk | $k_{\text{mix}}(\delta; \mu_0)$ | Iteration cost | Per-sample cost |
|---|---|---|---|
| Ball walk# (Kannan et al., 2006) | $d^2\gamma_{\mathcal{K}}^2$ | $nd$ | $nd^3\gamma_{\mathcal{K}}^2$ |
| Hit-and-run (Lovász and Vempala, 2006a) | $d^2\gamma_{\mathcal{K}}^2$ | $nd$ | $nd^3\gamma_{\mathcal{K}}^2$ |
| Dikin walk (Kannan and Narayanan, 2012) | $nd$ | $nd^2$ | $n^2d^3$ |
| RHMC walk (Lee and Vempala, 2018a) | $nd^{2/3}$ | $nd^2$ | $n^2d^{2.67}$ |
| John's walk (Gustafson and Narayanan, 2018) | $d^7$ | $nd^4 + d^8$ | $nd^{11} + d^{15}$ |
| Vaidya walk (this paper) | $n^{1/2}d^{3/2}$ | $nd^2$ | $n^{1.5}d^{3.5}$ |
| John walk (this paper) | $d^{5/2}\log^4\frac{2n}{d}$ | $nd^2\log^2 n$ | $nd^{4.5}$ |
| Improved John walk (this paper, conjectured) | $d^2\,\kappa_{n,d}$ | $nd^2\log^2 n$ | $nd^4$ |

Table 1. Upper bounds on the computational complexity of random walks on the polytope $\mathcal{K} = \{x \in \mathbb{R}^d \mid Ax \leq b\}$ defined by the matrix-vector pair $(A, b) \in \mathbb{R}^{n \times d} \times \mathbb{R}^n$, from a warm start. For simplicity, here we ignore the logarithmic dependence on the warmness parameter and the tolerance $\delta$. The iteration-cost terms of order $nd^2$ arise from solving linear systems of $n$ equations in $d$ dimensions using standard and numerically stable algorithms; algorithms with the best possible theoretical complexity $nd^{\omega}$ for $\omega < 1.373$ are not numerically stable enough for practical use. #The mixing time of the ball walk was improved to $\mathcal{O}(d^2\gamma_{\mathcal{K}})$ for near-isotropic convex bodies by Lee and Vempala (2018b) during the submission period of this paper. While the ball walk and hit-and-run are affected by the condition number $\gamma_{\mathcal{K}}$ of the set, the Dikin and RHMC walks have quadratic dependence on the number of constraints $n$. The John's walk of Gustafson and Narayanan (2018) (based on exact John ellipsoids) has linear dependence on $n$ but poor dependence on $d$. In contrast, the Vaidya walk has sub-quadratic dependence on $n$ and significantly better dependence on $d$. Furthermore, the John walk (based on approximate John ellipsoids) analyzed in this paper has linear dependence on $n$ with reasonable dependence on the dimension $d$. The mixing time bound for the improved John walk, with poly-logarithmic factor $\kappa_{n,d}$, is conjectured.

We now compare and contrast the complexities of these random walks. Unlike the ball walk or hit-and-run, which are useful for general convex sets, the Dikin, Vaidya, John and RHMC walks are specialized to polytopes. These latter random walks exploit the description of the polytope in such a way that the transition probability from a point $x$ to $y$ does not change under an affine transformation, i.e., $T(x, y) = T(Ax, Ay)$, where $T$ denotes the transition kernel of the random walk. Consequently, the mixing time bounds for these random walks have no dependence on the condition number $\gamma_{\mathcal{K}}$ (5) of the set.

We can see from Table 1 that, compared to the ball walk and hit-and-run, the Vaidya walk mixes significantly faster if $n \lesssim d\,\gamma_{\mathcal{K}}^2$. The condition number $\gamma_{\mathcal{K}}$ of a polytope with polynomially many faces cannot be $\mathcal{O}(d^{\frac{1}{2}-\epsilon})$ for any $\epsilon > 0$, but can be arbitrarily large, even exponential in the dimension $d$ (Kannan and Narayanan, 2012). For such polytopes, the Vaidya walk mixes faster as long as $n \lesssim d^3$ (and even for larger $n$ when $\gamma_{\mathcal{K}}$ is large). It takes $\mathcal{O}(\sqrt{n/d})$ fewer steps than the Dikin walk and thus provides a practical speed-up over the whole range of $d$. From a warm start, the Riemannian Hamiltonian Monte Carlo on polytopes introduced by Lee and Vempala (2016) has $\mathcal{O}(nd^{2/3})$ mixing time, and thus mixes faster (up to constants) than the Vaidya walk (respectively the John walk) when the number of constraints $n$ is bounded as $n \lesssim d^{5/3}$ (respectively $n \lesssim d^{11/6}$). For larger numbers of constraints, the Vaidya and John walks exhibit faster mixing. More generally, it is clear that the rate of the John walk has almost the best order across all the walks for reasonably large values of $n \gtrsim d^2$.

Finally, let us compare the (exact) John's walk due to Gustafson and Narayanan (2018) with the (approximate) John walk studied in our paper. A notable feature of their random walk is that its mixing time is independent of the number of constraints, and its per-iteration cost depends only linearly on the number of constraints. Nonetheless, the dependence on $d$, for both the mixing time ($d^7$) and the per-iteration cost ($nd^4 + d^8$), is quite poor. In contrast, the per-iteration cost for our John walk is $nd^2$ (up to logarithmic factors), and its mixing time has only a poly-logarithmic dependence on $n$.

2.4 Visualization of the three walks' proposal distributions

In order to gain intuition about the three interior-point-based methods, namely the Dikin, Vaidya and John walks, it is helpful to discuss how their underlying proposal distributions change as a function of the current point $x$. All three walks are based on Gaussian proposal distributions with inverse covariance matrices of the general form

$$\sum_{i=1}^n w_{x,i}\, \frac{a_i a_i^\top}{s_{x,i}^2},$$

where $w_{x,i} > 0$ corresponds to a state-dependent weight associated with the $i$-th constraint. The Dikin walk uses the weights $w_{x,i} = 1$; the Vaidya walk uses the weights $w_{x,i} = \sigma_{x,i} + \beta_V$; and the John walk uses the weights $w_{x,i} = \zeta_{x,i}$. For simplicity, we refer to these weights as the Dikin, Vaidya and John weights. The $i$-th weight characterizes the importance of the $i$-th linear constraint in constructing the inverse covariance matrix: a larger value of the weight $w_{x,i}$, relative to the total weight $\sum_{i=1}^n w_{x,i}$, signifies greater importance of the $i$-th linear constraint for the point $x$.

Figure 1a illustrates the differences among the three weights as we move the point $x$ inside the polytope $[-1, 1]^2$. When the point $x$ is in the middle of the unit square formed by the four constraints, all walks assign equal weight to every constraint. When the point $x$ is closer to the bottom-left boundary, the Vaidya and John walks assign larger weights to the bottom and left constraints, while the weights for the top and right constraints decrease. Note that the total sum of the Vaidya weights, and likewise that of the John weights, remains constant independent of the position of the point $x$.

In Figures 1b-2b, we demonstrate that the Vaidya walk and the John walk are better at handling repeated constraints. Note that we can define the square $[-1, 1]^2$ as

$$\left\{x \in \mathbb{R}^2 \mid Ax \leq b\right\}, \quad A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad b = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}.$$

Figure 1. Visualization of the Dikin, Vaidya and John weights on the square with repeated constraints $S_{n/4}$ for the different random walks. The number shown next to each boundary line denotes the effective weight of the corresponding constraint at the location $x$ (marked by a diamond). (a) Weights for different locations and a fixed number of constraints: $n = 4$ is common across rows, and $x = (0, 0)$ for the top row, $(0.9, 0.9)$ for the middle row, and $(-0.9, -0.7)$ for the bottom row.
The Dikin weights are independent of $x$; the Vaidya and John weights for a constraint increase as the location $x$ moves closer to it. (b) Effective weights for a fixed location and different numbers of constraints: $x = (0.85, 0.30)$ is common across rows, and $n = 4$ for the top row, $n = 16$ for the middle row, and $n = 128$ for the bottom row. The effective Dikin weight for each constraint increases linearly with $n$, but for the Vaidya and John walks the weights adjust adaptively, so that the sum of the weights is always of the order of the dimension $d$.

Simply repeating the rows of the matrix $A$ several times changes the mathematical formulation of the polytope, but does not change its shape. We define the square with constraints repeated $n/4$ times, denoted $S_{n/4}$, as

$$S_{n/4} := \left\{x \in \mathbb{R}^2 \mid A_{n/4}\, x \leq b_{n/4}\right\},$$

where $A_{n/4}$ and $b_{n/4}$ are obtained by stacking $n/4$ copies of the matrix $A$ and the vector $b$ defined above. We define the effective weight of each distinct constraint as the sum of the weights corresponding to copies of that same constraint. Under this definition, the effective Dikin weight, which equals $n/4$, is thus affected by the repetition of constraints. Consequently, the Dikin ellipsoid is much smaller for polytopes with repeated constraints. However, the Vaidya and John weights do not change, as observed in Figure 1b. This property of the two weight vectors implies that the Vaidya and John ellipsoids are not too small even for a very large number of constraints. We observe exactly this phenomenon in Figures 2a and 2b, where the repetition of rows in the matrix $A$ leads to a very small Dikin ellipsoid but large Vaidya and John ellipsoids. A few other numerical computations also suggest that the Vaidya and John ellipsoids are more adaptive than the Dikin ellipsoids when the number of constraints is large. Nonetheless, such a claim is based only on heuristics, and is presented simply to provide the intuition that the new ellipsoids are better behaved than Dikin ellipsoids, thereby motivating the design of the new random walks.

Figure 2. Visualization of the proposal distributions on the square with repeated constraints $S_{n/4}$ for the different random walks; panel (b) corresponds to $n = 2048$. (a, b) Unit ellipsoids associated with the covariances of the Dikin, Vaidya and John walks at different states $x$ on $S_{n/4}$. Clearly, all these ellipsoids adapt to the boundary, but increasing $n$ has a profound impact on the volume of the Dikin ellipsoids and comparatively little impact on the Vaidya and John ellipsoids.

3. Main results

With the basic background in place, we now describe the algorithms more precisely and state upper bounds on the mixing times of the Vaidya and John walks. In Section 3.4, we propose a variant of the John walk, called the improved John walk, and conjecture that it has a better mixing time bound than the John walk.

3.1 Vaidya and John walks

In this subsection, we formally define the Vaidya and John walks; Algorithms 1 and 2 summarize their steps.

Vaidya walk: The Vaidya walk with radius parameter $r > 0$, denoted by VW($r$) for short, is defined by a Gaussian proposal distribution denoted $\mathbb{P}_x^V$: given a current state $x \in \text{int}(\mathcal{K})$, it proposes a new point by sampling from the multivariate Gaussian distribution $\mathcal{N}\left(x, \frac{r^2}{\sqrt{nd}}\, V_x^{-1}\right)$.
In analytic terms, the proposal density at $x$ is given by

$$p_x^V(z) := p^{\text{Vaidya}(r)}(x, z) = \sqrt{\det\left(\frac{\sqrt{nd}}{2\pi r^2}\, V_x\right)}\; \exp\left(-\frac{\sqrt{nd}}{2r^2}\, (z - x)^\top V_x (z - x)\right).$$

As the target distribution for our walk is the uniform distribution on $\mathcal{K}$, the proposal step is followed by an accept-reject step as described in Section 2.1 (equation (1)). Thus the overall transition distribution of the walk at state $x$ is defined by the density

$$q^{\text{Vaidya}(r)}(x, z) = \begin{cases} \min\left\{p_x^V(z),\; p_z^V(x)\right\}, & z \in \mathcal{K} \text{ and } z \neq x, \\ 0, & z \notin \mathcal{K}, \end{cases}$$

and a probability mass at $x$ given by $1 - \int_{z \in \mathcal{K}} \min\{p_x(z), p_z(x)\}\, dz$. We use $\mathcal{T}_{\text{Vaidya}(r)}$ to denote the resulting transition operator for the Vaidya walk with parameter $r$.

Algorithm 1: Vaidya walk with parameter $r$ (VW($r$))
Input: parameter $r$ and $x_0 \in \text{int}(\mathcal{K})$. Output: sequence $x_1, x_2, \ldots$
for $i = 0, 1, \ldots$:
  With probability 1/2, stay at the current state: $x_{i+1} \leftarrow x_i$. % lazy step
  With probability 1/2, perform the following update:
    Proposal step: draw $z_{i+1} \sim \mathcal{N}\left(x_i, \frac{r^2}{(nd)^{1/2}}\, V_{x_i}^{-1}\right)$.
    Accept-reject step:
      if $z_{i+1} \notin \mathcal{K}$, then $x_{i+1} \leftarrow x_i$. % reject an infeasible proposal
      else compute $\alpha_{i+1} = \min\left\{1,\; p_{z_{i+1}}(x_i) / p_{x_i}(z_{i+1})\right\}$;
        with probability $\alpha_{i+1}$, accept the proposal: $x_{i+1} \leftarrow z_{i+1}$;
        with probability $1 - \alpha_{i+1}$, reject the proposal: $x_{i+1} \leftarrow x_i$.

John walk: The John walk is similar to the Vaidya walk, except that the proposals at state $x \in \text{int}(\mathcal{K})$ are generated from the multivariate Gaussian distribution $\mathcal{N}\left(x, \frac{r^2}{d^{3/2} \log_2^4(2n/d)}\, J_x^{-1}\right)$, where the matrix $J_x$ is defined by equation (11), and $r > 0$ is a constant. The proposal distribution at $x \in \text{int}(\mathcal{K})$ is denoted $\mathbb{P}_x^J$. The proposal step is then followed by an accept-reject step defined similarly as in the Vaidya walk. We use $\mathcal{T}_{\text{John}(r)}$ to denote the resulting transition operator for the John walk with parameter $r$.

Algorithm 2: John walk with parameter $r$ (JW($r$))
Input: parameter $r$ and $x_0 \in \text{int}(\mathcal{K})$. Output: sequence $x_1, x_2, \ldots$
for $i = 0, 1, \ldots$:
  With probability 1/2, stay at the current state: $x_{i+1} \leftarrow x_i$. % lazy step
  With probability 1/2, perform the following update:
    Proposal step: draw $z_{i+1} \sim \mathcal{N}\left(x_i, \frac{r^2}{d^{3/2}\log_2^4(2n/d)}\, J_{x_i}^{-1}\right)$. % this step differs from the Vaidya walk
    Accept-reject step:
      if $z_{i+1} \notin \mathcal{K}$, then $x_{i+1} \leftarrow x_i$. % reject an infeasible proposal
      else compute $\alpha_{i+1} = \min\left\{1,\; p_{z_{i+1}}(x_i) / p_{x_i}(z_{i+1})\right\}$;
        with probability $\alpha_{i+1}$, accept the proposal: $x_{i+1} \leftarrow z_{i+1}$;
        with probability $1 - \alpha_{i+1}$, reject the proposal: $x_{i+1} \leftarrow x_i$.

3.2 Mixing time bounds for a warm start

We are now ready to state an upper bound on the mixing time of the Vaidya walk. In this and other theorem statements, we use $c$ to denote a universal positive constant. Recall that $\pi^*$ denotes the uniform distribution on the polytope $\mathcal{K}$, and that $\mathcal{T}_{\text{Vaidya}(r)}$ denotes the operator on distributions associated with the Vaidya walk.

Theorem 1. Let $\mu_0$ be any distribution that is $M$-warm with respect to $\pi^*$ as defined in equation (Warm-Start). For any $\delta \in (0, 1]$, the Vaidya walk with parameter $r_V = 10^{-4}$ satisfies

$$\left\|\mathcal{T}^k_{\text{Vaidya}(r_V)}(\mu_0) - \pi^*\right\|_{TV} \leq \delta \quad \text{for all } k \geq c\, n^{1/2} d^{3/2} \log\left(\frac{\sqrt{M}}{\delta}\right).$$

The proof of Theorem 1 is provided in Section 5. Theorem 1 precisely quantifies the dependence of the mixing time of the Vaidya walk on the parameters of interest, such as the dimension $d$, the number of constraints $n$, the error tolerance $\delta$, and the warmness $M$. The specific choice $r_V = 10^{-4}$ is for theoretical purposes; in practice, we find that substantially larger values can be used.² Our upper bound for the mixing time of the Vaidya walk is an $\mathcal{O}(\sqrt{n/d})$ improvement over the current best upper bound for the mixing time of the Dikin walk. In Section 4.1, we show that the per-iteration cost of the two walks is of the same order.
Since $n > d$ for bounded polytopes in $\mathbb{R}^d$, the effective cost until convergence (per-iteration complexity multiplied by the number of iterations required) of the Vaidya walk is at most of the same order as that of the Dikin walk, and significantly smaller when $n \gg d$. Comparing the provable mixing time upper bounds, the Vaidya walk has an advantage over the Dikin walk for problems where the number of constraints is significantly larger than the number of variables involved. Our simulations also confirm this theoretical finding. Let us now state our result for the mixing time of the John walk:

2. A larger-than-optimal $r$ leads to an undesirably high rejection rate. In practice, we can fine-tune $r$ by performing a binary search over the interval $[10^{-4}, 1]$ and keeping track of the rejection rate of the samples during the run of the Markov chain for a given choice of $r$. A choice of $r > 1$ is obviously bad, because then the Vaidya ellipsoid has poor overlap with the polytope near the boundary, causing a high rejection rate and a slow-down of the chain.

Theorem 2. Suppose that $n \leq \exp(\sqrt{d})$, and let $\mu_0$ be any distribution that is $M$-warm with respect to $\pi^*$. Then for any $\delta \in (0, 1]$, the John walk with parameter $r_J = 10^{-5}$ satisfies

$$\left\|\mathcal{T}^k_{\text{John}(r_J)}(\mu_0) - \pi^*\right\|_{TV} \leq \delta \quad \text{for all } k \geq c\, d^{2.5} \log^4\left(\frac{2n}{d}\right) \log\left(\frac{\sqrt{M}}{\delta}\right).$$

The proof of Theorem 2 is provided in Appendix D. Again, the specific choice $r_J = 10^{-5}$ is for theoretical purposes; in practice, larger choices are possible. Note that the mixing time bound for the John walk depends on the number of constraints $n$ only via a logarithmic factor, and so is almost independent of $n$. Consequently, it has a mixing time that is polynomial in $d$ even when the number of constraints $n$ scales exponentially with $\sqrt{d}$. Further, we show in Section 4.1 that the cost to execute one step of the John walk is of the same order as that of the Dikin walk, up to a poly-logarithmic factor in $n$. Thus, using the John walk, we obtain improved mixing time bounds for the case $n \gtrsim d^2$.

3.3 Mixing time bounds from a deterministic start

The mixing time bounds in Theorems 1 and 2 depend on the warmness $M$ of the initial distribution. In some applications, it may not be easy to find an $M$-warm initial distribution. In such cases, we can consider starting the random walk from a deterministic point $x_0 \in \text{int}(\mathcal{K})$ that is not too close to the boundary $\partial\mathcal{K}$. Indeed, such a point can be found using standard optimization methods, e.g., a Phase-I method for Newton's algorithm (see Boyd and Vandenberghe, 2004, Section 11.5.4). Given such a deterministic initialization, our mixing time guarantees depend on the distance of the starting point from the boundary. This dependence involves the following notion of $s$-centrality:

Definition 3. A point $x \in \text{int}(\mathcal{K})$ is called $s$-central if for any chord $\overline{ef}$ with end points $e, f \in \partial\mathcal{K}$ passing through $x$, we have $\|e - x\|_2\, /\, \|f - x\|_2 \leq s$.

Assuming that it is started at an $s$-central point $x_0$, the Dikin walk (Kannan and Narayanan, 2012, algorithm in Section 2.1) has a polynomial mixing time. The authors showed that when the walk moves to a new state for the first time, the distribution of the iterate is $\mathcal{O}\left((ns)^d\right)$-warm with respect to the distribution³ $\pi^*$. Since only a constant number of steps is required to get a warm start, for a deterministic start we can simply use the Dikin walk in the beginning to provide a warm start for the Vaidya (or John) walk. This motivates the following hybrid walk: given an $s$-central point $x_0$, simulate the Dikin walk until we observe a new state.
Note that, due to laziness and the accept-reject step, the chain can stay at the starting point for several steps before making its first move to a new state. Let $k_1$ denote the (random) number of steps taken to make the first move to a new state. After $k_1$ steps, we run the walk VW($r$) with $x_{k_1}$ as the initial point. We call such a walk the $s$-central Dikin-start-Vaidya walk with parameter $r$. Let $\mathcal{T}_{\text{Dikin}}$ denote the transition kernel of the Dikin walk described above. Then we have the following mixing time bound for this hybrid walk.

3. Obtaining a warmness result for the Vaidya walk from a deterministic start at a central point is non-trivial, and it is quite possible that the warmness does not improve. As a result, we simply invoke the established result for the Dikin walk.

Corollary 4. Any $s$-central Dikin-start-Vaidya walk with parameter $r = 10^{-4}$ satisfies

$$\left\|\mathcal{T}^k_{\text{Vaidya}(r)}\left(\mathcal{T}^{k_1}_{\text{Dikin}}(\delta_{x_0})\right) - \pi^*\right\|_{TV} \leq \delta \quad \text{for all } k \geq c\, n^{1/2} d^{5/2} \log\left(\frac{ns}{\delta}\right),$$

where $k_1$ is a geometric random variable with $\mathbb{E}[k_1] \leq c'$, and $c, c' > 0$ are universal constants.

The mixing rate is logarithmic in $ns$ and has an extra factor of $d$ compared to the bound in Theorem 1. However, guaranteeing a warm start for a general polytope is hard, whereas obtaining a central point involves only a few steps of optimization. Consequently, the hybrid walk and the guarantees from Corollary 4 come in handy in all such cases. Once again, we observe that the upper bound on the mixing time is improved by a factor of $\mathcal{O}(\sqrt{n/d})$ when compared to the Dikin walk from an $s$-central start (Kannan and Narayanan, 2012; Narayanan, 2016), which has a mixing time of $\mathcal{O}(nd^2)$. The proof follows immediately from Theorem 1 of Kannan and Narayanan (2012) and Theorem 1 of this paper, and is thereby omitted.

In a similar fashion, we can provide a polynomial-time guarantee for a modified John walk from a deterministic start. We can consider a hybrid random walk that starts at an $s$-central point, simulates the Dikin walk until it makes its first move to a new state, and from there onwards simulates the John walk. Such a chain would have a mixing time of $\mathcal{O}\left(d^{3.5}\,\text{poly-log}(n, d, s)\right)$. For brevity, we omit a formal statement of this result.

3.4 Conjecture on the improved John walk

From our analysis, we suspect that it is possible to improve the mixing time bound of $\mathcal{O}\left(d^{2.5}\,\text{poly-log}(n/d)\right)$ in Theorem 2 by considering a variant of the John walk. In particular, we conjecture that a random walk with proposal distribution given by $\mathcal{N}\left(x, \frac{r^2}{d\,\text{poly-log}(n/d)}\, J_x^{-1}\right)$, for a suitable choice of $r$, has an $\mathcal{O}\left(d^2\,\text{poly-log}(n/d)\right)$ mixing time from a warm start. We refer to this random walk as the improved John walk, and denote its transition operator by $\mathcal{T}_{\text{John}+}$. Let us now give a formal statement of our conjecture on its mixing rate.

Conjecture 5. Let $\mu_0$ be any $M$-warm distribution. Then for any $\delta \in (0, 1]$, the improved John walk with parameter $r = r_0$ satisfies the bound

$$\left\|\mathcal{T}^k_{\text{John}+}(\mu_0) - \pi^*\right\|_{TV} \leq \delta \quad \text{for all } k \geq c\, d^2 \log^{c'}\left(\frac{2n}{d}\right) \log\left(\frac{\sqrt{M}}{\delta}\right),$$

where $r_0, c, c'$ are universal constants.

Note that this conjecture involves quadratic (degree-two) scaling in $d$; this exponent of two matches the sum of the exponents of $d$ and $n$ in the mixing time bounds for both the Dikin and Vaidya walks from a warm start. Consequently, the improved John walk would have better performance than the Dikin, Vaidya and John walks for almost all ranges of $(n, d)$, apart from possible poly-logarithmic factors in the ratio $n/d$.

3.5 Proof sketch

In this subsection, we provide a high-level sketch of the main ingredients of the proof.
It is well known that the mixing of a Markov chain is closely related to its conductance. Our main proof relies on the work of Lovász (1999), which characterizes the conductance of Markov chains on a convex set using the Hilbert metric. Precisely, Lovász (1999) showed that a Markov chain has good conductance if, from two nearby points, it makes jumps to regions with large overlap, and the mixing time depends inversely on the maximum Hilbert-metric distance between such nearby points. Using this argument, it remains to ensure that the ellipsoid radius is chosen properly, so that the ellipsoids remain inside the polytope and the ellipsoids corresponding to two different points $x$ and $y$ overlap substantially even when the points $x$ and $y$ are relatively far apart. The conductance-based argument has been used for analyzing the ball walk (Lovász and Simonovits, 1990, 1993), hit-and-run (Lovász, 1999; Lovász and Vempala, 2006a), and the Dikin walk (Kannan and Narayanan, 2012; Narayanan, 2016; Sachdeva and Vishnoi, 2016). We refer the reader to the survey by Vempala (2005) for a thorough discussion of the relation between conductance and mixing time for Markov chains.

Our proof techniques share a few features with the recent analyses of the Dikin walk by Kannan and Narayanan (2012) and Sachdeva and Vishnoi (2016). However, new technical ideas are needed in order to handle the state-dependent weights $\sigma_x$ and $\zeta_x$, defined in equations (9b) and (12) respectively, that underlie the proposal distributions of the Vaidya and John walks. These techniques are not present in the analysis of the Dikin walk, which is based on constant weights. Specifically, we present the proof of Theorem 1, on the mixing time of the Vaidya walk, in Section 5 and defer the intermediate technical results to Appendices A, B and C. We present the proof of Theorem 2 (the mixing time bound for the John walk) in Appendix D, and provide related auxiliary results and their proofs in Appendices E, F, G, H and I. As alluded to earlier, to keep the paper self-contained, we provide a proof of Lovász's Lemma in Appendix J.

4. Numerical experiments

In this section, we first analyze the per-iteration cost of implementing the three walks. We show that while the Dikin walk has the best per-iteration cost, the per-iteration cost of the Vaidya walk is only about twice that of the Dikin walk, and the per-iteration cost of the John walk is larger only by a factor of order $\log^2(2n/d)$. Second, we demonstrate the speed-up gained by the Vaidya walk over the Dikin walk from a warm start on different polytopes.

4.1 Per-iteration cost

We now show that the per-iteration costs of the Dikin, Vaidya and John walks are of the same order. The proposal step of the Vaidya walk requires matrix operations such as matrix inversion, matrix multiplication and singular value decomposition (SVD). The accept-reject step requires the computation of matrix determinants, besides a few matrix inverses and matrix-vector products. The complexity of all the aforementioned operations is $\mathcal{O}(nd^2)$. Thus, the per-iteration computational complexity of the Vaidya walk is $\mathcal{O}(nd^2)$.⁴

4. In theory, the matrix computations for the Dikin walk can be carried out in time $nd^{\nu}$ for an exponent $\nu < 1.373$, but such algorithms are not numerically stable enough for practical use.

Both the Dikin and Vaidya walks require an SVD computation for inverting the Hessian of the log-barrier, $\nabla^2 \mathcal{F}_x$.
In addition, for the Vaidya walk we have to invert the matrix $V_x$, which leads to almost twice the computation time of the Dikin walk per step. This difference can be observed in practice. For the John walk, we need to compute the weights $\zeta_x$ at each point, which involves solving the program (12). Lee and Sidford (2014) argued that the convex program (12) for obtaining the John walk's weights is strongly convex in a suitably chosen norm. They proved that solving this program requires $\mathcal{O}(\log^2 n)$ gradient steps, where the computational complexity of each gradient step is equivalent to that of solving an $n \times d$ linear system ($\mathcal{O}(nd^2)$ using a numerically stable routine). Thus, the overall cost of the John walk is of the same order as that of the Dikin walk, up to a poly-logarithmic factor in the pair $(n, d)$. In practice, for the John walk, the combined effect of the logarithmic factors in the number of steps and in the cost of implementing each step cannot be ignored. This extra factor becomes a bottleneck for the overall run time needed for convergence of the Markov chain. Consequently, the John walk is not suitable for polytopes with moderate values of $n$ and $d$, and its mixing time bounds are computationally superior to those of the Dikin and Vaidya walks only for polytopes with $n \gg d \gg 1$.

4.2 Simulations

We now present simulation results for the random walks in $\mathbb{R}^d$ for $d = 2, 10$ and $50$, with initial distribution $\mu_0 = \mathcal{N}(0, \sigma_d^2\, I_d)$ and the uniform distribution as the target, on the following polytopes:

Set-up 1: The set $[-1, 1]^2$ defined by different numbers of constraints.
Set-up 2: The set $[-1, 1]^d$ for $d \in \{2, 3, 4, 5, 6, 7\}$, with $n \in \{2d, 2d^2, 2d^3\}$ constraints.
Set-up 3: Symmetric polytopes in $\mathbb{R}^2$ with $n$ randomly generated constraints.
Set-up 4: The interior of regular $n$-polygons inscribed in the unit circle.
Set-up 5: The hypercube $[-1, 1]^d$ for $d = 10$ and $50$.

We choose $\sigma_d$ such that the warmness parameter $M$ is bounded by 100. We provide implementations of the Dikin, Vaidya and John walks in python, along with a jupyter notebook, at the github repository https://github.com/rzrsk/vaidya-walk.

We use the following three ways to compare the convergence rates of the Dikin and Vaidya walks: (1) comparing the approximate mixing time with respect to a particular subset of the polytope, where a smaller value is associated with a faster-mixing chain; (2) comparing plots of the empirical distribution of samples from multiple runs of the Markov chain after $k$ steps, where a chain whose distribution appears more uniform for smaller $k$ is deemed faster; and (3) contrasting sequential plots of a one-dimensional projection of the samples from a single long run of the chain, where a less smooth plot is associated with effective and fast exploration, and hence faster mixing (Yu and Mykland, 1998). Note that MCMC convergence diagnostics is a hard problem, especially in high dimensions, and since the methods outlined above are heuristic in nature, we do not expect our experiments to match our theoretical results exactly.

In Set-up 1, we consider the polytope $[-1, 1]^2$, which can be represented by exactly 4 linear constraints (see Section 2.4). Suppose that we repeat the rows of the matrix $A$, and
The empirical distribution plot shows that having large n significantly slows the mixing rate of the Dikin walk, while the effect on the Vaidya walk is much less. Further, we also plot the scaling of the approximate mixing time ˆkmix (defined below) for this simulation as a function of the number of constraints n in Figure 3b. For Set-up 2, we plot ˆkmix as a function of the dimensions d in Figures 3e-3g, for the random walks on [ 1, 1]d where the hypercube is parametrized by different number of constraints n {2d, 2d2, 2d3}. The approximate mixing time is defined with respect to the set Sd = {x Rd| |xi| cd i [d]} where cd is chosen such that π (Sd) = 1/2. In particular, for a fixed value of n, let ˆTk denote the empirical measure after k-iterations across 2000 experiments. The approximate mixing time ˆkmix is defined as ˆkmix := min k π (Sd) ˆTk(Sd) 1 We choose such a set since the set covers the regions near to the boundary of the polytope which are not covered well by the chosen initial distribution. We make the following observations: 1. The slopes of the best-fit lines, for ˆkmix versus n in the log-log plot in Figure 3b, are 0.88 and 0.45 for Dikin and Vaidya walks respectively. This observation reflects a near-linear and sub-linear dependence on n for a fixed d for the mixing time of the Dikin walk and the Vaidya walk respectively. 2. In Figures 3e-3g, once again we observe a more significant effect of increasing the number of constraints on the approximate mixing time ˆkmix. We list the slopes of the best fit lines on these log-log plots in Table 2. These slopes correspond to the exponents for d for the approximate mixing time. From the table, we can observe that these experiments agree with the mixing time bounds of O (nd) for the Dikin walk and O n0.5d1.5 for the Vaidya walk. No. of Constraints DW Theoretical VW Theoretical DW Experiments VW Experiments n = 2d 2.0 2.0 1.58 1.72 n = 2d2 3.0 2.5 2.80 2.48 n = 2d3 4.0 3.0 3.84 2.75 Table 2. Value of the exponent of dimensions d for the theoretical bounds on mixing time and the observed approximate mixing time of the Dikin walk (DW) and the Vaidya walk (VW) for [ 1, 1]d described by n = 2d, 2d2, 2d3 constraints. The theoretical exponents are based on the mixing time bounds of O (nd) for the Dikin walk and O n0.5d1.5 for the Vaidya walk. The experimental exponents are based on the results from the simulations described in Set-up 2 in Section 4.2. Clearly, the exponents observed in practice are in agreement with the theoretical rates and imply the faster convergence of the Vaidya walk compared to the Dikin walk for large number of constraints. In Set-up 3, we compare the plots of the empirical distribution of 200 runs of the Dikin walk and the Vaidya walk for different values of k, for symmetric polytopes in R2 with nrandomly-generated-constraints. We fix bi = 1. To generate ai, first we draw two uniform Chen, Dwivedi, Wainwright and Yu random variables from [0, 1] and then flip the sign of both of them with probability 1/2 and assign these values to the vector ai. The resulting polytope is always a subset of the square K = [ 1, 1]2 and contains the diagonal line connecting the points ( 1, 1) and (1, 1). From Figure 4a-4b, we observe that while there is no clear winner for the case n = 64, the Vaidya walk mixes mixes significantly faster than the Dikin walk for the polytope defined by 2048 constraints. In Set-up 4, the constraint set is the regular n-polygons inscribed in the unit circle. 
A similar observation as in Set-up 3 can be made from Figures 4c-4d: the Vaidya walk mixes at least as fast as the Dikin walk, and mixes significantly faster for large $n$. In Set-up 5, we examine the performance of the Dikin walk and the Vaidya walk on the hypercube $[-1, 1]^d$ for $d = 10, 50$. We plot the one-dimensional projections, onto a random direction, of all the samples from a single run of up to 10,000 steps. The Vaidya sequential plot looks more jagged than that of the Dikin walk for $d = 10, n = 5120$. For the other cases, we do not have a clear winner. Such an observation is consistent with the $\mathcal{O}(\sqrt{n/d})$ speed-up of the Vaidya walk, which is apparent when the ratio $n/d$ is large.

5. Proof of Theorem 1

We begin with auxiliary results in Section 5.1, which we then use to prove Theorem 1 in Section 5.2. Proofs of the auxiliary results are given in Sections 5.3 and 5.4, and we defer other technical results to the appendices.

5.1 Auxiliary results

Our proof proceeds by formally establishing the following property of the Vaidya walk: if two points are close, then their one-step transition distributions are also close. Consequently, we need to quantify the closeness between two points and between the associated transition distributions. We measure the distance between two points in terms of the cross-ratio, which we define next. For a given pair of points $x, y \in \mathcal{K}$, let $e(x), e(y) \in \partial\mathcal{K}$ denote the intersections of the chord joining $x$ and $y$ with $\partial\mathcal{K}$, such that $e(x), x, y, e(y)$ are in order (see Figure 6a). The cross-ratio $d_{\mathcal{K}}(x, y)$ is given by

$$d_{\mathcal{K}}(x, y) := \frac{\|e(x) - e(y)\|_2\, \|x - y\|_2}{\|e(x) - x\|_2\, \|e(y) - y\|_2}. \tag{18}$$

The ratio $d_{\mathcal{K}}(x, y)$ is related to the Hilbert metric on $\mathcal{K}$, which is given by $\log(1 + d_{\mathcal{K}}(x, y))$; see the paper by Bushell (1973) for more details.

Consider a lazy reversible random walk on a bounded convex set $\mathcal{K}$, i.e., one whose transition operator $\mathcal{T}$ acts via the mapping $\mu_0 \mapsto \mu_0/2 + \widehat{\mathcal{T}}(\mu_0)/2$ for some transition operator $\widehat{\mathcal{T}}$, and which is stationary with respect to the uniform distribution on $\mathcal{K}$ (denoted by $\pi^*$). (Recall that $\delta_x$ denotes the Dirac delta distribution with unit mass at $x$.) The following lemma gives a bound on the mixing time of the Markov chain.

Lemma 6 (Lovász's Lemma). Suppose that there exist scalars $\rho, \Delta \in (0, 1)$ such that

$$\left\|\mathcal{T}(\delta_x) - \mathcal{T}(\delta_y)\right\|_{TV} \leq 1 - \rho \quad \text{for all } x, y \in \text{int}(\mathcal{K}) \text{ with } d_{\mathcal{K}}(x, y) < \Delta. \tag{19a}$$

Then for every distribution $\mu_0$ that is $M$-warm with respect to $\pi^*$, the lazy transition operator $\mathcal{T}$ satisfies

$$\left\|\mathcal{T}^k(\mu_0) - \pi^*\right\|_{TV} \leq \sqrt{M} \exp\left(-\frac{k\, \Delta^2 \rho^2}{2}\right), \quad k = 1, 2, \ldots \tag{19b}$$

Figure 3. Comparison of the Dikin and Vaidya walks on the polytope $\mathcal{K} = [-1, 1]^2$. (a) Samples from the initial distribution $\mu_0 = \mathcal{N}(0, 0.04\, I_2)$ and from the uniform distribution on $[-1, 1]^2$. (b) Log-log plot of $\hat{k}_{\text{mix}}$ (17) versus the number of constraints $n$ for a fixed dimension $d = 2$. (c, d) Empirical distributions of the samples for the Dikin walk (blue/top rows) and the Vaidya walk (red/bottom rows) for different values of $n$ (panels (c) and (d) correspond to $n = 64$ and $n = 2048$) at iterations $k = 10, 100, 500$ and $1000$. (e, f, g) Log-log plots of $\hat{k}_{\text{mix}}$ versus the dimension $d$, for $n \in \{2d, 2d^2, 2d^3\}$ and $d \in \{2, 3, 4, 5, 6, 7\}$. The exponents from these plots are summarized in Table 2. Note that increasing the number of constraints $n$ has a more profound effect on the Dikin walk in almost all cases.

Figure 4.
Empirical distribution of the samples from 200 runs for the Dikin walk (blue/top rows) and the Vaidya walk (red/bottom rows) at different iterations k. The 2-dimensional polytopes considered are: (a, b) random polytopes with n-constraints, and (c, d) regular n-polygons inscribed in the unit circle. For both sets of cases, we observe that higher n slows down the walks, with visibly more effect on the Dikin walk compared to the Vaidya walk. 0 2000 4000 6000 8000 10000 k 6 Vaidya Dikin 0 2000 4000 6000 8000 10000 k 4.00 Vaidya Dikin 0 2000 4000 6000 8000 10000 k 3.00 Vaidya Dikin 0 2000 4000 6000 8000 10000 k 8.0 Vaidya Dikin 0 2000 4000 6000 8000 10000 k 6 Vaidya Dikin 0 2000 4000 6000 8000 10000 k 8.0 Vaidya Dikin Figure 5. Sequential plots of a one-dimensional random projection of the samples on the hyperbox K = [ 1, 1]d, defined by n constraints. Each plot corresponds to one long run of the Dikin and Vaidya walks, and the projection is taken in a direction chosen randomly from the sphere. (a) Plots for d = 10 and n {20, 640, 5120}. (b) Plots for d = 50 and n {100, 400, 1600}. Relative to the Dikin walk, the Vaidya walk has a more jagged plot for pairs (n, d) in which the ratio n/d is relatively large: for instance, see the plots corresponding to (n, d) = (640, 10) and (5120, 10). The same claim cannot be made for pairs (n, d) for which the ratio n/d is relatively small; e.g., the plot with (n, d) = (20, 10). These observations are consistent with our results that the Vaidya walk mixes more quickly by a factor of order O( p n/d) over the Dikin walk. Fast MCMC Sampling Algorithms on Polytopes Figure 6. Polytope K = {x Rd|Ax b}. (a) The points e(x) and e(y) denote the intersection points of the chord joining x and y with K such that e(x), x, y, e(y) are in order. (b) A geometric illustration of the argument (23). It is straightforward to observe that x y 2/ e(x) x 2 = u y 2/ u v 2 = a i (y x) / bi a i x . This result is implicit in the paper by Lov asz (1999), though not explicitly stated. In order to keep the paper self-contained, we provide a proof of this result in Appendix J. Our proof of Theorem 1 is based on applying Lov asz s Lemma; the main challenge in our work is to establish that our random walks satisfy the condition (19a) with suitable choices of and ρ. In order to proceed with the proof, we require a few additional notations. Recall that the slackness at x was defined as sx := (b1 a 1 x, . . . , bn a n x) . For all x int (K), define the Vaidya local norm of v at x as v Vx := V 1/2 x v 2 = i=1 (σx,i + βV)(a i v)2 s2 x,i , (20a) and the Vaidya slack sensitivity at x as Vx , . . . , an sx,n a 1 V 1 x a1 s2 x,1 , . . . , a n V 1 x an s2x,n Similarly, we define the John local norm of v at x and the John slack sensitivity at x as v Jx := J1/2 x v 2 and θJx := Jx , . . . , an sx,n The following lemma provides useful properties of the leverage scores σx from equation (9b), the weights ζx obtained from solving the program (12), and the slack sensitivities θVx and θJx. Lemma 7 For any x int (K), the following properties hold: (a) σx,i [0, 1] for all i [n], (b) Pn i=1 σx,i = d, Chen, Dwivedi, Wainwright and Yu (c) θVx,i h 0, p n/d i for all i [n], (d) ζx,i [βJ, 1 + βJ] for all i [n], (e) Pn i=1 ζx,i = 3d/2, and (f) θJx,i [0, 4] for all i [n]. We prove this lemma in Section 5.3. Let PV x to denote the proposal distribution of the random walk VW(r) at state x. 
Next, we state a lemma that shows that if two points x, y int (K) are close in Vaidya local norm at x, then for a suitable choice of the parameter r, the proposal distributions PV x and PV y are close. In addition, we show that the proposals are accepted with high probability at any point x int (K). To establish the latter result, we now define the non-lazy transition operator of the Vaidya walk. Since the Vaidya walk is lazy with probability 1/2, there exists a valid (non-lazy) transition operator TVaidya(r) such that for any distribution µ0, we have TVaidya(r)(µ0) = µ0/2 + TVaidya(r)(µ0)/2. We call TVaidya the non-lazy transition operator for the Vaidya walk. Note that the one-step non-lazy transition distribution TVaidya(r)(δx) denotes the distribution of proposals after the accept-reject step if the chain was not lazy. Thus to establish that proposals are accepted with high probability, it suffices to establish that the transition distribution TVaidya(r)(δx) at any point x K is close to the proposal distribution PV x . We now state these two results formally: Lemma 8 There exists a continuous non-decreasing function f : [0, 1/4] R+ with f(1/15) 10 4 such that for any ϵ (0, 1/15], the random walk VW(r) with r [0, f(ϵ)] satisfies PV x PV y TV ϵ x, y int (K) s.t. x y Vx ϵr 2(nd)1/4 , and (21a) TVaidya(r)(δx) PV x TV 5ϵ x int (K). (21b) See Section 5.4 for the proof of this lemma. With these lemmas in hand, we are now equipped to prove Theorem 1. To simplify notation, for the rest of this section, we adopt the shorthands Tx = TVaidya(r)(δx), Px = PV x and Vx = x. 5.2 Proof of Theorem 1 In order to invoke Lov asz s Lemma for the random walk VW(10 4), we need to verify the condition (19a) for suitable choices of ρ and . Doing so involves two main steps: (A): First, we relate the cross-ratio d K(x, y) to the local norm (20a) at x. (B): Second, we use Lemma 8 to show that if x, y int (K) are close in local-norm, then the transition distributions Tx and Ty are close in TV-distance. Fast MCMC Sampling Algorithms on Polytopes Step (A): We claim that for all x, y int (K), the cross-ratio can be lower bounded as d K(x, y) 1 2d x y x . (22) Note that we have d K(x, y) = e(x) e(y) 2 x y 2 e(x) x 2 e(y) y 2 (i) max x y 2 e(x) x 2 , x y 2 e(y) y 2 (ii) max x y 2 e(x) x 2 , x y 2 e(y) x 2 where step (i) follows from the inequality e(x) e(y) 2 max { e(y) y 2 , e(x) x 2}; and step (ii) follows from the inequality e(x) x 2 e(y) x 2. Furthermore, from Figure 6b, we observe that e(x) x 2 , x y 2 e(y) x 2 = max i [n] This argument of equation (14) has also been used (Sachdeva and Vishnoi, 2016, lemma 9). Note that maximum of a set of non-negative numbers is greater than the mean of the numbers. Combining this fact with properties (a) and (b) from Lemma 7, we find that v u u t 1 Pn i=1 (σx,i + βV) i=1 (σx,i + βV)(a i (x y))2 s2 x,i = x y x thereby proving the claim (22). Step (B): By the triangle inequality, we have Tx Ty TV Tx Px TV + Px Py TV + Py Ty TV. Thus, for any (r, ϵ) such that ϵ [0, 1/15] and r f(ϵ), Lemma 8 implies that Tx Ty TV 11ϵ, x, y int (K) such that x y x rϵ 2(nd)1/4 . Consequently, the walk VW(r) satisfies the assumptions of Lov asz s Lemma with 2d rϵ 2(nd)1/4 and ρ := 1 11ϵ. Since f(1/15) 10 4, we can set ϵ = 1/15 and r = 10 4, whence 2ρ2 = (1 11ϵ)2ϵ2r2 152 1 152 1 10 8 1 d nd 10 12 1 d Observing that < 1 yields the claimed upper bound for the mixing time of Vaidya Walk. 
Chen, Dwivedi, Wainwright and Yu 5.3 Proof of Lemma 7 In order to prove part (a), observe that for any x int (K), the Hessian 2Fx := Pn i=1 aia i /s2 x,i is a sum of rank one positive semidefinite (PSD) matrices. Also, we can write 2Fx = A x Ax where a 1 /sx,1 ... a n /sx,n Since rank(Ax) = d, we conclude that the matrix 2Fx is invertible and thus, both the matrices 2Fx and 2Fx 1 are PSD. Since σx,i = a i 2Fx 1 ai/s2 x,i, we have σx,i 0. Further, the fact that aia i /s2 x,i 2Fx implies that σx,i 1. Turning to the proof of part (b), from the equality trace(AB) = trace(BA), we obtain i=1 σx,i = trace a i 2Fx 1 ai s2 x,i aia i s2 x,i = trace(Id) = d. Now we prove part (c). Using the fact that σx,i 0, and an argument similar to part (a) we find that that the matrices Vx and V 1 x are PSD. Since θVx,i = a i V 1 x ai/s2 x,i, we have θVx,i 0. It is straightforward to see that βV 2Fx Vx which implies that θVx,i σx,i/β. Further, we also have (σx,i + βV) aia i s2 x,i Vx and whence θVx,i 1/ (σx,i + βV). Combining the two inequalities yields the claim. The other parts of the Lemma follow from Lemma 13, 14 and 15 by Lee and Sidford (2014) and are thereby omitted here. 5.4 Proof of Lemma 8 We prove the lemma for the following function f(ϵ) := min 2 log 1 2 4 18 log(2/ϵ) , r ϵ 86 5/3χ3 , r ϵ 50 where χk = (2e/k log (4/ϵ))k/2 for k = 2, 3 and 4. A numerical calculation shows that f(1/15) 10 4. 5.4.1 Proof of claim (21a) In order to bound the total variation distance Px Py TV, we apply Pinsker s inequality, which provides an upper bound on the TV-distance in terms of the KL divergence: 2 KL(Px Py). For Gaussian distributions, the KL divergence has a closed form expression. In particular, for two normal-distributions G1 = N (µ1, Σ1) and G2 = N (µ2, Σ2), the Kullback-Leibler Fast MCMC Sampling Algorithms on Polytopes divergence between the two is given by KL(G1 G2) = 1 trace(Σ 1/2 1 Σ2Σ 1/2 1 ) d log det(Σ 1/2 1 Σ2Σ 1/2 1 )+(µ1 µ2) Σ 1 1 (µ1 µ2) . Recall from equation (15) that the proposal distribution for Vaidya walk is Gaussian, i.e., Px = N x, r nd V 1 x . Substituting G1 = Px and G2 = Py into the above expression and applying Pinsker s inequality, we find that Px Py 2 TV 2 KL(Py Px) = trace(V 1/2 x Vy V 1/2 x ) d log det(V 1/2 x Vy V 1/2 x ) + nd r2 x y 2 x λi 1 + log 1 nd r2 x y 2 x , (25) where λ1, . . . , λd > 0 denote the eigenvalues of the matrix V 1/2 x Vy V 1/2 x , and we have used the facts that det(V 1/2 x Vy V 1/2 x ) = Qd i=1 λi and trace(V 1/2 x Vy V 1/2 x ) = Pd i=1 λi. The following lemma is useful in bounding expression (25). Lemma 9 For any scalar t [0, 1/12] and any pair x, y int (K) such that x y x t/(nd)1/4, we have 1 8t Id V 1/2 x Vy V 1/2 x 1 + 8t where denotes ordering in the PSD cone, and Id is the d-dimensional identity matrix. See Appendix B for the proof of this lemma. For ϵ (0, 1/15] and r [0, 1/12], we have t = ϵr/2 1/12, whence the eigenvalues {λi, i [d]} can be sandwiched as 1 2 1 4ϵr d λi 1 + 4ϵr d for all i d. (26) We are now ready to bound the TV distance between Px and Py. Using the bound (25) and the inequality log ω ω 1, valid for ω > 0, we obtain nd r2 x y 2 x . Using the assumption that x y x ϵr/ 2(nd)1/4 , and plugging in the bounds (26) for the eigenvalues {λi, i [d]}, we find that nd r2 x y 2 x 32ϵ2r2 + ϵ2 In asserting this inequality, we have used the facts that according to equation (26), for any i [d], λi = (λi 1)2 Note that for any r [0, 1/12] we have that 32r2 1/2. Putting the pieces together yields Px Py TV ϵ, as claimed. 
Chen, Dwivedi, Wainwright and Yu 5.4.2 Proof of claim (21b) Tx({x}) = Px(Kc) + Z 1 min 1, pz(x) px(z)dz, (27) where Kc denotes the complement of K. Consequently, we find that Px Tx TV = 1 Tx({x}) + Z Rd px(z)dz Z K min 1, pz(x) Rd min 1, pz(x) px(z)dz + 2 Z Kc min 1, pz(x) Px(Kc) | {z } =: S1 min 1, pz(x) | {z } =: S2 Consequently, it suffices to show that both S1 and S2 are small, where the probability is taken over the randomness in the proposal z. In particular, we show that S1 ϵ and S2 4ϵ. Bounding the term S1: Since z is multivariate Gaussian with mean x and covariance r2 nd V 1 x , we can write z d= x + r (nd)1/4 V 1/2 x ξ, (29) where ξ N (0, Id) and d= denotes equality in distribution. Using equation (29) and definition (20b) of θVx,i, we obtain the bound a i (z x) 2 s2 x,i = r2 " a i V 1/2 x ξ sx,i (nd) 1 2 θVx,i ξ 2 2 d ξ 2 2 , (30) where step (i) follows from Cauchy-Schwarz inequality, and step (ii) from the bound on θVx,i from Lemma 7(c). Define the events d ξ 2 2 < 1 and E := {z int (K)} . Inequality (30) implies that E E and hence P [E ] P [E]. Using a standard Gaussian tail bound and noting that r 1 1+ 2/d log(1/ϵ), we obtain P [E] 1 ϵ and whence P [E ] 1 ϵ. Thus, we have shown that P [z / K] ϵ which implies that S1 ϵ. Bounding the term S2: By Markov s inequality, we have min 1, pz(x) αP [pz(x) αpx(z)] for all α (0, 1]. (31) Fast MCMC Sampling Algorithms on Polytopes By definition (15) of px, we obtain pz(x) px(z) = exp z x 2 z z x 2 x + 1 2 (log det Vz log det Vx) The following lemma provides us with useful bounds on the two terms in this expression, valid for any x int (K). Lemma 10 For any ϵ (0, 1/15] and r (0, f(ϵ)], we have 2 log det Vz 1 2 log det Vx ϵ 1 ϵ, and (32a) z x 2 z z x 2 x 2ϵ r2 See Appendix C for the proof of this claim. Using Lemma 10, we now complete the proof. For r f(ϵ), we obtain pz(x) px(z) exp ( 2ϵ) 1 2ϵ with probability at least 1 2ϵ. Substituting α = 1 2ϵ in inequality (31) yields that S2 4ϵ, as claimed. 6. Discussion In this paper, we focused on improving mixing rate of MCMC sampling algorithms for polytopes by building on the advancements in the field of interior point methods. We proposed and analyzed two different barrier based MCMC sampling algorithms for polytopes that outperforms the existing sampling algorithms like the ball walk, the hit-and-run and the Dikin walk for a large class of polytopes. We provably demonstrated the fast mixing of the Vaidya walk, O n0.5d1.5 and the John walk, O d2.5poly-log(n/d) from a warm start. Our numerical experiments, albeit simple, corroborated with our theoretical claims: the Vaidya walk mixes at least as fast the Dikin walk and significantly faster when the number of constraints is quite large compared to the dimension of the underlying space. For the John walk, the logarithmic factors were dominant in all our experiments and thereby we deemed the result of importance only for set-ups with polytopes in very high dimensions with number of constraints overwhelmingly larger than the dimensions. Besides, proving the mixing time guarantees for the improved John walk (Conjecture 5) is still an open question. Narayanan (2016) analyzed a generalized version of the Dikin walk for arbitrary convex sets equipped with self-concordant barrier. From his results, we were able to derive mixing time bounds of O nd4 and O d5poly-log(n/d) from a warm start for the Vaidya walk and the John walk respectively. 
Our proof takes advantage of the specific structure of the Vaidya and John walk, resulting a better mixing rate upper bound the the general analysis provided by Narayanan (2016). Chen, Dwivedi, Wainwright and Yu While our paper has mainly focused on sampling algorithms on polytopes, the idea of using logarithmic barrier to guide sampling can be extended to more general convex sets. The self-concordance property of the logarithmic barrier for polytopes is extended by Anstreicher (2000) to more general convex sets defined by semidefinite constraints, namely, linear matrix inequality (LMI) constraints. Moreover, Narayanan (2016) showed that for a convex set in Rd defined by n LMI constraints and equipped with the log-determinant barrier the semidefinite analog of the logarithmic barrier for polytopes the mixing time of the Dikin walk from a warm start is O nd2 . It is possible that an appropriate Vaidya walk on such sets would have a speed-up over the Dikin walk. Narayanan and Rakhlin (2013) used the Dikin walk to generate samples from time varying log-concave distributions with appropriate scaling of the radius for different class of distributions. We believe that suitable adaptations of the Vaidya and John walks for such cases would provide significant gains. Acknowledgements This research was supported by Office of Naval Research grant DOD ONR-N00014 to MJW and in part by ARO grant W911NF1710005, NSF-DMS 1613002, the Center for Science of Information (CSo I), a US NSF Science and Technology Center, under grant agreement CCF-0939370 and the Miller Professorship (2016-2017) at UC Berkeley to BY. In addition, MJW was partially supported by National Science Foundation grant NSF-DMS-1612948 and RD was partially supported by the Berkeley Fellowship. Fast MCMC Sampling Algorithms on Polytopes A Auxiliary results for the Vaidya walk 32 A.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 A.2 Basic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 B Proof of Lemma 9 34 C Proof of Lemma 10 35 C.1 Auxiliary results for the proof of Lemma 10 . . . . . . . . . . . . . . . . . . 35 C.2 Proof of claim (32a) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 C.3 Proof of claim (32b) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 C.4 Proof of Lemma 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 C.5 Proof of Lemma 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 C.6 Proof of Lemma 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 D Analysis of the John walk 51 D.1 Auxiliary results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 D.2 Proof of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 D.3 Proof of Lemma 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 E Technical Lemmas for the John walk 57 E.1 Deterministic expressions and bounds . . . . . . . . . . . . . . . . . . . . . 57 E.2 Tail Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 F Proof of Lemma 5 60 G Proof of Lemma 6 62 G.1 Proof of claim (85a) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 G.2 Proof of claim (85b) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 H Proofs of Lemmas from Section E.1 69 H.1 Proof of Lemma 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 H.2 Proof of Lemma 10 . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . 73 H.3 Proof of Lemma 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 H.4 Proof of Corollary 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 I Proof of Lemmas from Section E.2 76 I.1 Proof of Lemma 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 I.2 Proof of Lemma 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 J Proof of Lov asz s Lemma 80 Chen, Dwivedi, Wainwright and Yu Appendix A. Auxiliary results for the Vaidya walk In this appendix, we first summarize a few notations used in the proofs related to Theorem 1, and collect the auxiliary results for the later proofs. A.1 Notation We begin with introducing the notation. Recall A Rn d is a matrix with a i as its i-th row. For any positive integer p and any vector v = (v1, . . . , vp) , diag(v) = diag(v1, . . . , vp) denotes a p p diagonal matrix with the i-th diagonal entry equal to vi. Recall the definition of Sx: Sx = diag (sx,1, . . . , sx,n) where sx,i = bi a i x for each i [n]. (33) Furthermore, define Ax = S 1 x A for all x int (K), and let Υx denote the projection matrix for the column space of Ax, i.e., Υx := Ax(A x Ax) 1A x = Ax 2F 1 x A x . (34) Note that for the scores σx (9b), we have σx,i = (Υx)ii for each i [n]. Let Σx be an n n diagonal matrix defined as Σx = diag (σx,1, . . . , σx,n) . (35) Let σx,i,j := (Υx)ij, and let Υ(2) x denote the Hadamard product of Υx with itself, i.e., (Υ(2) x )ij = σ2 x,i,j = a i 2F 1 x aj 2 s2 x,is2 x,j for all i, j [n]. (36) Using the shorthand θx := θVx, we define Θx := diag (θx,1, . . . , θx,m) where θx,i = a i V 1 x ai s2 x,i for i [n], and Ξx := (θ2 x,i,j) where θ2 x,i,j = a i V 1 x aj 2 s2 x,is2 x,j for i, j [n]. In our new notation, we can re-write the Vaidya matrix Vx defined in equation (9a) as Vx = A x (Σx + βVI) Ax, where βV = d/n. A.2 Basic Properties We begin by summarizing some key properties of various terms involved in our analysis. Lemma 11 For any vector x int (K), the following properties hold: (a) σx,i = Pn j=1 σ2 x,i,j = Pn j,k=1 σx,i,jσx,j,kσx,k,i for each i [n], (b) Σx Υ(2) x , Fast MCMC Sampling Algorithms on Polytopes (c) Pn i=1 θx,i (σx,i + βV) = d, (d) i [n], θx,i = Pn j=1 (σx,j + βV) θ2 x,i,j, for each i [n], (e) θ x (Σx + βVI) θx = Pn i=1 θ2 x,i (σx,i + βV) (f) βV 2Fx Vx (1 + βV) 2Fx. where βV = d/n was defined in equation (9b). Proof We prove each property separately. Part (a): Using Id = 2Fx 2Fx 1, we find that σx,i = a i 2Fx 1 2Fx 2Fx 1 ai s2 x,i = a i 2Fx 1 2 Pn j=1 a j aj s2 x,j 2Fx 1 ai i,j=1 σ2 x,i,j. Applying a similar trick twice and performing some algebra, we obtain σx,i = a i 2Fx 1 2Fx 2Fx 1 2Fx 2Fx 1 ai s2 x,i = i,j,k=1 σx,i,jσx,j,kσx,k,i. Part (b): From part (a), we have that Σx Υ(2) x is a symmetric and diagonally dominant matrix with non-negative entries on the diagonal. Applying Gershgorin s theorem (Bhatia, 2013; Horn and Johnson, 2012), we conclude that it is PSD. Part (c): Since trace(AB) = trace(BA), we have i=1 θx,i (σx,i + βV) = trace i=1 (σx,i + βV) aia i s2 x,i = trace (Id) = d. Part (d): An argument similar to part (a) implies that θx,i = a i V 1 x Vx V 1 x ai s2 x,i = a i V 1 x Pn j=1 (σx,i + βV) a j aj s2 x,j V 1 x ai i,j=1 (σx,i + βV) θ2 x,i,j. Part (e): Using part (c) and Lemma 7(c) yields the claim. Part (f): The left inequality is by the definition of Vx. The right inequality uses the fact that Σx Id. We now prove an important result that relates the slackness sx and sy at two points, in terms of x y x. 
Lemma 12 For all x, y int (K), we have 1 sy,i 4 x y x for each i [n]. Chen, Dwivedi, Wainwright and Yu Proof For any pair x, y int (K) and index i [n], we have a i (x y) 2 = (V 1 1 2 x (x y) 2 (i) V 1 2 x ai 2 2 V 1 2 x (x y) 2 2 = a T i V 1 x ai x y 2 x = θx,is2 x,i x y 2 x ds2 x,i x y 2 x , where step (i) follows from the Cauchy-Schwarz inequality, and step (ii) uses the bound θx,i from Lemma 7(c). Noting the fact that a i (x y) = sy,i sx,i, the claim follows after simple algebra. Appendix B. Proof of Lemma 9 In this appendix section, we prove Lemma 9 using results from the previous appendix. As a direct consequence of Lemma 12, we find that 1 sy,i d , for any x, y int (K) such that x y x t (nd)1/4 . The Hessian 2Fy is thus sandwiched in terms of the Hessian 2Fx as 2 2Fx 2Fy 1 + t By the definition of σx,i and σy,i, we have 2 σx,i σy,i 2 σx,i for all i [n]. (37) Consequently, we find that (1 + ω)4 1 8ω and (1 + ω)2 (1 ω)4 1 + 8ω for any ω 0, 1 Applying this sandwiching pair of inequalities with ω = t/ d yields the claim. Fast MCMC Sampling Algorithms on Polytopes Appendix C. Proof of Lemma 10 We begin by defining ϕx,i := σx,i + βV s2 x,i for i [n], and Ψx := 1 2 log det Vx, for all x int (K) . (38) Further, for any two points x and z, let xz denote the set of points on the line segment joining x and z. The proof of Lemma 10 is based on a Taylor series expansion, and so requires careful handling of σ, ϕ, Ψ and their derivatives. At a high level, the proof involves the following steps: (1) perform a Taylor series expansion around x and along the line segment xz; (2) transfer the bounds of terms involving some point y xz to terms involving only x and z; and then (3) use concentration of Gaussian polynomials to obtain high probability bounds. C.1 Auxiliary results for the proof of Lemma 10 We now introduce some auxiliary results involved in these three steps. The following lemma provides expressions for gradients of σ, ϕ and Ψ and bounds for directional Hessian of ϕ and Ψ. Let ei Rd denote a vector with 1 in the i-th position and 0 otherwise. For any h Rd and x int (K), define ηx,h,i = ηx,i := a i h/sx,i for each i [n]. Lemma 13 The following relations hold; (a) Gradient of σ: σx,i = 2A x (Σx Υ(2) x )ei for each i [n]. (b) Gradient of ϕ: ϕx,i = 2 s2 x,i A x h 2Σx + βV I Υ(2) x i ei for each i [n]; (c) Gradient of Ψ: Ψx = A x 2 Σx + βV I Υ(2) x θx; (d) Bound on 2ϕ: s2 x,i 1 2h 2ϕx,ih 14 (σx,i + βV) η2 x,i+11 Pn j=1 σ2 x,i,jη2 x,j for i [n]; (e) Bound on 2Ψ: 1 2h 2Ψx h 13 Pn i=1 (σx,i + βV) θx,iη2 x,i+17 2 Pn i,j=1 σ2 x,i,jθx,iη2 x,j. See Section C.6 for the proof of this claim. The following lemma that shows that for a random variable z Px, the slackness sz,i is close to sx,i with high probability. Lemma 14 For any ϵ (0, 1/4], r (0, 1) and x int (K), we have i [n], v xz, sx,i sv,i (1 r (1 + δ) , 1 + r (1 + δ)) 1 ϵ/4, where δ = q d . Thus for any d 1 and r 1/ h 20 1 + q ϵ i , we have i [n], v xz, sx,i sv,i (0.95, 1.05) 1 ϵ/4. Chen, Dwivedi, Wainwright and Yu See Section C.4 for the proof which is based on combining the bound on sx,i sv,i from Lemma 12 with standard Gaussian tail bounds. This result comes in handy for transferring bounds for different expressions in Taylor expansion involving an arbitrary y on xz to bounds on terms involving simply x. The proof follows from Lemma 12 and a simple application of the standard Gaussian tail bounds and is thereby omitted. For brevity, we define the shorthand ˆax,i = 1 sx,i V 1/2 x ai for each i [n]. 
(39) In the following lemma, we state some tail bounds for particular Gaussian polynomials that arise in our analysis. Lemma 15 For any ϵ (0, 1/15], define χk = (2e/k log (4/ϵ))k/2 for k = 2, 3 and 4. Then for ξ N(0, Id) and any x int (K) the following high probability bounds hold: i=1 (σx,i + βV) ˆa x,iξ 2 χ2 i=1 (σx,i + βV) ˆa x,iξ 3 χ3 15 (nd)1/4 # i,j=1 σ2 x,i,j ˆax,i + ˆax,j i=1 (σx,i + βV) ˆa x,iξ 4 χ4 105 (nd)1/2 # See Section C.5 for the proof of these claims. Now we summarize the final ingredients needed for our proofs. Recall that the Gaussian proposal z is related to the current state x via the equation z d= x + r (nd)1/4 V 1/2 x ξ, (41) where ξ N (0, Id). We also use the following elementary inequalities: Cauchy-Schwarz inequality: |u v| u 2 v 2 (C-S) AM-GM inequality: νκ 1 2(ν2 + κ2). (AM-GM) Sum of squares inequality: 1 2 a + b 2 2 a 2 2 + b 2 2 , (SSI) Note that the sum-of-squares inequality is simply a vectorized version of the AM-GM inequality. With these tools, we turn to the proof of Lemma 10. We split our analysis into parts. Fast MCMC Sampling Algorithms on Polytopes C.2 Proof of claim (32a) Using the second degree Taylor expansion, we have Ψz Ψx = (z x) Ψx + 1 2 (z x) 2Ψy (z x) , for some y xz. We claim that for r f(ϵ), we have Pz h (z x) Ψx ϵ/2 i 1 ϵ/2, and (42a) 2 (z x) 2Ψy (z x) ϵ/2 1 ϵ/2. (42b) Note that the claim (32a) is a consequence of these two auxiliary claims, which we now prove. C.2.1 Proof of bound (42a) Equation (41) implies that (z x) Ψx N 0, r2 nd Ψ x V 1 x Ψx . We claim that Ψ x V 1 x Ψx 9 nd for all x int (K) . (43) We prove this inequality at the end of this subsection. Taking it as given for now, let ξ N(0, 9r2). Then using inequality (43) and a standard Gaussian tail bound, we find that P h (z x) Ψx ω i P ξ ω 1 exp( ω2/(18r2)), valid for all ω 0. Setting ω = ϵ/2 and noting that r ϵ 18 log(2/ϵ) completes the claim. C.2.2 Proof of bound (42b) Let ηx,i = a i (z x) (mn) 1 4 ˆa x,iξ. Using Lemma 13(e), we have 1 2 (z x) 2Ψy (z x) 13 i=1 (σy,i + βV) θy,i s2 x,i s2 y,i η2 x,i + 17 i,j=1 σ2 y,i,jθy,i s2 x,j s2 y,j η2 x,j i=1 (σx,i + βV) (σy,i + βV) (σx,i + βV) s2 x,i s2 y,i η2 x,i. (44) The last inequality comes from Lemma 7(c) and Lemma 11(a). Setting τ = 1.05, we define the events E1 and E2 as follows: E1 = i [n], sx,i sy,i [2 τ, τ] , and (45a) E2 = i [n], σx,i σy,i 0, τ 2 Chen, Dwivedi, Wainwright and Yu It is straightforward to see that E1 E2 following a similar argument we used to obtain equation (37) in the proof of Lemma 9. Since r 1/ h 20 1 + ϵ i , Lemma 14 implies that P [E1] 1 ϵ/4 whence P [E2] 1 ϵ/4. Using these high probability bounds and the setting τ = 1.05, we obtain that with probability at least 1 ϵ/4 rn i=1 (σx,i + βV) (σy,i + βV) (σx,i + βV) s2 x,i s2 y,i η2 x,i 2 rn i=1 (σx,i + βV) η2 x,i = 2r2 i=1 (σx,i + βV) (ˆa x,iξ)2. Applying the high probability bound Lemma 15 (40a) and the condition we obtain that with probability at least 1 ϵ/2, 1 2 (z x) 2Ψy (z x) ϵ/2, as claimed. C.2.3 Proof of bound (43) We now return to prove our earlier inequality (43). Using the expression for the gradient Ψx from Lemma 13(c), we have that for any vector u Rn u Ψx Ψ x u = D u, A x 2Σx Υ(2) x + βVI θx E2 = D Axu, 2Σx Υ(2) x + βVI θx E2 = D (Σx + βVI) 1 2 Axu, (Σx + βVI) 1/2 2Σx Υ(2) x + βVI θx E2 u Vxu θ x 2Σx Υ(2) x + βVI (Σx + βVI) 1 2Σx Υ(2) x + βVI θx (48) where the last step follows from the Cauchy-Schwarz inequality. As a consequence of Lemma 11(b), the matrix Σx Υ(2) x is PSD. Thus, we have 0 2Σx Υ(2) x + βVI 3 (Σx + βVI) . 
Consequently, we find that 0 (3Σx + 3βVI) 1/2 2Σx Υ(2) x + βVI (3Σx + 3βVI) 1/2 We deduce that all eigenvalues of the matrix L lie in the interval [0, 1] and hence all the eigenvalues of the matrix L2 belong to the interval [0, 1]. As a result, we have 2Σx Υ(2) x + βVI (3Σx + 3βVI) 1 2Σx Υ(2) x + βVI (3Σx + 3βVI) . Fast MCMC Sampling Algorithms on Polytopes Thus, we obtain θ x 2Σx Υ(2) x + βVI (Σx + βVI) 1 2Σx Υ(2) x + βVI θx 9θ x (Σx + βVI) θx. (49) Finally, applying Lemma 11 and combining bounds (48) and (49) yields the claim. C.3 Proof of claim (32b) The quantity of interest can be written as z x 2 z z x 2 x = a i (z x) 2 (ϕz,i ϕx,i) . We can write z = x + αu, where α is a scalar and u is a unit vector in Rd. Then we have z x 2 z z x 2 x = α2 n X a i u 2 (ϕz,i ϕx,i) . We apply a Taylor series expansion for Pn i=1 a i u 2 (ϕz,i ϕx,i) around the point x, along the line u. There exists a point y xz such that a i u 2 (ϕz,i ϕx,i) = a i u 2 (z x) ϕx,i + 1 2 (z x) 2ϕy,i (z x) . Multiplying both sides by α2, and using the shorthand ηx,i = a i (z x) sx,i , we obtain z x 2 z z x 2 x = i=1 η2 x,is2 x,i (z x) ϕx,i + i=1 η2 x,is2 x,i 1 2 (z x) 2ϕy,i (z x) . (50) Substituting the expression for ϕx,i from Lemma 13(b) in equation (50) and performing some algebra, the first term on the RHS of equation (50) can be written as i=1 η2 x,is2 x,i(z x) ϕx,i = 2 i,j=1 σ2 x,i,j (ηx,i + ηx,j)3 . (51) On the other hand, using Lemma 13 (d), we have 1 2s2 x,i (z x) 2ϕy,i (z x) s2 x,i s2 y,i 14 (σy,i + βV) s2 x,i s2 y,i η2 x,i + 11 j=1 σ2 y,i,jη2 x,j s2 x,j s2 y,j Now, we use a fourth degree Gaussian polynomial to bound both the terms on the RHS of inequality (52). To do so, we use high probability bound for sx,i/sy,i. In particular, we use the high probability bounds for the events E1 and E2 defined in equations (45a) and (45b). Chen, Dwivedi, Wainwright and Yu Multiplying both sides of inequality (52) by η2 x,i and summing over the index i, we obtain that with probability at least 1 ϵ/4, we have i=1 η2 x,is2 x,i 1 2 (z x) 2ϕy,i (z x) i=1 (σy,i + βV) s4 x,i s4 y,i η4 x,i + 11 i,j=1 σ2 y,i,jη2 x,iη2 x,j s2 x,is2 x,j s2 y,is2 y,j (hpb.(45a)) τ 4 14 i=1 (σy,i + βV) η4 x,i + 11 i,j=1 σ2 y,i,jη2 x,iη2 x,j (AM GM) τ 4 14 i=1 (σy,i + βV) η4 x,i + 11 i,j=1 σ2 y,i,j(η4 x,i + η4 x,j) (Lem. 11(a)) 25τ 4 n X i=1 (σy,i + βV) η4 x,i (hpb.(45b)) 50 i=1 (σx,i + βV) η4 x,i, (53) where hpb stands for high probability bound for events E1 and E2. In the last step, we have used the fact that τ 6/(2 τ)2 2 for τ = 1.05. Combining equations (50), (51) and (53) and noting that ηx,i = rˆa i ξ/(nd)1/4, we find that z x 2 z z x 2 x 14 i=1 (σx,i + βV) η3 x,i i,j=1 σ2 x,i,j ((ηx,i + ηx,j) /2)3 i=1 σx,iη4 x,i i=1 (σx,i + βV) ˆa x,iξ 3 + 8 i,j=1 σ2 x,i,j 2(ˆax,i + ˆax,j) ξ 3 i=1 (σx,i + βV) (ˆa x,iξ)4, (54) where the last step follows from the fact that 0 σx,i σx,i + βV. In order to show that z x 2 z z x 2 x is bounded as O 1/ nd with high probability, it suffices to show that with high probability, the third and fourth degree polynomials of ˆa x,iξ, that appear in bound (54), are bounded by O (nd)1/4 and O nd respectively. Applying the bounds (40b), (40c) and (40d) from Lemma 15, we have with probability at least 1 ϵ, z x 2 z z x 2 x r3 Using the condition 5/3χ3 , r ϵ 50 completes our proof of claim (32b). Fast MCMC Sampling Algorithms on Polytopes C.4 Proof of Lemma 14 The proof is based on Lemma 12 and a simple application of the standard chi-square tail bounds. 
According to Lemma 12, we have that for v xz, 1 sv,i According to equation (41), the proposal follows Gaussian distribution n 4 x z x = r d1/2 ξ 2 , where ξ N (0, Id). Using the standard chi-square tail bound we have that for δ > 0, d 1 + δ i exp dδ2/2 . Plugging in δ = q 2 d log 1 2 4 ϵ concludes the lemma. C.5 Proof of Lemma 15 The proof relies on the classical fact that the tails of a polynomial in Gaussian random variables decay exponentially independently of dimension. In particular, Theorem 6.7 by Janson (1997) ensures that for any integers d, k 1, any polynomial f : Rd R of degree k, and any scalar t (2e)k/2, we have P |f(ξ)| t Ef(ξ)2 1 2et2/k , (56) where ξ N(0, In) denotes a standard Gaussian vector in n dimensions. Also, the following observations on the behavior of the vectors ˆax,i defined in equation (39) are useful: ˆax,i 2 2 = θx,i (i) rn d for all i [n], and (57a) (ˆa x,iˆax,j)2 = θ2 x,i,j for all i, j [n], (57b) where inequality (i) follows from Lemma 7 (c). C.5.1 Proof of bound (40a) i=1 (σx,i + βV) ˆa x,iξ 2 !2 = i,j=1 (σx,i + βV) (σx,j + βV) E ˆa x,iξ 2 ˆa x,jξ 2 i,j=1 (σx,i + βV) (σx,j + βV) ˆax,i 2 2 ˆax,j 2 2 + 2 ˆa x,iˆax,j 2 i,j=1 (σx,i + βV) (σx,j + βV) θx,iθx,j + 2θ2 x,i,j (i) = d2 + 2d Chen, Dwivedi, Wainwright and Yu where step (i) follows from properties (c) and (d) from Lemma 11. Applying the bound (56) with k = 2, t = e log(4 ϵ) yields the claim. We verify that for ϵ (0, 1/15], t 2e. C.5.2 Proof of bound (40b) Using Isserlis theorem (Isserlis, 1918) for Gaussian moments, we obtain i=1 (σx,i + βV) ˆa x,iξ 3 !2 = i,j=1 (σx,i + βV) (σx,i + βV) E ˆa x,iξ 3 ˆa x,jξ 3 i,j=1 (σx,i + βV) (σx,j + βV) ˆax,i 2 2 ˆax,j 2 2 ˆa x,iˆax,j | {z } =:N1 i,j=1 (σx,i + βV) (σx,j + βV) ˆa x,iˆax,j 3 | {z } =:N2 We claim that the two terms in this sum are bounded as N1 nd. Assuming the claims as given, we now complete the proof. Plugging in the bounds for N1 and N2 in equation (58) we find that E Pn i=1 (σx,i + βV) ˆa x,iξ 3 2 15 nd. Applying the bound (56) with k = 3, t = 2e 3 log(4/ϵ) 3/2 yields the claim. We also verify that for ϵ (0, 1/15], t (2e)3/2. We now turn to proving the bounds on N1 and N2. Bounding N1: Let B be an n d matrix with its i-th row given by p (σx,i + βV)ˆa x,i. Observe that i=1 (σx,i + βV) ˆaiˆa x,i = V 1/2 x i=1 (σx,i + βV) aia i s2 x,i V 1/2 x = V 1/2 x Vx V 1/2 x = Id. (59) Thus we have B B = Id, which implies that BB is an orthogonal projection matrix. Letting v Rn be a vector such that vi = p (σx,i + βV) ˆax,i 2 2, we then have i,j=1 (σx,i+βV) ˆax,i 2 2 ˆa x,i (σx,j+βV) ˆax,j 2 2 ˆax,j = i=1 (σx,i+βV) ˆax,i 2 2 ˆax,i (i) v 2 2 , where inequality (i) follows from the fact that v Pv v 2 2 for any orthogonal projection matrix P. Equation (57a) implies that v2 i = (σx,i + βV) θ2 x,i. Using Lemma 11(e), we find that i=1 (σx,i + βV) θ2 x,i Fast MCMC Sampling Algorithms on Polytopes Bounding N2: We see that i,j=1 (σx,i + βV) (σx,j + βV) ˆa x,iˆax,j 3 (C S) i,j=1 (σx,i + βV) (σx,j + βV) ˆa x,iˆax,j 2 ˆax,i 2 ˆax,j 2 (eqns.(57a),(57b)) i,j=1 (σx,i + βV) (σx,j + βV) θ2 x,i,j p (Lem. 7(c)) rn i,j=1 (σx,i + βV) (σx,j + βV) θ2 x,i,j. We now apply Lemma 11(d) followed by Lemma 11(c) to obtain the claimed bound on N2. C.5.3 Proof of bound (40c) Let ci,j = (ˆax,i + ˆax,j) 2 for i, j [n]. Using Isserlis theorem for Gaussian moments, we obtain i,j=1 σ2 x,i,j c i,jξ 3 i,j,k,l=1 σ2 x,i,jσ2 x,k,l E c i,jξ 3 c k,lξ 3 i,j,k,l=1 σ2 x,i,jσ2 x,k,l ci,j 2 2 ck,l 2 2 c i,jck,l | {z } =: C1 i,j,k,l=1 σ2 x,i,jσ2 x,k,l c i,jck,l 3 | {z } =: C2 We claim that C1 nd. 
Assuming the claims as given, the result follows using similar arguments as in the previous part. We now bound Ci, i = 1, 2, using arguments similar to the ones used in Section C.5.2 to bound Ni, i = 1, 2, respectively. The following bounds on ci,j 2 2 are used in the arguments that follow: ˆai 2 2 + ˆaj 2 2 = 1 2 (θx,i + θx,j) (60a) Lem. 7(c) rn Bounding C1: Let B be the same n d matrix as in the proof of previous part with its i-th row given by p (σx,i + βV)ˆa x,i. Define the vector u Rd with entries given by Chen, Dwivedi, Wainwright and Yu ui = Pn j=1 σ2 x,i,j ci,j 2 2/(σx,i + βV)1/2. We have i,j,k,l=1 σ2 x,i,jσ2 x,k,l ci,j 2 2 ck,l 2 2 c i,jck,l i,j=1 σ2 x,i,j ci,j 2 2 ci,j i,j=1 σ2 x,i,j ci,j 2 2 ˆax,i i,j=1 σ2 x,i,j ci,j 2 2 ˆax,j 2 (i) u 2 2 , where inequality (i) follows from the fact that v Pv v 2 2 for any orthogonal projection matrix P. It is left to bound the term u2 i . We see that u2 i = 1 σx,i + βV j,k=1 σ2 x,i,jσ2 x,i,k ci,j 2 2 ci,k 2 2 (bnd. (60b)) rn d 1 σx,i + βV j,k=1 σ2 x,i,jσ2 x,i,k ci,j 2 2 (Lem. 11(a)) rn d σx,i σx,i + βV j=1 σ2 x,i,j ci,j 2 2 (bnd. (60a)) rn j=1 σ2 x,i,j θx,i + θx,j Now, summing over i and using symmetry of indices i, j, we find that j=1 σ2 x,i,jθx,i (Lem. 11(a)) = rn i=1 σx,iθx,i (Lem. 11(c)) thereby implying that C1 Bounding C2: Using the Cauchy-Schwarz inequality and the bound (60b), we find that i,j,k,l=1 σ2 x,i,jσ2 x,k,l c i,jck,l 3 i,j,k,l=1 σ2 x,i,jσ2 x,k,l c i,jck,l 2 ci,j 2 ck,l 2 i,j,k,l=1 σ2 x,i,jσ2 x,k,l c i,jck,l 2 . Using SSI and the symmetry of pairs of indices (i, j) and (k, l), we obtain i,j,k,l=1 σ2 x,i,jσ2 x,k,l c i,jck,l 2 i,j,k,l=1 σ2 x,i,jσ2 x,k,l ˆa x,iˆak 2 = i,k=1 σx,iσx,k ˆa x,iˆak 2 . The resulting expression can be bounded as follows: i,k=1 σx,iσx,k ˆa x,iˆak 2 (eqn.(57b)) = i,k=1 σx,iσx,kθ2 x,i,k (Lem. 11(d)) i=1 σx,iθx,i (Lem. 11(c)) n. Putting the pieces together yields the claimed bound on C2. Fast MCMC Sampling Algorithms on Polytopes C.5.4 Proof of bound (40d) Observe that ˆa x,iξ N (0, θx,i) and hence E ˆa x,iξ 8 = 105 θ4 x,i. Thus we have i=1 σx,i ˆa x,iξ 4 !2 C S i,j=1 σx,iσx,j E ˆa x,iξ 8 1 2 E ˆa x,jξ 8 1 i,j=1 σx,iσx,jθ2 x,iθ2 x,j i=1 σx,iθ2 x,i (Lem. 11(e)) 105nd. Applying the bound (56) with k = 4, t = e 2 log(4/ϵ) 2 yields the result. We also verify that for ϵ (0, 1/15], we have t (2e)2 C.6 Proof of Lemma 13 We now derive the different expressions for derivatives and prove the bounds for Hessians of x 7 ϕx,i, i [n] and x 7 Ψx. In this section we use the simpler notation Hx := 2Fx. C.6.1 Gradient of σ Using sx+h,i = (bi a i (x + h)) = sx,i a i h, we define the Hessian difference matrix H x,h := Hx+h Hx = 1 (sx,i a i h)2 1 Up to second order terms, we have 1 s2 x+h,i = 1 s2 x,i 1 + 2a i h sx,i + 3(a i h)2 + O h 3 2 , (62a) aia i s2 x,i " 2a i h sx,i + 3(a i h)2 + O h 3 2 , (62b) a T i H 1 x+hai = a i H 1 x ai a i H 1 x H x,h H 1 x ai + a i H 1 x H x,h H 1 x H x,h H 1 x ai + O h 3 2 . Chen, Dwivedi, Wainwright and Yu Collecting different first order terms in σx+h,i σx,i, we obtain σx+h,i σx,i = 2 a i H 1 x ai s2 x,i a i h sx,i 2 a i H 1 x Pn j=1 aja j s2 x,j s2 x,i + O h 2 2 σx,i a i h sx,i j=1 σ2 x,i,j a j h sx,j = 2 [(Σx Υ(2) x )S 1 x A]i h + O h 2 2 . Dividing both sides by h and letting h 0 yields the claim. C.6.2 Gradient of ϕ Using the chain rule and the fact that sx,i = ai, we find that ϕx,i = σx,i s2 x,i 2 (σx,i + βV) sx,i = 2 s2 x,i A S 1 x h 2Σx + βV I Υ(2) x i ei, as claimed. 
C.6.3 Gradient of Ψ For convenience, let us restate equations (39) and (59): ˆax,i = 1 sx,i V 1/2 x ai, and i=1 (σx,i + βV) ˆax,iˆa x,i = Id. For a unit vector h, we have h log det Vx = lim δ 0 1 δ (σx+δh,i + βV) 1 δa i h/sx,i 2 ˆax,iˆa x,i i=1 (σx,i + βV) ˆax,iˆa x,i Let log L denote the logarithm of the matrix L. Keeping track of the first order terms on RHS of equation (63), we find that i=1 (σx+δh,i + βV) ˆax,iˆa x,i 1 δa i h/sx,i 2 i=1 (σx,i + βV) ˆax,iˆa x,i σx+δh,i + βV + δh σx,i 1 + 2δa i h s2 x,i i=1 (σx,i + βV) ˆax,iˆa x,i 2 (σx,i + βV) a i h s2 x,i + h σx,i ˆax,iˆa x,i 2 (σx,i + βV) a i h s2 x,i + h σx,i Fast MCMC Sampling Algorithms on Polytopes where we have used the fact trace(log I) = 0. Letting δ 0 and substituting expression of h σx from part (a), we obtain h log det Vx = A x 4Σx + 2βVI 2Υ(2) x Θxh. C.6.4 Bound on Hessian 2ϕ In terms of the shorthand Eii = eie i , we claim that for any h Rd, h 2ϕx,ih = 2 s2 x,i h A x Eii 3 (Σx + βVI) + 7Σx 8 diag(Υ(2) x ei) Eii + diag(Υxei)(4Υx 3I) diag(Υxei) Axh. (64) ϕx+h,i ϕx,i = a i H 1 x+h,iai s4 x+h,i a i H 1 x,i ai s4 x,i | {z } =:A1 1 s2 x+h,i 1 | {z } =:A2 The second order Taylor expansion of 1/s4 x,i is given by 1 s4 x+h,i = 1 s4 x,i 1 + 4a i h sx,i + 10(a i h)2 + O h 3 2 . Let B1 and B2 denote the second order terms, i.e., the terms that are of order O h 2 2 , in Taylor expansion of A1 and A2 around x, respectively. Borrowing terms from equations (62a)-(62c) and simplifying we obtain B1 = 10σx,i (a i h)2 s2 x,i 8a i h sx,i σ2 x,i,j s2 x,i a j h sx,j 3 σ2 x,i,j s2 x,i sx,i σx,j,l σx,l,i a l h sx,l , and B2 = 3βV (a i h)2 Observing that the second order term in the Taylor expansion of ϕx+h,i around x, is exactly 1 2h 2ϕx,ih yields the claim (64). We now turn to prove the bound on the directional Chen, Dwivedi, Wainwright and Yu Hessian. Recall ηx,i = a i h/sx,i. We have 1 2h 2ϕx,ih 3 (σx,i+βV) η2 x,i+7σx,iη2 x,i 8 j=1 σ2 x,i,jηx,jηx,i 3 j=1 σ2 x,i,jη2 x,j+4 j,k=1 σx,i,jσx,j,kσx,k,iηx,jηx,k (i) 10 (σx,i + βV) η2 x,i + 8 j=1 σ2 x,i,j |ηx,iηx,j| + 7 j=1 σ2 x,i,jη2 x,j (ii) 10 (σx,i + βV) η2 x,i + 4 j=1 σ2 x,i,j η2 x,i + η2 x,j + 7 j=1 σ2 x,i,jη2 x,j (iii) 10 (σx,i + βV) η2 x,i + 4 j=1 σx,iη2 x,i + 4 j=1 σ2 x,i,jη2 x,j + 7 j=1 σ2 x,i,jη2 x,j, (iv) 14 (σx,i + βV) η2 x,i + 11 j=1 σ2 x,i,jη2 x,j, where step (i) follows from the fact that diag(Υyei)Υy diag(Υyei) diag(Υyei) diag(Υyei) since Υy is an orthogonal projection matrix; step (ii) follows from AM-GM inequality; step (iii) follows from the symmetry of indices i and j and Lemma 11(a), and step (iv) from the fact that σx,i σx,i + βV. C.6.5 Bound on Hessian 2Ψ 1 2h 2 log det Vx h =1 2 lim δ 0 1 δ2 (σx+δh,i + βV) 1 δa i h/sx,i 2 ˆax,iˆa x,i + trace log (σx δh,i + βV) 1 + δa i h/sx,i 2 ˆax,iˆa x,i 2 trace log i=1 (σx + βV) ˆax,iˆa x,i Fast MCMC Sampling Algorithms on Polytopes Up to second order terms, we have i=1 (σx+δh,i + βV) ˆax,iˆa x,i 1 δa i h/sx,i 2 σx,i + βV + δh σx,i + 1 2δ2h 2σx,ih 1 + 2δa i h sx,i + 3δ2 a i h sx,i ˆax,iˆa x,i σx,i + βV + δh σx,i + 1 2δ2h 2σx,ih 1 + 2δa i h sx,i + 3δ2 a i h sx,i ˆax,iˆa x,i σx,i + βV + δh σx,i + 1 2δ2h 2σx,ih 1 + 2δa i h sx,i + 3δ2 a i h sx,i ˆax,iˆa x,i We can similarly obtain the second order expansion of the term trace log Pn i=1 (σx δh,i+βV) (1+δa i h/sx,i) 2 ˆax,iˆa x,i Recall ηx,i = a i h sx,i . 
Using part (a) to substitute h σx,i, we obtain 1 2h 2 log det Vx h = 3 (σx,i + βV) η2 x,i + 4 j=1 σ2 x,i,jηx,iηx,j i,j=1 (2σx,i + βV) (2σx,j + βV) ηx,iηx,jθ2 x,i,j 2 i,j,k=1 (2σx,i + βV) σ2 x,j,kθ2 x,i,kηx,iηx,j i,j,k,l=1 σ2 x,i,lσ2 x,j,kθ2 x,k,lηx,iηx,j We claim that the directional Hessian h 2σx,ih is given by h 2σx,ih = 2 h A x h Eii(3Σx 4 diag(Υ(2) x ei))Eii + diag(Υxei)(4Υx 3I) diag(Υxei) i Axh. Assuming the claim at the moment we now bound h 2Ψxh . To shorten the notation, we drop the x-dependence of the terms σx,i, σx,i,j, θx,i and ηx,i. Since Υx is an orthogonal projection matrix, we have diag(Υxei)Υx diag(Υxei) diag(Υxei) diag(Υxei). Using this fact and substituting the expression for h 2σx,ih from equation (68) in equation (67), we obtain h 2Ψxh η2 i + 4 σiη2 i + j=1 σ2 i,jηiηj + 3σiη2 i + 4 j=1 σ2 i,jηiηj + 7 j=1 σ2 i,jη2 j i,j=1 (σi + βV) (σj + βV) ηiηjθ2 i,j + 8 i,j,k=1 (σi + βV) σ2 j,kθ2 i,kηiηj + 2 i,j,k,l=1 σ2 i,lσ2 j,kθ2 k,lηiηj Chen, Dwivedi, Wainwright and Yu Rearranging terms, we find that 10 (σi + βV) η2 i + 8 j=1 σ2 i,jηiηj + 7 j=1 σ2 i,jη2 j i,j=1 (σi + βV) (σj + βV) ηiηjθ2 i,j + 8 i,j,k=1 (σi + βV) σ2 j,kθ2 i,kηiηj + 2 i,j,k,l=1 σ2 i,lσ2 j,kθ2 k,lηiηj 10 (σi + βV) η2 i + 4 j=1 σ2 i,j η2 i + η2 j + 7 j=1 σ2 i,jη2 j σi + βV σj + βV θ2 i,j(η2 i + η2 j ) + 4 σi + βV σ2 j,kθ2 i,k(η2 i + η2 j ) + i,j,k,l=1 σ2 i,lσ2 j,kθ2 k,l(η2 i + η2 j ) where in step (i) we have used the AM-GM inequality. Simplifying further, we obtain 14 (σi + βV) η2 i + 11 j=1 σ2 i,jη2 j i=1 12 (σi + βV) θiη2 i + i,j=1 6σ2 i,jθiη2 j i=1 (σi + βV) θiη2 i + 17 i,j=1 σ2 i,jθiη2 j . Dividing both sides by two completes the proof. Proof of claim (68): In order to compute the directional Hessian of x 7 σx,i, we need to track the second order terms in equations (62a)-(62c). Collecting the second order terms (denoted by σ(2) h ) in the expansion of σx+h,i σx,i, we obtain σ(2) h = 3 a i H 1 x ai s2 x,i s2 x,i 4 a i H 1 x Pn j=1 aja j s2 x,j 3 a i H 1 x Pn j=1 aja j s2 x,j + 4 a i H 1 x Pn j=1 aja j s2 x,j Pn l=1 ala l s2 x,l We simply each term on the RHS one by one. Simplifying the first term, we obtain 3 a i H 1 x ai s2 x,i s2 x,i = 3 σx,iη2 x,i = h 3 A x EiiΣx Eii Ax h. Fast MCMC Sampling Algorithms on Polytopes For the second term, we have 4 a i H 1 x Pn j=1 aja j s2 x,j a i h sx,i = 4 ηx,i j=1 σ2 x,i,j ηx,j = 4 h A x Eii diag Υ(2) x ei Eii Axh. The third term can be simplified as follows: 3 a i H 1 x Pn j=1 aja j s2 x,j j=1 σ2 x,i,jη2 x,j = 3 h A x diag (Υxei) diag (Υxei) Axh For the last term, we find that 4 a i H 1 x Pn j=1 aja j s2 x,j Pn l=1 ala l s2 x,l j,l=1 σx,i,j σx,j,l σx,l,i ηx,j ηx,l = 4 h A x diag (Υxei) Υx diag (Υxei) Axh. Putting together the pieces yields the expression (68). Appendix D. Analysis of the John walk We recap the key ideas of the John walk for convenience. We have designed a new proposal distribution by making use of an optimal set of weights to define the new covariance structure for the Gaussian proposals, where optimality is defined with respect to the convex program defined below (69). The optimality condition is closely related to the problem of finding the largest ellipsoid at any interior point of the polytope, such that the ellipsoid is contained within the polytope. This problem of finding the largest ellipsoid was first studied by John (1948) who showed that each convex body in Rd contains a unique ellipsoid of maximal volume. 
More recently, Lee and Sidford (2014) make use of approximate John Ellipsoids to improve the convergence rate of interior point methods for linear programming. We refer the readers to their paper for more discussion about the use of John Ellipsoids for optimization problems. In this work, we make use of these ellipsoids for designing sampling algorithms with better theoretical bounds on the mixing times. The vector ζx = (ζx,1, . . . , ζx,n) defined in the John walk s inverse covariance matrix (11) is computed by solving the following optimization problem: ζx = arg min w Rn cx (w) := αJ log det A S 1 x W αJS 1 x A βJ i=1 log wi, (69) where the parameters αJ, βJ are given by αJ = 1 1 log2 (2n/d) and βJ = d Chen, Dwivedi, Wainwright and Yu and W denotes an n n diagonal matrix with Wii = wi for each i [n]. In particular, for our proposal the inverse covariance matrix is proportional to Jx, where i=1 ζx,i aia i (bi a i x)2 . (70) where κ := κn,d = log2(2n/d) = (1 αJ) 1. Recall that for John walk with parameter r d3/4κ2 , the proposals at state x are drawn from the multivariate Gaussian distribution given by N x, r2 d3/2κ4 J 1 x , which we denote by PJ x. In particular, the proposal density at point x int (K) is given by px(z) := p(x, z) = p 2r2 (z x) Jx(z x) Here we restate our result for the mixing time of the John walk. Theorem 2 Let µ0 be any distribution that is M-warm with respect to π and let n < exp( d). Then for any δ (0, 1], the John walk with parameter r John = 10 5 satisfies T k John(r)(µ0) π TV δ for all k C d2.5 log4 2 D.1 Auxiliary results We begin by proving basic properties of the weights ζx which are used throughout the paper. For x int (K) , w Rn ++, define the projection matrix Υx,w as follows Υx,w = W α/2Ax(A x W αAx) 1A x W α/2, (72) where Ax = S 1 x A and W is the n n diagonal matrix with i-th diagonal entry given by wi. Also, let σx,i := (Υx,ζx)ii for x int (K) and i [n]. (73) Define the John slack sensitivity θJ x as θx := θJ x := a 1 J 1 x a1 s2 x,1 , . . . , a n J 1 x an s2x,n ! for all x int (K) . (74) Further, for any x int (K), define the John local norm at x as Jx : v 7 J1/2 x v 2 = i=1 ζx,i (a i v)2 s2 x,i . (75) We now collect some basic properties of the weights ζx and the local sensitivity θx and restate parts of Lemma 7 for clarity here. Fast MCMC Sampling Algorithms on Polytopes Lemma 3 For any x int (K), the following properties are true: (a) (Implicit weight formula) ζx,i = σx,i + βJ for all i [n], (b) (Uniformity) ζx,i [βJ, 1 + βJ] for all i [n], (c) (Total size) Pn i=1 ζx,i = 3d/2, and (d) (Slack sensitivity) θx,i [0, 4] for all i [n]. Lemma 3 follows from Lemmas 14 and 15 by Lee and Sidford (2014) and thereby we omit its proof. Next, we state a key lemma that is crucial for proving the convergence rate of John walk. In this lemma, we provide bounds on difference in total variation norm between the proposal distributions of two nearby points. Lemma 4 There exists a continuous non-decreasing function h : [0, 1/30] R+ with h(1/30) 10 5, such that for any ϵ (0, 1/30], the John walk with r [0, h(ϵ)] satisfies PJ x PJ y TV ϵ, for all x, y int (K) such that x y Jx ϵr 2κ2d3/4 , and TJohn(r)(δx) PJ x TV 5ϵ, for all x int (K). (76b) See Section D.3 for its proof. With these lemmas in hand, we are now ready to prove Theorem 2. D.2 Proof of Theorem 2 The proof is similar to the proof of Theorem 1, and relies on the Lov asz s Lemma. Here onwards, we use the following simplified notation Tx = TJohn(r)(δx), Px = PJ x and x = Jx . 
In order to invoke Lov asz s Lemma, we need to show that for any two points x, y int (K) with small cross-ratio d K(x, y), the TV-distance Tx Ty TV is also small. We proceed with the proof in two steps: (A) first, we relate the cross-ratio d K(x, y) to the John local norm of x y at x, and (B) we then use Lemma 4 to show that if x, y int (K) are close in the John local-norm, then the transition kernels Tx and Ty are close in TV-distance. Step (A): We claim that for all x, y int (K), the cross-ratio can be lower bounded as d K(x, y) 1 p 3d/2 x y x . (77) From the arguments in the proof of Theorem 1 (proof for the Vaidya Walk), we have d K(x, y) max i [n] Chen, Dwivedi, Wainwright and Yu Using the fact that maximum of a set of non-negative numbers is greater than the weighted mean of the numbers and Lemma 3, we find that v u u t 1 Pn i=1 ζx,i i=1 ζx,i (a i (x y))2 s2 x,i = x y x p thereby proving the claim (77). Step (B): By the triangle inequality, we have Tx Ty TV Tx Px TV + Px Py TV + Py Ty TV. Using Lemma 4, we obtain that Tx Ty TV 11ϵ, x, y int (K) such that x y x ϵr 2κ2d3/4 . Consequently, the John walk satisfies the assumptions of Lov asz s Lemma with 3d/2 ϵr 2κ2d3/4 and ρ := 1 11ϵ. Plugging in ϵ = 1/30, r = 10 5, we obtain the claimed upper bound of O κ4d5/2 on the mixing time of the random walk. D.3 Proof of Lemma 4 We prove the lemma for the following function, 2 log(4/ϵ) , ϵ 2 32χ1,ϵ , r ϵ 386 24χ2,ϵ , ϵ 5 1680χ4,ϵ , s ϵ 40 χ2,ϵχ6,ϵ 15120 1/2 , r ϵ 204800χ2,ϵ 24 log(32/ϵ) where χ1,ϵ = log(2/ϵ)and χk,ϵ = (2e/k log (16/ϵ))k/2 for k = 2, 3, 4 and 6. A numerical calculation shows that h(1/30) 10 5. We now prove the two parts (76a) (76b) of the Lemma separately. D.3.1 Proof of claim (76a) Applying Pinsker s inequality, and plugging in the closed formed expression for the KL divergence between two Gaussian distributions we find that Px Py 2 TV 2 KL(Py Px) = trace(J 1/2 x Jy J 1/2 x ) d log det(J 1/2 x Jy J 1/2 x ) + κ4d3/2 λi 1 + log 1 r2 x y 2 x , (79) where λ1, . . . , λd > 0 denote the eigenvalues of the matrix J 1/2 x Jy J 1/2 x . To bound the expression (79), we make use of the following lemma: Fast MCMC Sampling Algorithms on Polytopes Lemma 5 For any scalar t [0, 1/64] and pair of points x, y int (K) such that x y x t/κ2, we have 1 48t + 4t2 Id J 1/2 x Jy J 1/2 x 1 + 48t + 4t2 , where denotes ordering in the PSD cone and Id denotes the d-dimensional identity matrix. See Section F for the proof of this lemma. For ϵ (0, 1/30] and r = 10 5, we have t = ϵr/(2d3/4) 1/64, whence the eigenvalues {λi, i [d]} can be sandwiched as d3/4 + ϵ2r2 d3/2 λi 1 + 24ϵr d3/4 + ϵ2r2 d3/2 for all i d. (80) We are now ready to bound the TV distance between Px and Py. Using the bound (79) and the inequality log ω ω 1, valid for ω > 0, we obtain r2 x y 2 x . Using the assumption that x y x ϵr/ 2κ2d3/4 , and plugging in the bounds (80) for the eigenvalues {λi, i [d]}, we find that r2 x y 2 x 2000ϵ2r2 In asserting this inequality, we have used the facts that 1 1 24ω + ω2 1 + 24ω + 1000ω2, and 1 1 + 24ω + ω2 1 24ω + 1000ω2 for all ω 0, 1 100 . Note that for any r [0, 1/100], we have that 2000r2/ d 1/2. Putting the pieces together yields Px Py TV ϵ, as claimed. D.3.2 Proof of claim (76b) 2Px(Kc) | {z } =: S1 min 1, pz(x) | {z } =: S2 where Kc denotes the complement of K. We now show that S1 ϵ and S2 4ϵ, from which the claim follows. 
Bounding the term S1: Note that for z N(x, r2 κ2d3/2 J 1 x ), we can write z d= x + r κd3/4 J 1/2 x ξ, (82) Chen, Dwivedi, Wainwright and Yu where ξ N (0, Id) and d= denotes equality in distribution. Using equation (82) and definition (74) of θx,i, we obtain the bound a i (z x) 2 s2 x,i = r2 " a i J 1/2 x ξ sx,i κ2d3/2 θx,i ξ 2 2 d ξ 2 2 , (83) where step (i) follows from Cauchy-Schwarz inequality, and step (ii) from part (d) of Lemma 3. Define the events d ξ 2 2 < 1 and E := {z int (K)} . Inequality (83) implies that E E and hence P [E ] P [E]. Using a standard Gaussian tail bound and noting that r 1/2 1+ 2/d log(2/ϵ), we obtain P [E] 1 ϵ/2 and whence P [E ] 1 ϵ/2. Thus, we have shown that P [z / K] ϵ/2 which implies that S1 ϵ. Bounding the term S2: By Markov s inequality, we have min 1, pz(x) αP [pz(x) αpx(z)] for all α (0, 1]. (84) By definition (71) of px, we obtain pz(x) px(z) = exp z x 2 z z x 2 x + 1 2 (log det Jz log det Jx) The following lemma provides us with useful bounds on the two terms in this expression, valid for any x int (K). Lemma 6 For any ϵ (0, 1 4] and r (0, h(ϵ)], we have 2 log det Jz 1 2 log det Jx ϵ 1 ϵ, and (85a) z x 2 z z x 2 x 2ϵ r2 We provide the of this lemma in Section G. Using Lemma 6, we now complete the proof of the Theorem 2. For r h(ϵ), we obtain pz(x) px(z) exp ( 2ϵ) 1 2ϵ with probability at least 1 2ϵ. Substituting α = 1 2ϵ in inequality (84) yields that S2 4ϵ, as claimed. Fast MCMC Sampling Algorithms on Polytopes Appendix E. Technical Lemmas for the John walk We begin by summarizing a few key properties of various terms involved in our analysis. Let Σx,w be an n n diagonal matrix defined as Σx,w = diag (σx,w,i, . . . , σx,w,n) where σx,ζx,w,i = (Υx,w)ii, i [n]. (86a) Let Υ(2) x,w denote the hadamard product of Υx,w with itself. Further define Λx,w := Σx,w Υ(2) x,w. (86b) Lee and Sidford (2014) proved that the weight vector ζx is the unique solution of the following fixed point equation: wi = σx,w,i + βJ, i [n]. (87a) To simplify notation, we use the following shorthands: σx = σx,ζx, Υx = Υx,ζx, Υ(2) x = Υ(2) x,ζx, Σx = Σx,ζx, Λx = Λx,ζx. (87b) Thus, we have the following relation: ζx = σx,ζx + βJ1 = σx + βJ1. (87c) E.1 Deterministic expressions and bounds We now collect some properties of various terms defined above. Lemma 7 For any x int (K), the following properties hold: (a) σx,i = Pn j=1 σ2 x,i,j = Pn j,k=1 σx,i,jσx,j,kσx,k,i for each i [n], (b) Σx Υ(2) x , (c) Pn i=1 ζx,iθx,i = d, (d) θx,i = Pn j=1 ζx,iθ2 x,i,j, for each i [n], (e) θ x Σxθx = Pn i=1 θ2 x,iζx,i 4d, and (f) βJ 2Fx Jx (1 + βJ) 2Fx. The proof is based on the ideas similar to Lemma 5 in the proof of the Vaidya walk and is thereby omitted. The next lemma relates the change in slackness sx,i = bi a i x to the John-local norm at x. Lemma 8 For all x, y int (K), we have Chen, Dwivedi, Wainwright and Yu Proof For any pair x, y int (K) and index i [n], we have a i (x y) 2 (i) J 1 2 x ai 2 2 J 1 2x (x y) 2 2 = θx,is2 x,i x y 2 x (ii) 4s2 x,i x y 2 x , where step (i) follows from the Cauchy-Schwarz inequality, and step (ii) uses the bound θx,i from Lemma 3(d). Noting the fact that a i (x y) = sy,i sx,i, the claim follows after simple algebra. We now state various expressions and bounds for the first and second order derivatives of the different terms. To lighten notation, we introduce some shorthand notation. For any y int (K) and h Rd, define the following terms: dy,i = a i h sy,i , i [n] Dy = diag(dy,1, . . . , dy,n), (88a) fy,i = ζ y,ih ζy,i , i [n] Fy = diag(fy,1, . . . 
, fy,n), (88b) 2h 2ζy,ih/ζy,i, i [n] Ly = diag(ℓy,1, . . . , ℓy,n), (88c) ρy := (Gy αΛy) ℓy,1 ... ℓy,n where for brevity in our notation we have omitted the dependence on h. The choice of h is specified as per the context. Further, we define for each x int (K) and i [n] ϕx,i := ζx,i s2 x,i , and Ψx := 1 2 log det Jx, (89) ˆax,i := J 1/2 x ax,i s2 x,i , and ˆbx,i := J 1/2 x AxΛx (Gx αΛx) 1 ei. (90) Next, we state expressions for gradients of ζ, ϕ and Ψ and bounds for directional Hessian of σ, ϕ and Ψ which are used in various Taylor series expansions and bounds in our proof. Lemma 9 (Calculus) For any y int (K) and h Rn, the following relations hold; (a) Gradient of ζ: (fy,1, . . . , fy,n) = 2 (Gy αΛy) 1 Λy Ayh; (b) Hessian of ζ: ρy 1 56κ2 n X i=1 ζy,id2 y,i. (91) (c) Gradient of Ψ: Ψ h = θ y Gy In + (Gy αΛy) 1 Λy Ayh. (d) Gradient of ϕ: ϕ y,ih = ϕy,i (2dy,i + fy,i). Fast MCMC Sampling Algorithms on Polytopes (e) Bound on 2Ψ: 1 2 h ( 2Ψ)h 1 2 h Pn i=1 ζy,i θy,i h 9 d2 y,i + 4f2 y,i i + |Pn i=1 ζy,i θy,iℓy,i| i (f) Bound on 2ϕ: i=1 d2 y,is2 y,i 1 2h 2ϕy,ih i=1 ζy,id4 y,i + 2 i=1 ζy,id3 y,ify,i i=1 ζy,id2 y,iℓy,i The proof is provided in Section H.1. Next, we state some results that would be useful to provide explicit bounds for various terms like fy, ℓy and ρy that appear in the statements of the previous lemma. Note that the following results do not have a corresponding analog in our analysis of the Vaidya walk. Lemma 10 For any c1, c2 0, y int (K), we have c1In + c2Λy (Gy αΛy) 1 Gy c1In + c2 (Gy αΛy) 1 Λy (c1 + c2)2 κ2Gy, where denotes the ordering in the PSD cone. Lemma 11 Let µy denote the n n matrix (Gy αΛy) 1 Gy, and let µy,i,j denote its ij-th entry. Then for each i [n] and y int (K), we have µy,i,i [0, κ], and, (92a) X µ2 y,i,j ζy,j κ3. (92b) Corollary 12 Let ei Rn denote the unit vector along i-th axis. Then for any y int (K), we have Gy (Gy αΛy) 1 ei 1 3 dκ3/2, for all i [n]. (93) Consequently, we also have ||| (Gy αΛy) 1 Gy||| 3 See Section H.2, H.3 and H.4 for the proofs of Lemma 10, Lemma 11 and Corollary 12 respectively. E.2 Tail Bounds We now collect lemmas that provide us with useful tail bounds. We start with a result that shows that for a random variable z Px, the slackness sz,i is close to sx,i with high probability and consequently the weights ζz,i are also close to ζx,i. This result comes in handy for transferring the remainder terms in Taylor expansions to the reference point (around which the series is being expanded). Lemma 13 For any point x int (K) and r 1 25 2 log(4/ϵ), we have i [n], v xz, sx,i sv,i [0.99, 1.01] and ζx,i ζv,i [0.96, 1.04] 1 ϵ/4 (94a) Chen, Dwivedi, Wainwright and Yu See Section I.1 for the proof of this lemma. Next, we state high probability results for some Gaussian polynomials. These results are useful to bound various polynomials of the form Pn i=1 ζx,idk x,i, where dx,i = a i (z x)/sx,i and z is drawn from the transition distribution for the John walk at point x. Lemma 14 (Gaussian moment bounds) To simplify notations, all subscripts on x are omitted in the following statements. For any ϵ (0, 1/30], define χk := χk,ϵ = (2e/k log (16/ϵ))k/2, for k = 2, 3, 4 and 6, then we have i=1 ζi ˆa i ξ 2 χ2 i=1 ζi ˆa i ξ 3 χ3 i=1 ζi ˆa i ξ 2 ˆb i ξ χ3 i=1 ζi ˆa i ξ 4 χ4 i=1 ζi ˆa i ξ 6 χ6 See Section I.2 for the proof. Appendix F. Proof of Lemma 5 As a direct consequence of Lemma 8, for any x, y int (K) such that x y x t/κ2, we have Bounding the terms in 2Fx one by one, we obtain 2 2Fy 2Fx 1 + 2t We claim that log ζy log ζx 16t. 
(97) Assuming the claim as given at the moment, we now complete the proof. Putting the result (97) in matrix form, we obtain that exp ( 16t) In G 1 x Gy exp (16t) In, and hence exp ( 16t) ζx,i ζy,i exp (16t) ζx,i. (98) Fast MCMC Sampling Algorithms on Polytopes Consequently, using the definition of Jx we have, 1 2t 2 exp ( 16t) | {z } ωℓ Jx Jy 1 + 2t 2 exp (16t) | {z } ωu Letting ω = 2t, we obtain ωℓ (1 ω)2 exp ( 8ω) (i) 1 24ω + ω2, and ωu (1 + ω)2 exp (8ω) (ii) 1 + 24ω + ω2, where inequalities (i) and (ii) hold since ω 1/24. Putting the pieces together, we find that 1 48t + 4t2 Jx Jy 1 48t + 4t2 Jx for t [0, 1/48]. Now, we return to the proof of our earlier claim (97). We use an argument based on the continuity of the function x 7 log ζx. (Such an argument appeared in a similar scenario in Lee and Sidford (2014).) For λ [0, 1], define uλ = λy + (1 λ) x. Let λmax := sup λ [0, 1] log ζuλ log ζx 16t . (99) It suffices to establish that λmax = 1. Note that λ = 0 is feasible on the RHS of equation (99) and hence λmax exists. Now for any λ [0, λmax] and i {1, . . . , n}, there exists v on the segment uλx such that |log ζuλ,i log ζx,i| = (i) G 1 v G v (y x) = 2 (Gv αΛv) 1 Λv Av (y x) . where in step (i) we have used the fact that uλ x = λ(y x) and λ [0, 1]. We claim that (Gv αΛv) 1 Λvv1 κ v1 + 2κ2 G1/2 v v1 2 for any v1 Rn. (100) We prove the claim at the end of this section. We now derive bounds for the two terms on the RHS of the equation (100) for v1 = Av(y x). Note that Av (y x) = max i (i) 2t κ2 (1 2t/κ2) Inequality (i) uses bound (96) and inequality (ii) follows by plugging in t 1/64. Next, we have G1/2 v Av (y x) 2 a i (y x) 2 s2 v,i s2 x,i (i) x y 2 x max i [n] ζv,i ζx,i s2 v,i s2 x,i κ4 1 + (16t) + (16t)2 1 + 2t Chen, Dwivedi, Wainwright and Yu where step (i) follows from the definition of the local norm; step (ii) follows from bounds (96) and (99) and the fact that ex 1 + x + x2 for all x [0, 1/4]; and inequality (iii) follows by plugging in t 1/64. Putting the pieces together, we obtain log ζuλ log ζx 2(κ 3t/κ2 + 2κ2 1.5t/κ4) 12t < 16t. The strict inequality is valid for λ = λmax. Consequently, using the continuity of x 7 log ζx, we conclude that λmax = 1. It is left to prove claim (100). Let w := (Gv αΛv) 1 Λvv1. which implies (Gv αΛv) w = Λvv1. Plugging the expression of Gv and Λv, we have (1 α)Σv + βJIn + αΥ(2) v w = Σv Υ(2) v v1. Writing component wise, we find that for any i [n], we have |((1 α)σv,i + βJ) wi| α e i Υ(2) v w + σv,i |v1,i| + e i Υ(2) v v1 (i) ασv,i Σ1/2 v w 2 + σv,i v1 + σv,i Σ1/2 v v1 2 (ii) ασv,i G1/2 v w 2 + σv,i v1 + σv,i G1/2 v v1 2 (iii) ασv,iκ G1/2 v v1 2 + σv,i v1 + σv,i G1/2 v v1 2 , (101) where inequality (ii) from the fact that Σy Gy and inequality (iii) from Lemma 10 with c1 = 0, c2 = 1. To assert inequality (i), observe the following j=1 σ2 y,i,jwj j=1 σ2 y,i,j |wj| (a) σy,i j=1 σy,j |wj| (b) σy,i σy,j |wj| = σy,i Σ1/2 v w 2 , where step (a) follows from the fact that σ2 y,i,j σy,iσy,j, and step (b) from the fac that σy,i [0, 1]. Dividing both sides of inequality (101) by ((1 α)σv,i + βJ) and observing that σv,i/ ((1 α)σv,i + βJ) κ, and α [0, 1], yields the claim. Appendix G. Proof of Lemma 6 We prove Lemma 6 in two parts: claim (85a) in Section G.1 and claim (85b) in Section G.2. G.1 Proof of claim (85a) Using the second order Taylor expansion, we have Ψz Ψx = (z x) Ψx + 1 2 (z x) 2Ψy (z x) , for some y xz. We claim that for r h(ϵ), we have P h (z x) Ψx ϵ/2 i 1 ϵ/2, and (102a) 2 (z x) 2Ψy (z x) ϵ/2 1 ϵ/2. 
(102b) Note that the claim (85a) follows from the above two claims. Fast MCMC Sampling Algorithms on Polytopes G.1.1 Proof of bound (102a) We observe that (z x) Ψx N 0, r2 κ2n Ψ x J 1 x Ψx Let Ex = In + (Gx αΛx) 1 Λx. Substituting the expression of Ψx from Lemma 9 (c) and applying Cauchy-Schwarz inequality, we have that for any vector v Rd v Ψx Ψ x v = (θ x Gx Ex Axv)2 v A x Gx Axv θ x Gx Ex G 1 x Ex Gxθx . (103) Observe that G1/2 x Ex G 1/2 x = In + (In αG 1/2 x Λx G 1/2 x ) 1(G 1/2 x Λx G 1/2 x ). Now, using the intermediate bound (126) from the proof of Lemma 10, we obtain that In G1/2 x Ex G 1/2 x 2κIn, and hence Gx Gx Ex G 1 x Ex Gx 4κ2Gx. Consequently, we have θ x Gx Ex G 1 x Ex Gxθx 4κ2θ x Gxθx = 4κ2 n X i=1 ζx,iθ2 x,i 16κ2d, where the last step follows from Lemma 7. Putting the pieces together into equation (103), we obtain Ψx Ψ x 16κ2d Jx whence J 1/2 x Ψx Ψ x J 1/2 x 16κ2d Id. Noting that the matrix J 1/2 x Ψx Ψ x J 1/2 x has rank one, we have Ψ x J 1 x Ψx = trace J 1/2 x Ψx Ψ x J 1/2 x 16κ2d. Using standard Gaussian tail bound, we have P (z x) Ψx 32χ1r 1 exp χ2 1 . Choosing χ1 = log(2/ϵ), and observing that 32χ1 , (104) yields the claim. G.1.2 Proof of bound (102b) In the following proof, we use h = z x for definitions (88a)-(88d). According to Lemma 9(e), we have 1 2 (z x) 2Ψy (z x) i=1 ζy,i θy,i 2 d2 y,i + 2f2 y,i i=1 ζy,i θy,iℓy,i Chen, Dwivedi, Wainwright and Yu We claim that i=1 ζy,i θy,i 2 d2 y,i + 2f2 y,i i=1 ζy,i θy,iℓy,i i=1 ζy,id2 y,i. (105) Assuming the claim as given at the moment, we now complete the proof. Note that y is some particular point on xz and its dependence on z is hard to characterize. Consequently, we transfer all the terms with dependence on y, to terms with dependence on x only. We have i=1 ζy,id2 y,i = i=1 ζx,id2 x,i ζy,i ζx,i s2 x,i s2 y,i | {z } τy,i We now invoke the following high probability bounds implied by Lemma 13 and Lemma 14 (95a) respectively sup y xz,i [n] τy,i 1.1 1 ϵ/4, and, P i=1 ζx,i ˆa x,iξ 2 χ2 Since h = z x, we have that d2 x,i = r2 κ2d3/2 ˆa x,iξ 2 . Consequently, for 24χ2 , (107) with probability at least 1 ϵ/2, we have 1 2 (z x) 2Ψy (z x) eqn. (105) 386 i=1 ζy,id2 y,i hpb (106) ϵ, which completes the proof. We now turn to the proof of claim (105). First we observe the following relationship between the terms dy,i and fy,i: i=1 ζy,if2 y,i (i) = 4h A yΛy (Gy αΛy) 1Gy (Gy αΛy) 1Λy Ayh (ii) 4κ2h A y Gy Ayh=4κ2 n X i=1 ζy,id2 y,i, where step (i) follows by plugging in the definition of fy,i (88b) and step (ii) by invoking Lemma 10 with c1 = 0 and c2 = 1. Next, we relate the term on the LHS of equation (105) involving ℓy,i to a polynomial in dy,i. Using Lemma 9, we find that i=1 ζy,i θy,iℓy,i = (Gy αΛy) 1 Gyθy (Gy αΛy) ℓy (Gy αΛy) 1 Gyθy | {z } v1 (Gy αΛy) ℓy | {z } ρy Fast MCMC Sampling Algorithms on Polytopes where the last step follows from the Holder s inequality: for any two vectors u, v Rd, we have that u v u v 1. Substituting the bound for the norm v1 from Corollary 12 and the bound on ρy,i from Lemma 9(b), we obtain that i=1 ζy,i θy,iℓy,i 12 nκ3/2 n X 7ζy,id2 y,i+3ζy,if2 y,i+ 13d2 y,j+6f2 y,j Υ2 y,i,j 672 nκ4 n X i=1 ζy,id2 y,i, where the last step follows from Lemma 7(a) and the bound (108). The claim now follows. G.2 Proof of claim (85b) Writing z = x + tu, where t is a scalar and u is a unit vector in Rd, we obtain z x 2 z z x 2 x = t2 n X a i u 2 (ϕz,i ϕx,i) . Now, we use a Taylor series expansion for Pn i=1 a i u 2 (ϕz,i ϕx,i) around the point x, along the line u. 
There exists a point y xz such that a i u 2 (ϕz,i ϕx,i) = a i u 2 (z x) ϕx,i + 1 2 (z x) 2ϕy,i (z x) . Note that the point y in this discussion is not the same as the point y used in previous proofs, in particular in Section G.1. Multiplying both sides by t2, and using the shorthand dx,i = a i (z x) sx,i , we obtain z x 2 z z x 2 x = i=1 d2 x,is2 x,i (z x) ϕx,i + i=1 d2 x,is2 x,i 1 2 (z x) 2ϕy,i (z x) . (109) We claim that for r h(ϵ), we have i=1 d2 x,is2 x,i (z x) ϕx,i ϵ r2 1 ϵ/2, and (110a) i=1 d2 x,is2 x,i 1 2 (z x) 2ϕy,i (z x) 1 ϵ/2. (110b) We now prove each claim separately. G.2.1 Proof of bound (110a) Using Lemma 9(d) and using h = z x where z is given by the relation (82), we find that i=1 d2 x,is2 x,i (z x) ϕx,i = i=1 ζx,id2 x,i (2dx,i + fx,i) i=1 ζx,i ˆa x,iξ 3 + 2r3 i=1 ζx,i ˆa x,iξ 2 ˆb x,iξ (111) Chen, Dwivedi, Wainwright and Yu Using high probability bounds for the two terms in equation (111) from Lemma 14, part (95b) and part (95c), we obtain that i=1 d2 x,is2 x,i (z x) ϕx,i κ5d7/4 ϵ r2 with probability at least 1 ϵ/2. The last inequality uses the condition that 60χ3 . (112) The claim now follows. G.2.2 Proof of bound (110b) Note that dx,isx,i = a i h = dy,isy,i for any h. Using this equality for h = z x, we find that i=1 d2 x,is2 x,i 1 2h 2ϕy,ih i=1 d2 y,is2 y,i 1 2h 2ϕy,ih i=1 ζy,id4 y,i | {z } C1 i=1 ζy,id3 y,ify,i i=1 ζy,id2 y,iℓy,i where step (i) follows from Lemma 9(f). We can write C1 as follows i=1 ζy,id4 y,i = i=1 ζx,id4 x,i ζy,i ζx,i d4 y,i d4 x,i = r4 i=1 ζx,i ˆa x,iξ 4 ζy,i d4 y,i d4 x,i . (114) Now, we claim the following: i=1 ζx,i ˆa x,iξ 2 ζy,i d2 y,i d2 x,i i=1 ζx,i ˆa x,iξ 6 ζy,i d6 y,i d6 x,i , and, (115a) i=1 ζx,i ˆa x,iξ 2 ζy,i d2 y,i d2 x,i ˆa x,iξ 2 d2 y,i d2 x,i + i=1 ζx,i ˆa x,iξ 4 ζy,i d4 y,i d4 x,i Assuming the claims as given, we now complete the proof. Using Lemma 13, we have " ζy,i ζx,i d6 y,i d6 x,i 1.2 and consequently 3C1+2C2+C3 r4 i=1 ζx,i(ˆa x,iξ)4 + 10 n X i=1 ζx,i(ˆa x,iξ)2 i=1 ζx,i(ˆa x,iξ)6 1/2 i=1 ζx,i ˆa x,iξ 2 max i (ˆa x,iξ)2 + n X i=1 ζx,i(ˆa x,iξ)4 1/2 # Fast MCMC Sampling Algorithms on Polytopes with probability at least 1 ϵ/4. Now, we observe that for all i [n] and x int (K), we have ˆa x,iξ N(0, θx,i) and θx,i 4. Invoking the standard tail bound for maximum of Gaussian random variables, we obtain ˆa x,iξ 8 p log(32/ϵ) 1 ϵ/16. Using the fact that 2c1c2 c1 + c2 for all c1, c2 1, we obtain ˆa x,iξ 16 p log(32/ϵ) 1 ϵ/16. Combining this bound with the tail bounds for various Gaussian polynomials (95a), (95d), (95e) from Lemma 14, and substituting in inequality (116), we obtain that i=1 d2 x,is2 x,i 1 2h 2ϕy,ih 1680d + 10 χ2 24d 256 log n log(32/ϵ) + χ4 1680d 1/2 # with probability at least 1 ϵ/2. In the above expression, the terms χi are a function of ϵ as defined in Lemma 14. In particular, χi = χi,ϵ = (2e/i log(16/ϵ))i/2 for i {2, 3, 4, 6}. Observing that 256 log(32/ϵ) χ4 1680 1/2, and that our choice of r satisfies 15120 1/2 , ϵ 204800χ2 24 log(32/ϵ) i=1 d2 x,is2 x,i 1 2h 2ϕy,ih Asserting the additional condition d log n, yields the claim. It is now left to prove the bounds (115a) and (115b). We prove these bounds separately. Bounding C2: Applying Cauchy-Schwarz inequality, we have i=1 ζy,id3 y,ify,i i=1 ζy,if2 y,i i=1 ζy,id6 y,i Using the bound (108), we obtain i=1 ζy,if2 y,i 4κ2 n X i=1 ζy,id2 y,i = 4κ2 n X i=1 ζx,id2 x,i ζy,i ζx,i d2 y,i d2 x,i . 
Chen, Dwivedi, Wainwright and Yu Substituting h = z x where z is given by relation (82), we obtain that dx,i = r d3/4κˆa x,iξ, and thereby i=1 ζy,if2 y,i 4κ2 r2 i=1 ζx,i(ˆa x,iξ)2 ζy,i d2 y,i d2 x,i . Doing similar algebra, we obtain Pn i=1 ζy,id6 y,i = r6 d9/2κ12 Pn i=1 ζx,i ˆa x,iξ 6 ζy,i d6 y,i d6 x,i . Putting the pieces together yields the claim. Bounding C3: Recall that ρy = (Gy αΛy)ℓy (Lemma 9) and µy = (Gy αΛy) 1 Gy (Lemma 11). We have i=1 ζy,id2 y,iℓy,i = 1D2 y Gyℓy = 1D2 y Gy(Gy αΛy) 1 | {z } =:v y (Gy αΛy)ℓy | {z } ρy Using the definition of vy and µy, we obtain vy,i := e i vy = e i (Gy αΛy) 1 Gy D2 y1 = e i µy D2 y1 = µy,i,id2 y,i + X j [n],j =i µy,i,jd2 y,j. Consequently, we have i=1 vy,iρy,i =:C4 z }| { n X i=1 |ρy,i| µy,i,id2 y,i + =:C5 z }| { µy,i,jd2 y,j From Lemma 11, we have that µy,i,i [0, κ]. Hence, we have C4 ρy 1 κ maxi [n] d2 y,i. To bound C5, we note that µy,i,jd2 y,j (i) µ2 y,i,j ζy,j j=1 ζy,jd4 y,j j=1 ζx,jd4 x,j ζy,j ζx,j d4 y,j d4 x,j where step (i) follows from Cauchy-Schwarz inequality and step (ii) from Lemma 11. Putting the pieces together, we obtain that i=1 ζy,id2 y,iℓy,i κ max i [n] d2 y,i + κ3/2 j=1 ζx,jd4 x,j ζy,j ζx,j d4 y,j d4 x,j Using the bound on ρy 1 from Lemma 9, we have i=1 ζy,id2 y,iℓy,i i=1 ζy,id2 y,i κ max i [n] d2 y,i + κ3/2 j=1 ζx,jd4 x,j ζy,j ζx,j d4 y,j d4 x,j Substituting the expression for dx,i = r κ2d3/4 ˆa x,iξ yields the claim. Fast MCMC Sampling Algorithms on Polytopes Appendix H. Proofs of Lemmas from Section E.1 In this section we collect proofs of lemmas from Section E.1. Each lemma is proved in a different subsection. H.1 Proof of Lemma 9 Up to second order terms, we have 1 s2 x+h,i = 1 s2 x,i 1 + 2a i h sx,i + 3(a i h)2 + O h 3 2 , (118a) ζy+h,i = ζy,i + h ζy,i + 1 2h 2ζy,ih + O h 3 2 , (118b) ζα y+h,i = ζα y,i + αζα 1 y,i 2h 2ζy,ih + α (α 1) 2 ζα 2 y,i h ζy,i 2 + O h 3 2 , Further, let Jy := A y Gα y Ay = i=1 ζα y,i aia i s2 y,i . (118d) Using equations (118a) and (118c), and substituting dy,i = a i h/sy,i, fy,i = h ζy,i/ζy,i and ℓy,i = 1 2h 2ζy,ih/ζy,i, we find that 1 + αfy,i + αℓy,i + α (α 1) 1 + 2dy,i + 3d2 y,i ζα y,i aia i s2 y,i + O h 3 2 . Note that dy,i and fy,i are first order terms in h 2 and ℓy,i is a second order term in h 2. Thus we obtain i=1 (2dy,i + αfy,i) ζα y,i aia i s2 y,i | {z } 3d2 y,i + 2αdy,ify,i + αℓy,i + α(α 1) ζα y,i aia i s2 y,i | {z } Let y,h := (1) y,h + (2) y,h. Note that (i) y,h denotes the i-th order term in h 2. Finally, the following expansion also comes in handy for our derivations: a T i J 1 y+hai = a i J 1 y ai a i J 1 y y,h J 1 y ai + a i J 1 y y,h J 1 y y,h J 1 y ai + O h 3 2 . (118e) H.1.1 Proof of part (a): Gradient of weights The expression for the gradient ζy,i is derived in Lemma 14 of the paper (Lee and Sidford, 2014) and is thereby omitted. Chen, Dwivedi, Wainwright and Yu H.1.2 Proof of part (b): Hessian of weights We claim that ρy = I αΛy G 1 y 1 2h 2ζy,1h 1 2h 2ζy,mh = (2Dy + αFy)Υ(2) y (2Dy + αFy)1 + Σy Υ(2) y 2αDy Fy + 3D2 y + ταF 2 y 1 + diag (Υy(2Dy + αFy)Υy(2Dy + αFy)Υy) , (119) where we have used diag(B) to denote the diagonal vector (B1,1, . . . , Bn,n) of the matrix B. Deferring the proof of this expression for the moment, we now derive a bound on the ℓ1 norm of ρy. Expanding the i-th term of ρy,i from equation (119), we obtain ρy,i = (2dy,i + αfy,i) j=1 (2dy,j + αfy,j)Υ2 y,i,j + 2αdy,ify,i + 3d2 y,i + ταf2 y,i σy,i 2αdy,jfy,j + 3d2 y,j + ταf2 y,j Υ2 y,i,j + j,l=1 (2dy,j + αfy,j)(2dy,l + αfy,l)Υy,i,jΥy,j,lΥy,l,i. Recall that α = 1 1/ log2(2n/d). 
Since n d for polytopes, we have α [0, 1] and consequently |τα| = |α(α 1)/2| [0, 1]. Further note that Υx is an orthogonal projection matrix, and hence we have diag(Υxei)Υx diag(Υxei) diag(Υxei) diag(Υxei). Combining these observations with the AM-GM inequality, we have |ρy,i| 7σy,id2 y,i + 3σy,if2 y,i + 13d2 y,j + 6f2 y,j Υ2 y,i,j. Summing both sides over the index i, we find that i=1 |ρy,i| (i) i=1 20σy,id2 y,i + 9σy,if2 y,i i=1 20ζy,id2 y,i + 9ζy,if2 y,i (iii) 56κ2 n X i=1 ζy,id2 y,i, where step (i) follows from Lemma 7 (a), step (ii) from Lemma 3 (a) and step (iii) from the bound (108). We now return to the proof of expression (119). Using equation (87c), we find that 1 2h 2ζy,ih = 1 2h 2σy,ih for all i [n]. (120) Next, we derive the Taylor series expansion of σy,i. Using the definition of Jx (118d) in equation (72), we find that σy,i = ζα y,i a i J 1 y ai s2 y,i . To compute the difference σy+h,i σy,i, we Fast MCMC Sampling Algorithms on Polytopes use the expansions (118a), (118c) and (118e). Letting τα = α(α 1)/2, we have σy+h,i = ζα y+h,i a i J 1 y+hai s2 y+h,i = ζα y,i a i J 1 y+hai s2 y,i 1 + αfy,i + αℓy,i + ταf2 y,i 1 + 2dy,i + 3d2 y,i + O h 3 2 = σy,i + (2dy,i + αfy,i)σy,i j=1 (2dy,j + αfy,j)Υ2 y,i,j + (2dy,i + αfy,i) j=1 (2dy,j + αfy,j)Υ2 y,i,j + 2αdy,ify,iσy,i + αℓy,i + ταf2 y,i + 3d2 y,i σy,i 3d2 y,j + 2αdy,jfy,j + αℓy,j + ταf2 y,j Υ2 y,i,j j,l=1 (2dy,j + αfy,j)(2dy,l + αfy,l)Υy,i,jΥy,j,lΥy,l,i + O h 3 2 . We identify the second order (in O h 2 2 ) terms in the previous expression. Using the equation (120), these are indeed the terms that correspond to the terms 1 2h 2ζy,ih, i [n]. Substituting ℓy,i = 1 2h 2ζy,ih/ζy,i, we have 1 2h 2ζy,ih = (2dy,i + αfy,i) j=1 (2dy,j + αfy,j)Υ2 y,i,j + 2αdy,ify,iσy,i + α ζy,i + ταf2 y,i + 3d2 y,i 3d2 y,j + 2αdy,jfy,j + α ζy,j + ταf2 y,j j,l=1 (2dy,j + αfy,j)(2dy,l + αfy,l)Υy,i,jΥy,j,lΥy,l,i. Collecting the different terms and doing some algebra yields the result (119). H.1.3 Proof of part (c): Gradient of logdet For a unit vector h Rd, we have h log det Jy = lim δ 0 1 δ (log det Jy+δh log det Jy) = lim δ 0 1 δ (log det J 1/2 y Jy+δh J 1/2 y log det Id) Let ˆay,i := J 1/2 y,i ai/sy,i for each i [n]. Using the property log det B = trace log B, where log B denotes the logarithm of the matrix and that log det Id = 0, we obtain h log det Jy = lim δ 0 1 δ ζy+δh (1 δa i h/sy,i)ˆay,iˆa y,i Chen, Dwivedi, Wainwright and Yu where we have substituted sy+δh,i = sy,i δa i h. Keeping track of first order terms in δ, and noting that Pn i=1 ζy,iˆay,iˆa y,i = Id, we find that ζy+δh,i (1 δa i h/sy,i)ˆay,iˆa y,i = trace log ζy,i + δh ζy,i 1 + 2δa i h sy,i ˆay,iˆa y,i i=1 δ 2a i h sy,i + h ζy,i ˆay,iˆa y,i i=1 δ 2a i h sy,i + h ζy,i θy,i + O δ2 where in the last step we have used the fact that trace(ˆay,iˆa y,i) = ˆa y,iˆay,i = θy,i for each i [n]. Substituting the expression for ζy from part (a), and rearranging the terms yields the claimed expression in the limit δ 0. H.1.4 Proof of part (d): Gradient of ϕ Using the chain rule and the fact that sy,i = ai, yields the result. H.1.5 Proof of part (e) We claim that 1 2h 2Ψyh = 1 i=1 ζy,iθy,i(3d2 y,i + 2dy,ify,i + ℓy,i) 1 i,j=1 ζy,iζy,jθ2 y,i,j (2dy,i + fy,i) (2dy,j + fy,j) The desired bound on h 2Ψyh /2 now follows from an application of AM-GM inequality with Lemma 7(d). We now derive the claimed expression for the directional Hessian of the function Ψ. 
We have 1 2h 2 log det Jy h = lim δ 0 1 2δ2 (log det J 1/2 y Jy+δh J 1/2 y + log det J 1/2 y Jy δh J 1/2 y 2 log det Id) 2 lim δ 0 1 δ2 ζy+δh (1 δa i h/sy,i)ˆay,iˆa y,i + trace log ζy δh (1 + δa i h/sy,i)ˆay,iˆa y,i Expanding the first term in the above expression, we find that ζy+δh,i (1 δa i h/sy,i)ˆay,iˆa y,i = trace log ζy,i + δh ζy,i + δ2 1 + 2δa i h sy,i + 3δ2 (a i h)2 ˆay,iˆa y,i | {z } =:Id+B Fast MCMC Sampling Algorithms on Polytopes Substituting the shorthand notation from equations (88a), (88b) and (88c), we have i=1 ζy,i δ(2dy,i + fy,i) + δ2(3d2 y,i + 2dy,ify,i + ℓy,i) ˆay,iˆa y,i + O δ3 . Now we make use of the following facts (1) trace log(Id +B) = trace h B B2 2 + O B 3 i , (2) for each i, j [n], we have trace(ˆay,iˆa j ) = ˆa y,iˆaj = θy,i,j, and (3) for each i [n], we have θy,i,i = θy,i. Thus we obtain ζy+δh,i (1 δa i h/sy,i)ˆay,iˆa y,i i=1 ζy,iθy,i δ(2dy,i + fy,i) + δ2(3d2 y,i + 2dy,ify,i + ℓy,i) i,j=1 ζy,iζy,jθ2 y,i,jδ2(2dy,i + fy,i)(2dy,j + fy,j) + O δ3 . Similarly, we can obtain an expression for trace log Pn i=1 ζy δh (1+δa i h/sy,i)ˆay,iˆa y,i . Putting the pieces together, we obtain 1 2h 2 log det Jy h = i=1 ζy,iθy,i(3d2 y,i + 2dy,ify,i + ℓy,i) 1 i,j=1 ζy,iζy,jθ2 y,i,j(2dy,i + fy,i)(2dy,j + fy,j). H.1.6 Proof of part (f) We claim that 1 2h 2ϕy,ih = ϕy,i 2dy,ify,i + 3d2 y,i + ℓy,i . (122) The claim follows from a straightforward application of chain rule and substitution of the expressions for ζy,i and 2ζy,i in terms of the shorthand notation dy,i, fy,i and ℓy,i. Multiplying both sides of equation (122) with d2 y,is2 y,i and summing over index i, we find that i=1 d2 y,is2 y,i 1 2h ϕ2 y,ih = i=1 d2 y,is2 y,iϕy,i ℓy,i + 2dy,ify,i + 3d2 y,i = i=1 d2 y,iζy,i ℓy,i + 2dy,ify,i + 3d2 y,i i=1 d2 y,iζy,i ℓy,i + f2 y,i + 4d2 y,i , where in the last step we have used the AM-GM inequality. The claim follows. H.2 Proof of Lemma 10 We claim that 0 G 1/2 y c1In + c2Λy (Gy αΛy) 1 G1/2 y (c1 + c2) κIn. (123) Chen, Dwivedi, Wainwright and Yu The proof of the lemma is immediate from this claim, as for any PSD matrix H c In, we have H2 c2In. We now prove claim (123). Note that G 1/2 y Λy (Gy αΛy) 1 G1/2 y = G 1/2 y Λy G 1/2 y | {z } :=By (In αJG 1/2 y Λy G 1/2 y ) 1. (124) Note that the RHS is equal to the matrix By(In αJBy) 1 which is symmetric. Observe the following ordering of the matrices in the PSD cone Σy + βJIn = Gy Σy Λy = Σy Υ(2) y 0. For the last step we have used the fact that Σy Υ(2) y is a diagonally dominant matrix with non negative entries on the diagonal to conclude that it is a PSD matrix. Consequently, we have By = G 1/2 y Λy G 1/2 y In. (125) Further, recall that αJ = (1 1/κ) κ = (1 αJ) 1. As s result, we obtain 0 (In αJG 1/2 y Λy G 1/2 y ) 1 κIn. Multiplying both sides by B1/2 y and using the relation (125), we obtain 0 B1/2 y (In αJG 1/2 y Λy G 1/2 y ) 1B1/2 y κIn. (126) Using the fact that By commutes with (In By) 1, we obtain By(In αJBy) 1 κIn. Using observation (124) now completes the proof. H.3 Proof of Lemma 11 Without loss of generality, we can first prove the result for i = 1. Let ν := µ y e1 denote the first row of the matrix µy. Observe that e1 = (Gy αΛy) G 1 y ν = ν αΣy G 1 y ν + αΥ(2) y G 1 y ν (127) We now prove bounds (92a) and (92b) separately. Proof of bound (92a): Multiplying the equation (127) on the left by ν G 1 y , we obtain g 1 1 ν1 = ν G 1 y ν αν G 1 y Σy G 1 y ν + αν G 1 y Υ(2) y G 1 y ν ν G 1 y ν αν G 1 y Σy G 1 y ν (128) g 1 1 ασy,1/g2 1 ν2 1. 
Rearranging terms, we obtain 0 ν1 ζy,1 ζy,1 ασy,1 (i) κ, (129) where inequality (i) follows from the facts that ζy,j σy,j and (1 α) = κ. Fast MCMC Sampling Algorithms on Polytopes Proof of bound (92b): In our proof, we use the following improved lower bound for the term µy,1,1 = ν1. ν1 ζy,1 ζy,1 ασy,1 + ασ2 y,1 , (130) Deferring the proof of this claim at the moment, we now complete the proof. We begin by deriving a weighted ℓ2-norm bound for the vector ν = (ν2, . . . , νn) . Equation (128) implies 1 ν1 + ασy,1 j=2 ν2 j ζ 1 y,j αζ 2 y,j σy,j (i) (1 α) ν2 j ζy,j , where step (i) follows from the fact that ζy,i σy,i. Now, we upper bound the expression on the left hand side of the above inequality using the upper (129) and lower (130) bounds on ν1: 1 ν1 + ασy,1 ζ 1 y,1 ζy,1 ζy,1 ασy,1 ζy,1 ζy,1 ασy,1 + ασ2 y,1 (ζy,1 ασy,1) ζy,1 ασy,1 + ασ2 y,1 where in the last step we have used the facts that ζy,1 σy,1 and (1 α) 1 = κ. Putting the pieces together, we obtain Pn j=2 ν2 j ζ 1 y,j κ3, which is equivalent to our claim (92b) for i = 1. It remains to prove our earlier claim (130). Writing equation (127) separately for the first coordinate and for the rest of the coordinates, we obtain 1 = 1 ασy,1ζ 1 y,1 + ασ2 y,1,1ζ 1 y,j ν1 + α j=2 σ2 y,1,jζ 1 y,j νj, and (131a) 0 = In 1 αΣ y G 1 y + αΥ (2) y G 1 y + αζ 1 y,1ν1 σ2 y,1,2 ... σ2 y,1,n where G y (respectively Σ y, Υ (2) y ) denotes the principal minor of Gy (respectively Σy, Υ(2) y ) obtained by excluding the first column and the first row. Multiplying both sides of the equation (131b) from the left by ν2, , νn G 1 y , we obtain ν2 j | {z } cy,j +α ν2, , νn G 1 y Υ (2) y G 1 y | {z } Cy.2 σ2 y,j ζy,j νj. Chen, Dwivedi, Wainwright and Yu Observing that α [0, 1] and ζy,j σy,j for all y int (K) and j [n], we obtain cy,j 0. Further, note that G 1 y Υ (2) y G 1 y is a PSD matrix and hence we have that Cy,2 0. Putting the pieces together, we have σ2 y,j ζy,j νj 0. Combining this inequality with equation (131a) yields the claim. H.4 Proof of Corollary 12 Without loss of generality, we can prove the result for i = 1. Applying Cauchy-Schwarz inequality, we have j=2 |νj| ν1 + j=2 ζy,j κ + κ3/2 where to assert the last inequality we have used Lemma 11 and Lemma 3(c). The claim (93) follows. Further, noting that the infinity norm of a matrix is the ℓ1-norm of its transpose, we obtain ||| (Gy αΛy) 1 Gy||| 3 dκ3/2 as claimed. Appendix I. Proof of Lemmas from Section E.2 In this section, we collect proofs of auxiliary lemmas from Section E.2. I.1 Proof of Lemma 13 Using Lemma 8, and the relation (82) we have 1 sz,i κ4d3/2 ξ ξ, (133) where ξ N(0, Id). Define s := max i [n], v xz Using the standard Gaussian tail bound, we observe that Pξ N(0,In) ξ ξ d(1 + δ) 1 ϵ/4 for δ = q 2 d. Plugging this bound in the inequality (133) and noting that for all v xz we have v x Jx z x Jx, we obtain that s 2r2(1 + p 2/d log(4/ϵ) 2 log(4/ϵ)), (135) Fast MCMC Sampling Algorithms on Polytopes and noting that κ4 d 1 implies the claim (94a). Hence, we obtain that s < .005/κ2 and consequently maxi [n],v xz sx,i/sv,i (0.99, 1.01) with probability at least 1 ϵ/4. We now claim that max i [n],v xz ζx,i ζv,i 1 24κ2 s, 1 + 24κ2 s , if s 1 32κ2 . The result follows immediately from this claim. To prove the claim, note that equation (98) implies that if s 1 32κ2 , then ζv,i ζx,i (e 8κ2 s, e8κ2 s) for all i [n] and v xz, which implies that max i [n],v xz ζx,i ζv,i (e 8κ2 s, e8κ2 s). Asserting the facts that ex 1 + 3x and e x 1 3x, for all x [0, 1] yields the claim. 
I.2 Proof of Lemma 14 The proof once again makes use of the classical tail bounds for polynomials in Gaussian random variables. We restate the classical result stated in equation (136) for convenience. For any d 1, any polynomial P : Rd R of degree k, and any t (2e)k/2, we have P |P(ξ)| t EP(ξ)2 1 2et2/k , (136) where ξ N(0, In) denotes a standard Gaussian vector in n dimensions. Recall the notation from equation (90) and observe that ˆax,i 2 2 = θx,i, and ˆa x,iˆax,j = θx,i,j. (137) We also have i=1 ζx,iˆax,iˆa x,i = J 1/2 x i=1 ζx,i aia i s2 x,i J 1/2 x = Id. (138) Further, using Lemma 10 we obtain i=1 ζx,iˆbx,iˆb x,i = J 1/2 x AxΛx (Gx αΛx) 1 Gx (Gx αΛx) 1 Λx A x J 1/2 x = 4κ2Id. (139) Throughout this section, we consider a fixed point x int (K). For brevity in our notation, we drop the dependence on x for terms like ζx,i, θx,i, ˆax,i (etc.) and denote them simply by ζi, θi, ˆai respectively. Chen, Dwivedi, Wainwright and Yu We introduce some matrices and vectors that would come in handy for our proofs. ζ1ˆa 1 ... ζnˆa n ζ1ˆb 1 ... ζnˆb n ζ1 ˆa1 2 2 ... ζn ˆan 2 2 , and vab = ζ1ˆa 1 ˆb1 ... ζnˆa nˆbn We claim that BB In, and Bb B b 4κ2In. (141a) To see these claims, note that equation (138) implies that B B = Id and consequently, BB is an orthogonal projection matrix and BB In. Next, note that from equation (139) we have that B b Bb κ2Id, which implies that Bb B b κ2In. In asserting both these arguments, we have used the fact that for any matrix B, the matrices BB and B B are PSD and have same set of eigenvalues. Next, we bound the ℓ2 norm of the vectors v and vab: Lem. 7 (e) 4d, and (141b) i=1 ζi ˆa i ˆbi 2 i=1 ζi ˆai 2 2 ˆbi 2 i=1 ζi ˆbi 2 2 = 4 trace(B b Bb) eqn. (141a) 16κ2d. We now prove the five claims of the lemma separately. I.2.1 Proof of bound (95a) Using Isserlis theorem (Isserlis, 1918) for fourth order Gaussian moments, we have i=1 ζi ˆa i ξ 2 !2 = ˆai 2 2 ˆaj 2 2 + 2 ˆa i ˆaj 2 = i,j=1 ζiζj θiθj + 2θ2 i,j 24d2, where the last follows from Lemma 7. Applying the bound (136) with k = 2 and t = e log(16 ϵ ). Note that the bound is valid since t (2e) for all ϵ (0, 1/30]. I.2.2 Proof of bound (95b) Applying Isserlis theorem for Gaussian moments, we obtain i=1 ζi ˆa i ξ 3 !2 = 9 i,j=1 ζiζj ˆai 2 2 ˆaj 2 2 ˆa i ˆaj | {z } =:N1 i,j=1 ζiζj ˆa i ˆaj 3 | {z } =:N2 We claim that N1 4d and N2 4d. Assuming these claims as given at the moment, we now complete the proof. We have E Pn i=1 ζi ˆa i ξ 3 2 60d. Applying the bound (136) Fast MCMC Sampling Algorithms on Polytopes with k = 3 and t = 2e ϵ 3/2, and verifying that t (2e)3/2 for ϵ (0, 1/30] yields the claim. We now turn to prove the bounds on N1 and N2. We have i,j=1 ζi ˆai 2 2 ˆa i ζj ˆaj 2 2 ˆaj = i=1 ζi ˆai 2 2 ˆai eqn. (141a) v 2 2 eqn. (141b) 4d. Next, applying Cauchy-Schwarz inequality and using equation (137), we obtain i,j=1 ζiζj ˆa i ˆaj 3 i,j=1 ζiζjθ2 i,j p θiθj (Lem. 3 (d)) 4 i,j=1 ζiζjθ2 i,j (Lem. 7 (d)) 4 i=1 ζiθi = 4d. I.2.3 Proof of bound (95c) Using Isserlis theorem for Gaussian moments, we have i=1 ζi ˆa i ξ 2 ˆb x,iξ !2 = i,j=1 ζiζj ˆai 2 2 ˆaj 2 2 ˆb i ˆbj | {z } :=N3 i,j=1 ζiζj ˆa i ˆaj ˆa i ˆbi ˆa j ˆbj | {z } :=N4 i,j=1 ζiζj ˆai 2 2 ˆb i ˆaj ˆa j ˆbj | {z } :=N5 i,j=1 ζiζj ˆa i ˆaj 2 ˆb i ˆbj | {z } :=N6 i,j=1 ζiζj ˆa i ˆaj ˆa i ˆbj ˆb i ˆaj | {z } :=N7 We claim that all terms Nk 16κ2d, k {3, 4, 5, 6, 7}. Putting the pieces together, we have i=1 ζi ˆa i ξ 2 ˆb x,iξ !2 240κ2d. Applying the bound (136) with k = 3 and t = 2e ϵ 3/2 yields the claim. 
Note that for the given definition of t, we have t (2e)3/2 for ϵ (0, 1/30] so that the bound (136) is valid. It is now left to prove the bounds on Nk for k {3, 4, 5, 6, 7}. We have i,j=1 ζi ˆai 2 2 ˆb i ζj ˆaj 2 2 ˆbj = i=1 ζi ˆai 2 2 ˆbi 2 = B b v 2 eqn. (141a) 4κ2 v 2 2 = eqn. (141b) 16κ2d, i,j=1 ζiζj ˆa i ˆaj ˆa i ˆbi ˆa j ˆbj = B vab 2 eqn. (141a) vab 2 eqn. (141c) 16κ2d, and i,j=1 ζiζj ˆai 2 2 ˆb i ˆaj ˆa j ˆbj = B vab B b v C S B vab 2 B b v 2 16κ2d. Chen, Dwivedi, Wainwright and Yu For the term N6, we have i,j=1 ζiζj ˆa i ˆaj 2 ˆb i ˆbj (C S) 1 2 i,j=1 ζiζj ˆa i ˆaj 2 ˆbi 2 (symm.in i,j) = i,j=1 ζiζj ˆa i ˆaj 2 ˆbi 2 (eqn. (138)) i=1 ζi ˆai 2 2 ˆbi 2 (Lem. 3(d)) 4 i=1 ζi ˆbi 2 (eqn. (141c)) 16κ2d. The bound on the term N7 can be obtained in a similar fashion. I.2.4 Proof of bound (95d) Observe that ˆa i ξ N (0, θi) and hence E ˆa i ξ 8 = 105 θ4 i . Thus, we have i=1 ζi ˆa i ξ 4 !2 C S E ˆa i ξ 8 1 2 E ˆa j ξ 8 1 i,j=1 ζiζjθ2 i θ2 j = 105 Now applying Lemma 7, we obtain that E Pn i=1 ζi ˆa i ξ 4 2 1680d2. Consequently, applying the bound (136) with k = 4 and t = e ϵ 2 and noting that t (2e)2 for ϵ (0, 1/30], yields the claim. I.2.5 Proof of bound (95e) Using the fact that E ˆa i ξ 12 = 945 θ6 i and an argument similar to the previous part yields that E Pn i=1 ζi ˆa i ξ 6 2 15120d2. Finally, applying the bound (136) with k = 6 and t = e ϵ 3, and verifying that t (2e)3 for ϵ (0, 1/30], yields the claim. Appendix J. Proof of Lov asz s Lemma We begin by formally defining the conductance (Φ) of a Markov chain on (K, B(K)) with arbitrary transition operator T and stationary distribution π . We assume that the operator T is lazy and thereby the stationary distribution π is unique. Let Tx = T (δx) denote the transition distribution at point x, then the conductance Φ is defined as Φ := inf S B(K) π (S) (0,1/2) Φ(S) π (S) where Φ(S) := Z S Tu(K Sc)dπ (u) for any S K. Fast MCMC Sampling Algorithms on Polytopes The conductance denotes the measure of the flow from a set to its complement relative to its own measure, when initialized in the stationary distribution. If the conductance is high, the following result shows that the Markov chain mixes fast. Lemma 15 (Lov asz and Simonovits, 1993, Theorem 1.4) For any M-warm start µ0, the mixing time of the Markov chain with conductance Φ is bounded as T k(µ0) π TV Note that this result holds for a general distribution π although we apply for uniform π . The result can be derived from Cheeger s inequality for continuous-space discrete-time Markov chain and elementary results in Calculus. See, e.g., Theorem 1.4 and Corollary 1.5 by Lov asz and Simonovits (1993) for a proof. For ease in notation define K\S := K Sc. We now state a key isoperimetric inequality. Lemma 16 (Lov asz, 1999, Theorem 6) For any measurable sets S1, S2 K, we have vol(K\S1\S2) vol(K) d K(S1, S2) vol(S1) vol(S2), where d K(S1, S2) := infx S1,y S2 d K(x, y). Since π is the uniform measure on K, this lemma implies that π (K\S1\S2) d K(S1, S2) π (S1) π (S2). (142) In fact, such an inequality holds for an arbitrary log-concave distribution (Lov asz and Vempala, 2003). In words, the inequality says that for a bounded convex set any two subsets which are far apart, can not have a large volume. Taking these lemmas as given, we now complete the proof. Proof of (Lov asz s) Lemma 6: We first bound the conductance of the Markov chain using the assumptions of the lemma. From Lemma 15, we see that the Markov chain mixes fast if all the sets S have a high conductance Φ(S). 
We claim that from which the proof follows by applying Lemma 15. We now prove the claim (143) along the lines of Theorem 11 in the paper by Lov asz (1999). In particular, we show that under the assumptions in the lemma, the sets with bad conductance are far apart and thereby have a small measure under π , whence the ratio Φ(S)/π (S) is not arbitrarily small. Consider a partition S1, S2 of the set K such that S1 and S2 are measurable. To prove claim (143), it suffices to show that S1 Tu(S2)du ρ 64 min {π (S1), π (S2)} , (144) Define the sets S 1 := u S1 , S 2 := v S2 , and S 3 := K\S 1\S 2. Chen, Dwivedi, Wainwright and Yu Case 1: If we have vol(S 1) vol(S1)/2 and consequently vol(K\S 1) vol(S1)/2, then S1 Tu(S2)du (i) 1 S1\S 1 Tu(S2)du (ii) ρ 4 vol(S1) (iii) ρ 4 min {vol(S1), vol(S2)} , which implies the inequality (144) since π is the uniform measure on K. In the above sequence of inequalities, step (i) follows from the definition of the kernel T , step (ii) follows from the definition of the set S 1 (145) and step (iii) from the fact that < 1. Dividing both sides by vol(K) yields the inequality (144) and we are done. Case 2: It remains to establish the inequality (144) for the case when vol(S i) vol(Si)/2 for each i {1, 2}. Now for any u S 1 and v S 2 we have Tu Tv TV Tu(S1) Tv(S1) = 1 Tu(S2) Tv(S1) > 1 ρ, and hence by assumption we have d K(S 1, S 2) . Applying Lemma 16 and the definition of S 3 (145) we find that vol(S 3) vol(K) vol(S 1) vol(S 2) 4 vol(S1) vol(S2). (146) Using this inequality and the fact that for any x [0, 1] we have x(1 x) min {x, (1 x)} /2 we obtain that 4 π (S1) π (S2) 8 min {π (S1), π (S2)} . (147) We claim that Z S1 Tu(S2)du = Z S2 Tv(S1)dv. (148) Assuming the claim as given, we now complete the proof. Using the equation (148), we have S1 Tu(S2)du = 1 2 vol(K) S1 Tu(S2)du + Z S2 Tv(S1)dv (i) 1 2 vol(K) S1\S 1 Tu(S2)du + 1 S2\S 2 Tv(S2)dv 8 vol(S 3) vol(K) 64 min {π (S1), π (S2)} , where step (i) follows from the definition of the kernel T , step (ii) follows from the definition of the set S 3 (145) and step (iii) follows from the inequality (147). Putting together the pieces yields the claim (143). Fast MCMC Sampling Algorithms on Polytopes It remains to prove the claim (148). We make use of the following result Φ(S) = Φ(K\S) for any measurable S K. (149) Using equation (149) and noting that S1 = K\S2, we have S1 Tu(S2)du = Z S1 Tu(S2)π (u)du = Φ(S1) = Φ(K\S1) = 1 vol(K) S2 Tv(S1)dv, which yields equation (148). Proof of result (149): Note that R K Tu(S)dπ (u) = π (S). Thus, we have K\S Tu(S)dπ (u) = Z K Tu(S)dπ (u) Z S Tu(S)dπ (u) = π (S) Z S Tu(S)dπ (u). Using the fact that 1 Tu(S) = Tu(K\S), we obtain S Tu(S)dπ (u) = Z S Tu(S)dπ (u) = Z S Tu(K\S)dπ (u) = Φ(S), thereby yielding the claim (149). Kurt M Anstreicher. The volumetric barrier for semidefinite programming. Mathematics of Operations Research, 25(3):365 380, 2000. Claude J. P. B elisle, H. Edwin Romeijn, and Robert L. Smith. Hit-and-run algorithms for generating multivariate distributions. Mathematics of Operations Research, 18(2): 255 266, 1993. Dimitris Bertsimas and Santosh Vempala. Solving convex programs by random walks. Journal of the ACM (JACM), 51(4):540 556, 2004. Rajendra Bhatia. Matrix Analysis, volume 169. Springer Science & Business Media, 2013. Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. Pierre Br emaud. Markov chains, Gibbs fields, Monte Carlo simulation, and queues. Springer, 1991. 
Steve Brooks, Andrew Gelman, Galin L Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, 2011. Peter J Bushell. Hilbert s metric and positive contraction mappings in a Banach space. Archive for Rational Mechanics and Analysis, 52(4):330 338, 1973. Ben Cousins and Santosh Vempala. A cubic algorithm for computing Gaussian volume. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1215 1228. Society for Industrial and Applied Mathematics, 2014. Chen, Dwivedi, Wainwright and Yu I. Dikin. Iterative solution to problems of linear and quadratic programming. Doklady Akademii Nauk SSSR, 174(4):747, 1967. Jon Feldman, Martin J Wainwright, and David R Karger. Using linear programming to decode binary linear codes. IEEE Transactions on Information Theory, 51(3):954 972, 2005. Stuart Geman and David Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, 6:721 741, 1984. Adan Gustafson and Hariharan Narayanan. John s walk. ar Xiv preprint ar Xiv:1803.02032, 2018. W. Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97 109, 1970. Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012. Kuo-Ling Huang and Sanjay Mehrotra. An empirical evaluation of walk-and-round heuristics for mixed integer linear programs. Computational Optimization and Applications, 55 (3):545 570, 2013. Kuo-Ling Huang and Sanjay Mehrotra. An empirical evaluation of a walk-relax-round heuristic for mixed integer convex programs. Computational Optimization and Applications, 60(3):559 585, 2015. Leon Isserlis. On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika, 12(1/2):134 139, 1918. Svante Janson. Gaussian Hilbert Spaces, volume 129. Cambridge University Press, 1997. Fritz John. Extremum problems with inequalities as subsidiary conditions. In O.E. Neugebauer In K. O. Friedrichs and J. J. Stoker, editors, Studies and Essays: Courant Anniversary Volume, pages 187 204. Wiley-Interscience, New York, 1948. Ravi Kannan, L aszl o Lov asz, and Mikl os Simonovits. Random walks and an o*(n5) volume algorithm for convex bodies. Random Structures & Algorithms, 11(1):1 50, 1997. Ravi Kannan, L aszl o Lov asz, and Ravi Montenegro. Blocking conductance and mixing in random walks. Combinatorics, Probability and Computing, 15(4):541 570, 2006. Ravindran Kannan and Hariharan Narayanan. Random walks on polytopes and an affine interior point method for linear programming. Mathematics of Operations Research, 37 (1):1 20, 2012. Sebastian C. Kapfer and Werner Krauth. Sampling from a polytope and hard-disk Monte Carlo, 2013. Jim Lawrence. Polytope volume computation. Mathematics of Computation, 57(195):259 271, 1991. Fast MCMC Sampling Algorithms on Polytopes Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in O( rank) iterations and faster algorithms for maximum flow. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 424 433. IEEE, 2014. Yin Tat Lee and Santosh S. Vempala. Geodesic walks in polytopes. ar Xiv preprint ar Xiv:1606.04696, 2016. Yin Tat Lee and Santosh S Vempala. Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1115 1121. ACM, 2018a. 
Yin Tat Lee and Santosh S Vempala. Stochastic localization+ Stieltjes barrier= tight bound for log-sobolev. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1122 1129. ACM, 2018b. L aszl o Lov asz. Hit-and-run mixes fast. Mathematical Programming, 86(3):443 461, 1999. L aszl o Lov asz and Mikl os Simonovits. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. In Proceedings of 31st Annual Symposium on Foundations of Computer Science, 1990, pages 346 354. IEEE, 1990. L aszl o Lov asz and Mikl os Simonovits. Random walks in a convex body and an improved volume algorithm. Random Structures & Algorithms, 4(4):359 412, 1993. L aszl o Lov asz and Santosh Vempala. Hit-and-run is fast and fun. Tehnical Report, Microsoft Research, 2003. L aszl o Lov asz and Santosh Vempala. Hit-and-run from a corner. SIAM Journal on Computing, 35(4):985 1005, 2006a. L aszl o Lov asz and Santosh Vempala. Simulated annealing in convex bodies and an O (n4) volume algorithm. Journal of Computer and System Sciences, 72(2):392 417, 2006b. Michael W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123 224, 2011. Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087 1092, 1953. Hariharan Narayanan. Randomized interior point methods for sampling and optimization. The Annals of Applied Probability, 26(1):597 641, 2016. Hariharan Narayanan and Alexander Rakhlin. Efficient sampling from time-varying logconcave distributions. ar Xiv preprint ar Xiv:1309.5977, 2013. Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming. SIAM, 1994. Brian D. Ripley. Stochastic simulation, volume 316. John Wiley & Sons, 2009. Chen, Dwivedi, Wainwright and Yu Christian P. Robert. Monte Carlo methods. Wiley Online Library, 2004. Sushant Sachdeva and Nisheeth K. Vishnoi. The mixing time of the Dikin walk in a polytope a simple proof. Operations Research Letters, 44(5):630 634, 2016. Robert L Smith. Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Operations Research, 32(6):1296 1308, 1984. Pravin M. Vaidya. A new algorithm for minimizing convex functions over convex sets. In 30th Annual Symposium on Foundations of Computer Science, 1989, pages 338 343. IEEE, 1989. Pravin M. Vaidya and David S. Atkinson. A technique for bounding the number of iterations in path following algorithms. In Complexity in Numerical Optimization, pages 462 489. World Scientific, 1993. Santosh Vempala. Geometric random walks: a survey. Combinatorial and Computational Geometry, 52(573-612):2, 2005. Bin Yu and Per Mykland. Looking at Markov samplers through cusum path plots: a simple diagnostic idea. Statistics and Computing, 8(3):275 286, 1998.