# On Markov Chain Gradient Descent

Tao Sun, College of Computer, National University of Defense Technology, Changsha, Hunan 410073, China. nudtsuntao@163.com
Yuejiao Sun, Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA. sunyj@math.ucla.edu
Wotao Yin, Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA. wotaoyin@math.ucla.edu

Stochastic gradient methods are the workhorse algorithms of large-scale optimization problems in machine learning, signal processing, and other computational sciences and engineering. This paper studies Markov chain gradient descent, a variant of stochastic gradient descent where the random samples are taken on the trajectory of a Markov chain. Existing results of this method assume convex objectives and a reversible Markov chain and thus have their limitations. We establish new non-ergodic convergence under wider step sizes, for nonconvex problems, and for non-reversible finite-state Markov chains. Nonconvexity makes our method applicable to broader problem classes. Non-reversible finite-state Markov chains, on the other hand, can mix substantially faster. To obtain these results, we introduce a new technique that varies the mixing levels of the Markov chains. The reported numerical results validate our contributions.

## 1 Introduction

In this paper, we consider a stochastic minimization problem. Let $\Xi$ be a statistical sample space with probability distribution $\Pi$ (we omit the underlying $\sigma$-algebra). Let $X \subseteq \mathbb{R}^n$ be a closed convex set, which represents the parameter space. $F(\cdot;\xi): X \to \mathbb{R}$ is a closed convex function associated with $\xi \in \Xi$. We aim to solve the following problem:
$$\operatorname*{minimize}_{x \in X \subseteq \mathbb{R}^n} \; \mathbb{E}_\xi\big(F(x;\xi)\big) = \int_\Xi F(x;\xi)\, d\Pi(\xi). \tag{1}$$
A common method to minimize (1) is Stochastic Gradient Descent (SGD) [11]:
$$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \nabla F(x^k;\xi^k)\big), \quad \text{samples } \xi^k \overset{\text{i.i.d.}}{\sim} \Pi. \tag{2}$$
However, for some problems and distributions, direct sampling from $\Pi$ is expensive or impossible, and it is possible that the sample space $\Xi$ is not explicitly known. In these cases, it can be much cheaper to sample by following a Markov chain that has the desired equilibrium distribution $\Pi$.

(The work is supported in part by the National Key R&D Program of China 2017YFB0202902 and the China Scholarship Council. The work of Y. Sun and W. Yin was supported in part by AFOSR MURI FA9550-18-1-0502, NSF DMS-1720237, NSFC 11728105, and ONR N000141712162.)

To be concrete, imagine solving problem (1) with a discrete space $\Xi := \{x \in \{0,1\}^n \mid \langle a, x\rangle \le b\}$, where $a \in \mathbb{R}^n$ and $b \in \mathbb{R}$, and the uniform distribution $\Pi$ over $\Xi$. A straightforward way to obtain a uniform sample is to repeatedly draw $x \in \{0,1\}^n$ uniformly at random until the constraint $\langle a, x\rangle \le b$ is satisfied. When the feasible set is small, it may take up to $O(2^n)$ trials to obtain a single feasible sample. Instead, one can sample a trajectory of the Markov chain described in [4]; to obtain a sample $\varepsilon$-close to the distribution $\Pi$, one only needs $\log\big(\tfrac{|\Xi|}{\varepsilon}\big)\exp\big(O(\sqrt{n}(\log n)^{5/2})\big)$ samples [2], where $|\Xi|$ is the cardinality of $\Xi$. This presents a significant saving in sampling cost.
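To make the sampling-cost contrast concrete, the following Python sketch (not from the paper) contrasts rejection sampling with a simple single-site Markov chain on $\Xi$; the chain below is only in the spirit of the chain from [4], and the function names and the assumed connectivity of the feasible set are our own illustrative choices.

```python
# Illustrative contrast between rejection sampling and Markov-chain sampling on
# Xi = {x in {0,1}^n : <a, x> <= b}. The single-site chain (pick a coordinate,
# flip it if the result stays feasible, otherwise stay put) is a simplified
# sketch in the spirit of the chain in [4], not its exact construction.
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(a, b):
    """Draw x uniformly from {0,1}^n until feasible; may need ~O(2^n) trials."""
    n = len(a)
    while True:
        x = rng.integers(0, 2, size=n)
        if a @ x <= b:
            return x

def chain_step(x, a, b):
    """One step of a chain whose stationary distribution is uniform on Xi
    (assuming the feasible set is connected under single-coordinate flips)."""
    y = x.copy()
    i = rng.integers(len(x))
    y[i] ^= 1                       # propose flipping one coordinate
    return y if a @ y <= b else x   # accept only if the proposal stays feasible
```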
Markov chains also arise naturally in some applications. Common examples are systems that evolve according to Markov chains, for example, linear dynamic systems with random transitions or errors. Another example is a distributed system in which every node locally stores a subset of training samples; to train a model using these samples, we can let a token that holds all the model parameters traverse the nodes following a random walk, so the samples are accessed according to a Markov chain.

Suppose that the Markov chain has a stationary distribution $\Pi$ and a finite mixing time $T$, which is how long a random trajectory needs to be until its current state has a distribution that roughly matches $\Pi$. A larger $T$ means a closer match. Then, in order to run one iteration of (2), we can generate a trajectory of samples $\xi^1, \xi^2, \xi^3, \ldots, \xi^T$ and only take the last sample $\xi := \xi^T$. To run another iteration of (2), we repeat this process, i.e., sample a new trajectory $\xi^1, \xi^2, \xi^3, \ldots, \xi^T$ and take $\xi := \xi^T$.

Clearly, sampling a long trajectory just to use the last sample wastes a lot of samples, especially when $T$ is large. But, this may seem necessary because $\xi^t$, for all small $t$, have large biases. After all, it can take a long time for a random trajectory to explore all of the space, and it will often double back and visit states that it previously visited. Furthermore, it is also difficult to choose an appropriate $T$. A small $T$ will cause a large bias in $\xi^T$, which slows the SGD convergence and reduces its final accuracy. A large $T$, on the other hand, is wasteful, especially when $x^k$ is still far from convergence and some bias does not prevent (2) from making good progress. Therefore, $T$ should increase adaptively as $k$ increases; this makes the choice of $T$ even more difficult.

So, why waste samples, why worry about $T$, and why not just apply every sample immediately in stochastic gradient descent? This approach has appeared in [5, 6], which we call the Markov Chain Gradient Descent (MCGD) algorithm for problem (1):
$$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \hat\nabla F(x^k;\xi^k)\big), \tag{3}$$
where $\xi^0, \xi^1, \ldots$ are samples on a Markov chain trajectory and $\hat\nabla F(x^k;\xi^k) \in \partial F(x^k;\xi^k)$ is a subgradient.

Let us examine some special cases. Suppose the distribution $\Pi$ is supported on a set of $M$ points, $y^1, \ldots, y^M$. Then, by letting $f_i(x) := M \cdot \mathrm{Prob}(\xi = y^i)\, F(x; y^i)$, problem (1) reduces to the finite-sum problem:
$$\operatorname*{minimize}_{x \in X \subseteq \mathbb{R}^d} \; f(x) \equiv \frac{1}{M}\sum_{i=1}^M f_i(x). \tag{4}$$
By the definition of $f_i$, each state $i$ carries the uniform weight $1/M$. At each iteration $k$ of MCGD, we have
$$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \hat\nabla f_{j_k}(x^k)\big), \tag{5}$$
where $(j_k)_{k\ge 0}$ is a trajectory of a Markov chain on $\{1, 2, \ldots, M\}$ that has a uniform stationary distribution. Here, $(\xi^k)_{k\ge 0} \subseteq \Xi$ and $(j_k)_{k\ge 0} \subseteq [M]$ are two different, but related, Markov chains. Starting from a deterministic and arbitrary initialization $x^0$, the iteration is illustrated by the following diagram:

    j_0 ──→ j_1 ──→ j_2 ──→ j_3 ──→ ⋯
       ↘       ↘       ↘       ↘
    x_0 ──→ x_1 ──→ x_2 ──→ x_3 ──→ ⋯

In the diagram, given each $j_k$, the next state $j_{k+1}$ is statistically independent of $j_{k-1}, \ldots, j_0$; given $j_k$ and $x^k$, the next iterate $x^{k+1}$ is statistically independent of $j_{k-1}, \ldots, j_0$ and $x^{k-1}, \ldots, x^0$.

Another application of MCGD involves a network: consider a strongly connected graph $G = (V, E)$ with the set of vertices $V = \{1, 2, \ldots, M\}$ and set of edges $E \subseteq V \times V$. Each node $j \in \{1, 2, \ldots, M\}$ possesses some data and can compute $\nabla f_j(\cdot)$. To run MCGD, we employ a token that carries the variable $x$, walking randomly over the network. When it reaches a node $j$, node $j$ reads $x$ from the token and computes $\nabla f_j(x)$ to update $x$ according to (5). Then, the token walks away to a random neighbor of node $j$.
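The finite-sum scheme (5) with a token performing a random walk can be sketched in a few lines. The following Python sketch is illustrative only: the projection onto a Euclidean ball, the stepsize exponent, and the function names are our own assumptions, not the paper's implementation.

```python
# Minimal sketch of MCGD (5): a token holding x walks over the M states
# according to the transition matrix P, and each visited state j_k contributes
# one (sub)gradient step. All names and the ball projection are illustrative.
import numpy as np

def mcgd_finite_sum(grads, P, x0, n_iters=5000, q=0.501, radius=10.0):
    """grads[j](x) returns a (sub)gradient of f_j at x; P is the M x M transition matrix."""
    rng = np.random.default_rng(0)
    M = len(grads)
    x, j = x0.copy(), 0                     # deterministic start, j_0 = 0
    for k in range(1, n_iters + 1):
        gamma = 1.0 / k**q                  # diminishing stepsize gamma_k = 1/k^q
        x = x - gamma * grads[j](x)         # use the current sample immediately
        nrm = np.linalg.norm(x)             # Proj_X with X a Euclidean ball (example)
        if nrm > radius:
            x *= radius / nrm
        j = rng.choice(M, p=P[j])           # token moves to a random neighbor
    return x
```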
### 1.1 Numerical tests

We present two kinds of numerical results. The first one shows that MCGD uses fewer samples to train both a convex model and a nonconvex model. The second one demonstrates the advantage of the faster mixing of a non-reversible Markov chain. Our results on nonconvex objectives and non-reversible chains are new.

**1. Comparison with SGD.** Let us compare:
1. MCGD (3), where $j_k$ is taken from one trajectory of the Markov chain;
2. SGD$_T$, for $T = 1, 2, 4, 8, 16, 32$, where each $j_k$ is the $T$th sample of a fresh, independent trajectory.

All trajectories are generated by starting from the same state 0. To compute $T$ gradients, SGD$_T$ uses $T$ times as many samples as MCGD. We did not try to adapt $T$ as $k$ increases because there is no theoretical guidance for doing so.

In the first test, we recover a vector $u$ from an autoregressive process, which closely resembles the first experiment in [1]. Set the matrix $A$ as a subdiagonal matrix with random entries $A_{i,i-1} \overset{\text{i.i.d.}}{\sim} \mathcal{U}[0.8, 0.99]$. Randomly sample a vector $u \in \mathbb{R}^d$, $d = 50$, with unit 2-norm. Our data $(\xi_t^1, \xi_t^2)_{t\ge 1}$ are generated according to the following autoregressive process:
$$\xi_t^1 = A\xi_{t-1}^1 + e_1 W_t, \qquad W_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1);$$
$$\bar\xi_t^2 = \begin{cases} 1, & \text{if } \langle u, \xi_t^1\rangle > 0,\\ 0, & \text{otherwise;}\end{cases} \qquad
\xi_t^2 = \begin{cases} \bar\xi_t^2, & \text{with probability } 0.8,\\ 1 - \bar\xi_t^2, & \text{with probability } 0.2.\end{cases}$$
Clearly, $(\xi_t^1, \xi_t^2)_{t\ge 1}$ forms a Markov chain. Let $\Pi$ denote the stationary distribution of this Markov chain. We recover $u$ as the solution to the following problem:
$$\operatorname*{minimize}_x \; \mathbb{E}_{(\xi^1,\xi^2)\sim\Pi}\,\ell(x;\xi^1,\xi^2).$$
We consider both convex and nonconvex loss functions, which were not done before in the literature. The convex one is the logistic loss
$$\ell(x;\xi^1,\xi^2) = -\xi^2\log\big(\sigma(\langle x,\xi^1\rangle)\big) - (1-\xi^2)\log\big(1-\sigma(\langle x,\xi^1\rangle)\big),$$
where $\sigma(t) = \frac{1}{1+\exp(-t)}$. And the nonconvex one is taken as
$$\ell(x;\xi^1,\xi^2) = \tfrac{1}{2}\big(\sigma(\langle x,\xi^1\rangle) - \xi^2\big)^2$$
from [7]. We choose $\gamma_k = \frac{1}{k^q}$ as our stepsize, where $q = 0.501$. This choice is consistent with our theory below.

Our results in Figure 1 are surprisingly positive for MCGD, even more so than we had expected. As we had expected, MCGD used significantly fewer total samples than SGD$_T$ for every $T$. But, it is surprising that MCGD did not need more gradient evaluations either. Randomly generated data must have helped homogenize the samples over the different states, making it less important for a trajectory to converge. It is important to note that SGD$_1$ and SGD$_2$, as well as SGD$_4$ in the nonconvex case, stagnate at noticeably lower accuracies because their $T$ values are too small for convergence.

[Figure 1: Comparisons of MCGD and SGD$_T$ for $T = 1, 2, 4, 8, 16, 32$, in the convex and nonconvex cases, plotted against both the number of gradient evaluations and the number of samples. $\bar x^k$ is the average of $x^1, \ldots, x^k$.]
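For reference, here is a short sketch of the data-generating chain and the gradients of the two losses used in this first experiment. The constants ($d$, the subdiagonal range, the 20% label-flip probability) follow the text above; the seed and helper names are our own illustrative choices.

```python
# Sketch of the autoregressive Markov chain and the two losses of the first
# experiment; hyperparameters follow the description above, everything else
# (seed, helper names) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = np.diag(rng.uniform(0.8, 0.99, size=d - 1), k=-1)    # subdiagonal matrix A
u = rng.standard_normal(d); u /= np.linalg.norm(u)        # unit-norm vector u
e1 = np.zeros(d); e1[0] = 1.0

def next_sample(xi1_prev):
    """One chain step: produce (xi1_t, xi2_t) from xi1_{t-1}."""
    xi1 = A @ xi1_prev + e1 * rng.standard_normal()
    label = 1.0 if u @ xi1 > 0 else 0.0
    xi2 = label if rng.random() < 0.8 else 1.0 - label     # 20% label noise
    return xi1, xi2

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def grad_logistic(x, xi1, xi2):
    """Gradient of the convex logistic loss at x."""
    return (sigmoid(x @ xi1) - xi2) * xi1

def grad_squared_sigmoid(x, xi1, xi2):
    """Gradient of the nonconvex loss (1/2)(sigma(<x, xi1>) - xi2)^2."""
    s = sigmoid(x @ xi1)
    return (s - xi2) * s * (1.0 - s) * xi1
```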
**2. Comparison of reversible and non-reversible Markov chains.** We also compare the convergence of MCGD when working with reversible and non-reversible Markov chains (the definition of reversibility is given in the next section). As mentioned in [14], transforming a reversible Markov chain into a non-reversible Markov chain can significantly accelerate the mixing process. This technique also helps to accelerate the convergence of MCGD.

In our experiment, we first construct an undirected connected graph with $n = 20$ nodes and randomly generated edges. Let $G$ denote the adjacency matrix of the graph, that is,
$$G_{i,j} = \begin{cases} 1, & \text{if } i, j \text{ are connected;}\\ 0, & \text{otherwise.}\end{cases}$$
Let $d_{\max}$ be the maximum number of outgoing edges of a node. Select $d = 10$ and sample $\beta^* \sim \mathcal{N}(0, I_d)$. The transition probability of the reversible Markov chain, known as the Metropolis-Hastings Markov chain, is then defined by
$$P_{i,j} = \begin{cases} \frac{1}{d_{\max}}, & \text{if } j \neq i \text{ and } G_{i,j} = 1;\\ 1 - \frac{d_i}{d_{\max}}, & \text{if } j = i;\\ 0, & \text{otherwise,}\end{cases}$$
where $d_i$ is the number of edges at node $i$. Obviously, $P$ is symmetric and the stationary distribution is uniform. The non-reversible Markov chain is constructed by adding cycles. The edges of these cycles are directed; let $V$ denote the adjacency matrix of these cycles. If $V_{i,j} = 1$, then $V_{j,i} = 0$. Let $w_0 > 0$ be the weight of flows along these cycles. Then we construct the transition probability of the non-reversible Markov chain as follows,
$$Q_{i,j} = \frac{W_{i,j}}{\sum_j W_{i,j}}, \qquad \text{where } W = d_{\max} P + w_0 V.$$
See [14] for an explanation of why this change makes the chain mix faster. In our experiment, we add 5 cycles of length 4, with edges existing in $G$; $w_0$ is set to $\frac{d_{\max}}{2}$.

We test MCGD on a least squares problem. First, we select $\beta^* \sim \mathcal{N}(0, I_d)$; then, for each node $i$, we generate $x_i \sim \mathcal{N}(0, I_d)$ and $y_i = x_i^T \beta^*$. The objective function is defined as
$$\sum_{i=1}^{n}\big(x_i^T\beta - y_i\big)^2.$$
The convergence results are depicted in Figure 2.

[Figure 2: Comparison of reversible and non-reversible Markov chains. The second largest eigenvalues of the reversible and non-reversible Markov chains are 0.75 and 0.66, respectively.]
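A small sketch of the two transition matrices described above, assuming the graph adjacency matrix $G$ and the directed-cycle matrix $V$ are already built (how they are randomly generated is left to the caller; function names are illustrative):

```python
# Sketch of the reversible (Metropolis-Hastings / max-degree) chain P and the
# non-reversible chain Q built by adding weighted directed cycles, following
# the constructions above. G and V are assumed given; names are illustrative.
import numpy as np

def reversible_P(G):
    """P_{ij} = 1/dmax on edges, 1 - d_i/dmax on the diagonal, 0 otherwise."""
    n = G.shape[0]
    deg = G.sum(axis=1)
    dmax = deg.max()
    P = G / dmax
    P[np.arange(n), np.arange(n)] = 1.0 - deg / dmax
    return P, dmax

def nonreversible_Q(P, V, dmax, w0):
    """Q = row-normalization of W, with W = dmax * P + w0 * V."""
    W = dmax * P + w0 * V
    return W / W.sum(axis=1, keepdims=True)

def second_largest_modulus(T):
    """|lambda_2| of a transition matrix; smaller generally means faster mixing."""
    mods = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return mods[1]
```

In the experiment above, the second largest eigenvalues are 0.75 for the reversible chain and 0.66 for the non-reversible one (Figure 2), which is the sense in which the non-reversible chain mixes faster.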
### 1.2 Known approaches and results

It is more difficult to analyze MCGD due to its biased samples. To see this, let $p_{k,j}$ be the probability of selecting $f_j$ in the $k$th iteration. SGD's uniform probability selection ($p_{k,j} \equiv \frac{1}{M}$) yields an unbiased gradient estimate
$$\mathbb{E}_{j_k}\big(\nabla f_{j_k}(x^k)\big) = C\,\nabla f(x^k) \tag{7}$$
for some $C > 0$. However, in MCGD, it is possible to have $p_{k,j} = 0$ for some $k, j$. Consider a random walk. The probability $p_{j_k,j}$ is determined by the current state $j_k$, and we have $p_{j_k,i} > 0$ only for $i \in \mathcal{N}(j_k)$ and $p_{j_k,i} = 0$ for $i \notin \mathcal{N}(j_k)$, where $\mathcal{N}(j_k)$ denotes the neighborhood of $j_k$. Therefore, we no longer have (7). All analyses of MCGD must deal with this biased expectation.

Papers [6, 5] investigate the conditional expectation $\mathbb{E}_{j_{k+\tau}\mid j_k}\big(\nabla f_{j_{k+\tau}}(x^k)\big)$. For a sufficiently large $\tau \in \mathbb{Z}_+$, it is sufficiently close to $\frac{1}{M}\nabla f(x^k)$ (but still different). In [6, 5], the authors proved that, to achieve an $\epsilon$ error, MCGD with stepsize $O(\epsilon)$ can return a solution in $O(\frac{1}{\epsilon^2})$ iterations. Their error bound is given in the ergodic sense and uses $\liminf$. The authors of [10] proved that $\liminf_k f(x^k)$ and $\mathbb{E}\,\mathrm{dist}^2(x^k, X^*)$ have almost sure convergence under diminishing stepsizes $\gamma_k = \frac{1}{k^q}$, $\frac{2}{3} < q \le 1$. Although the authors did not compute any rates, we computed that their stepsizes will lead to a solution with $\epsilon$ error in $O(\frac{1}{\epsilon^{1/(1-q)}})$ iterations for $\frac{2}{3} < q < 1$, and $O(e^{1/\epsilon})$ for $q = 1$. In [1], the authors improved the stepsizes to $\gamma_k = \frac{1}{\sqrt{k}}$ and showed ergodic convergence; in other words, to achieve $\epsilon$ error, it is enough to run MCGD for $O(\frac{1}{\epsilon^2})$ iterations. There is no non-ergodic result regarding the convergence of $f(x^k)$. It is worth mentioning that [10, 1] use time non-homogeneous Markov chains, where the transition probability can change over the iterations as long as there is still a finite mixing time. In [1], MCGD is generalized from gradient descent to mirror descent. In all these works, the Markov chain is required to be reversible, and all functions $f_i$, $i \in [M]$, are assumed to be convex. However, non-reversible chains can mix substantially faster and are thus more numerically efficient.

### 1.3 Our approaches and results

In this paper, we improve the analyses of MCGD to non-reversible finite-state Markov chains and to nonconvex functions. The former allows us to have faster mixing, and the latter frequently appears in applications. Our convergence result is given in the non-ergodic sense, though the rate results are still given in the ergodic sense. It is important to mention that, in our analysis, the mixing time of the underlying Markov chain is not tied to a fixed mixing level but can vary to different levels. This is essential because MCGD needs time to reduce its objective error from its current value to a lower one, and this time becomes longer when the current value is lower, since a more accurate Markov chain convergence, and thus a longer mixing time, is required.

When $f_1, f_2, \ldots, f_M$ are all convex, we allow them to be non-differentiable and MCGD to use subgradients, provided that $X$ is bounded. When any of them is nonconvex, we assume $X$ is the full space and $f_1, f_2, \ldots, f_M$ are differentiable with bounded gradients. The bounded-gradient assumption is due to a technical difficulty associated with nonconvexity.

Specifically, in the convex setting, we prove $\lim_k \mathbb{E} f(x^k) = f^*$ (the minimum of $f$ over $X$) for both exact and inexact MCGD with stepsizes $\gamma_k = \frac{1}{k^q}$, $\frac{1}{2} < q < 1$. The convergence rates of MCGD with exact and inexact subgradient computations are presented. The first analysis of nonconvex MCGD is also presented, with its convergence given in terms of the expectation of $\|\nabla f(x^k)\|$. These results hold for non-reversible finite-state Markov chains and can be extended to time non-homogeneous Markov chains under extra assumptions [10, Assumptions 4 and 5] and [1, Assumption C], which essentially ensure finite mixing.

Our results for finite-state Markov chains are first presented in Sections 3 and 4. They are extended to continuous-state reversible Markov chains in Section 5. The novel results are based on new techniques and approaches developed in this paper. To get the stronger results in general cases, we use varying mixing times rather than fixed ones.

We list the possible extensions of MCGD that are not discussed in this paper. The first one is accelerated versions, including Nesterov's acceleration and variance reduction schemes. The second one is the design and optimization of Markov chains to improve the convergence of MCGD.

## 2 Preliminaries

### 2.1 Markov chain

We recall some definitions, properties, and existing results about Markov chains. Although we use the finite-state time-homogeneous Markov chain, the results can be extended to more general chains under similar extra assumptions in [10, Assumptions 4, 5] and [1, Assumption C].

**Definition 1 (finite-state time-homogeneous Markov chain)** Let $P$ be an $M \times M$ matrix with real-valued elements. A stochastic process $X_1, X_2, \ldots$ in a finite state space $[M] := \{1, 2, \ldots, M\}$ is called a time-homogeneous Markov chain with transition matrix $P$ if, for $k \in \mathbb{N}$, $i, j \in [M]$, and $i_0, i_1, \ldots, i_{k-1} \in [M]$, we have
$$\mathbb{P}(X_{k+1} = j \mid X_0 = i_0, X_1 = i_1, \ldots, X_{k-1} = i_{k-1}, X_k = i) = \mathbb{P}(X_{k+1} = j \mid X_k = i) = P_{i,j}. \tag{8}$$

Let the probability distribution of $X_k$ be denoted as the non-negative row vector $\pi^k = (\pi_1^k, \pi_2^k, \ldots, \pi_M^k)$, that is, $\mathbb{P}(X_k = j) = \pi_j^k$. $\pi^k$ satisfies $\sum_{i=1}^M \pi_i^k = 1$. When the Markov chain is time-homogeneous, we have $\pi^k = \pi^{k-1}P$ and
$$\pi^k = \pi^{k-1}P = \cdots = \pi^0 P^k, \tag{9}$$
for $k \in \mathbb{N}$, where $P^k$ denotes the $k$th power of $P$. A Markov chain is irreducible if, for any $i, j \in [M]$, there exists $k$ such that $(P^k)_{i,j} > 0$. State $i \in [M]$ is said to have a period $d$ if $P_{i,i}^k = 0$ whenever $k$ is not a multiple of $d$ and $d$ is the greatest integer with this property. If $d = 1$, then we say state $i$ is aperiodic.
If every state is aperiodic, the Markov chain is said to be aperiodic. Any time-homogeneous, irreducible, and aperiodic Markov chain has a stationary distribution $\pi^* = \lim_k \pi^k = [\pi_1^*, \pi_2^*, \ldots, \pi_M^*]$ with $\sum_{i=1}^M \pi_i^* = 1$ and $\min_i\{\pi_i^*\} > 0$, and $\pi^* = \pi^* P$. It also holds that
$$\lim_{k\to\infty} P^k = [(\pi^*); (\pi^*); \ldots; (\pi^*)] =: \Pi^* \in \mathbb{R}^{M\times M}, \tag{10}$$
i.e., every row of $\Pi^*$ equals $\pi^*$. The largest eigenvalue of $P$ is 1, and the corresponding left eigenvector is $\pi^*$.

**Assumption 1** The Markov chain $(X_k)_{k\ge 0}$ is time-homogeneous, irreducible, and aperiodic. It has a transition matrix $P$ and stationary distribution $\pi^*$.

### 2.2 Mixing time

Mixing time measures how long a Markov chain evolves until its current state has a distribution very close to its stationary distribution. The literature has a thorough investigation of various kinds of mixing times, with the majority for reversible Markov chains (that is, $\pi_i^* P_{i,j} = \pi_j^* P_{j,i}$). Mixing times of non-reversible Markov chains are discussed in [3]. In this part, we consider a new type of mixing time for non-reversible Markov chains. The proofs are based on basic matrix analysis. Our mixing time gives us a direct relationship between $k$ and the deviation of the distribution of the current state from the stationary distribution.

Before stating the lemma, we review some basic notions in linear algebra. Let $\mathbb{C}$ be the complex field and $\mathbb{C}^n$ the space of $n$-dimensional complex vectors. The modulus of a complex number $a \in \mathbb{C}$ is written $|a|$. For a vector $x \in \mathbb{C}^n$, the $\ell_\infty$ and $\ell_2$ norms are defined as $\|x\|_\infty := \max_i |x_i|$ and $\|x\|_2 := \sqrt{\sum_{i=1}^n |x_i|^2}$. For a matrix $A = [a_{i,j}] \in \mathbb{C}^{m\times n}$, its entry-wise max norm and Frobenius norm are $\|A\|_\infty := \max_{i,j}|a_{i,j}|$ and $\|A\|_F := \sqrt{\sum_{i,j}|a_{i,j}|^2}$, respectively. We know $P^k \to \Pi^*$ as $k \to \infty$. The following lemma presents a deviation bound for finite $k$.

**Lemma 1** Let Assumption 1 hold, let $\lambda_i(P) \in \mathbb{C}$ be the $i$th largest eigenvalue of $P$, and let
$$\lambda(P) := \frac{\max\{|\lambda_2(P)|, |\lambda_M(P)|\} + 1}{2}.$$
Then, we can bound the largest entry-wise absolute value of the deviation matrix $\delta_k := \Pi^* - P^k \in \mathbb{R}^{M\times M}$ as
$$\|\delta_k\|_\infty \le C_P \cdot \lambda^k(P) \tag{11}$$
for $k \ge K_P$, where $C_P$ is a constant that also depends on the Jordan canonical form of $P$ and $K_P$ is a constant that depends on $\lambda(P)$ and $\lambda_2(P)$. Their formulas are given in (45) and (46) in the Supplementary Material.

**Remark 1** If $P$ is symmetric, then the $\lambda_i(P)$ are all real and nonnegative, $K_P = 0$, and $C_P \le M^{\frac{3}{2}}$. Furthermore, (11) can be improved by directly using $\lambda_2^k(P)$ on the right side, as $\|\delta_k\|_\infty \le \|\delta_k\|_F \le M^{\frac{3}{2}} \lambda_2^k(P)$, $k \ge 0$.
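As a quick numerical illustration of (9)-(11), one can propagate $\pi^k = \pi^0 P^k$ and watch the entry-wise deviation $\|\Pi^* - P^k\|_\infty$ decay geometrically. The 3-state chain below is a made-up example, not one from the paper.

```python
# Numerical illustration of (9)-(11) on a small, made-up 3-state chain:
# pi^k = pi^0 P^k converges to pi*, and ||Pi* - P^k||_inf decays geometrically
# at a rate governed by the second-largest eigenvalue modulus.
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])           # irreducible and aperiodic

# Stationary distribution pi* as the leading left eigenvector of P.
vals, vecs = np.linalg.eig(P.T)
pi_star = np.real(vecs[:, np.argmax(np.real(vals))])
pi_star /= pi_star.sum()

pi = np.array([1.0, 0.0, 0.0])               # arbitrary initial distribution pi^0
Pi_star = np.tile(pi_star, (3, 1))           # every row of Pi* equals pi*
Pk = np.eye(3)
for k in range(1, 31):
    pi = pi @ P                              # pi^k = pi^{k-1} P
    Pk = Pk @ P
    dev = np.max(np.abs(Pi_star - Pk))       # ||delta_k||_inf, cf. (11)
    if k % 10 == 0:
        print(k, dev)                        # shrinks roughly like lambda_2(P)^k
```

For this particular chain the deviation shrinks by roughly a factor of $\lambda_2(P) = 0.5$ per step; Lemma 1 guarantees such geometric decay in general, with rate $\lambda(P)$ and a burn-in of $K_P$ steps for non-reversible chains.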
## 3 Convergence analysis for convex minimization

This part considers the convergence of MCGD in the convex case, i.e., $f_1, f_2, \ldots, f_M$ and $X$ are all convex. We investigate the convergence of scheme (5). We prove non-ergodic convergence of the expected objective value sequence under diminishing non-summable stepsizes, where the stepsizes are required to be "almost" square summable. Therefore, the convergence requirements are almost identical to those of SGD. This section uses the following assumption.

**Assumption 2** The set $X$ is assumed to be convex and compact.

Now, we present the convergence results for MCGD in the convex (but not necessarily differentiable) case. Let $f^*$ be the minimum value of $f$ over $X$.

**Theorem 1** Let Assumptions 1 and 2 hold and $(x^k)_{k\ge 0}$ be generated by scheme (5). Assume that $f_i$, $i \in [M]$, are convex functions, and the stepsizes satisfy
$$\sum_k \gamma_k = +\infty, \qquad \sum_k \ln k \cdot \gamma_k^2 < +\infty. \tag{12}$$
Then, we have
$$\lim_k \mathbb{E} f(x^k) = f^*. \tag{13}$$
Define $\psi(P) := \max\{1, \frac{1}{\ln(1/\lambda(P))}\}$. Then
$$\mathbb{E}\big(f(\bar x^k) - f^*\big) = O\Big(\frac{\psi(P)}{\sum_{i=1}^k \gamma_i}\Big), \tag{14}$$
where $\bar x^k := \frac{\sum_{i=1}^k \gamma_i x^i}{\sum_{i=1}^k \gamma_i}$. Therefore, if we select the stepsize $\gamma_k = O(\frac{1}{k^q})$, $\frac{1}{2} < q < 1$, we get the rate $\mathbb{E}(f(\bar x^k) - f^*) = O(\frac{\psi(P)}{k^{1-q}})$. Furthermore, consider the inexact version of MCGD:
$$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k(\hat\nabla f_{j_k}(x^k) + e^k)\big), \tag{15}$$
where the noise sequence $(e^k)_{k\ge 0}$ is arbitrary but obeys
$$\sum_k \gamma_k \|e^k\| \ln k < +\infty. \tag{16}$$
Then, for iteration (15), results (13) and (14) still hold; furthermore, if $\|e^k\| = O(\frac{1}{k^p})$ with $p > \frac{1}{2}$ and $\gamma_k = O(\frac{1}{k^q})$, $\frac{1}{2} < q < 1$, the rate $\mathbb{E}(f(\bar x^k) - f^*) = O(\frac{\psi(P)}{k^{1-q}})$ also holds.

The stepsize requirement (12) is nearly identical to that of SGD and subgradient algorithms. In the theorem above, we use the stepsize setting $\gamma_k = O(\frac{1}{k^q})$, $\frac{1}{2} < q < 1$. This kind of stepsize also works for SGD and subgradient algorithms. The convergence rate of MCGD is $O(\frac{1}{\sum_{i=1}^k \gamma_i}) = O(\frac{1}{k^{1-q}})$, which is the same as that of SGD and subgradient algorithms for this kind of stepsize.

## 4 Convergence analysis for nonconvex minimization

This section considers the convergence of MCGD when one or more of the $f_i$ are nonconvex. In this case, we assume $f_i$, $i = 1, 2, \ldots, M$, are differentiable and $\nabla f_i$ is Lipschitz with constant $L$.² We also set $X$ as the full space. We study the following scheme
$$x^{k+1} = x^k - \gamma_k \nabla f_{j_k}(x^k). \tag{17}$$
We prove non-ergodic convergence of the expected gradient norm of $f$ under diminishing non-summable stepsizes. The stepsize requirements in this section are slightly stronger than those in the convex case, with an extra $\ln k$ factor. In this part, we use the following assumption.

² This is for the convenience of the presentation in the proofs. If each $\nabla f_i$ has its own Lipschitz constant $L_i$, it is possible to improve our results slightly. But, we simply set $L := \max_i\{L_i\}$.

**Assumption 3** The gradients of $f_i$ are assumed to be bounded, i.e., there exists $D > 0$ such that
$$\|\nabla f_i(x)\| \le D, \quad i \in [M]. \tag{18}$$
We use this new assumption because $X$ is now the full space, and we have to directly bound the size of $\|\nabla f_i(x)\|$. In the nonconvex case, we cannot obtain objective value convergence, and we only bound the gradients. Now, we are prepared to present our convergence results of nonconvex MCGD.

**Theorem 2** Let Assumptions 1 and 3 hold and $(x^k)_{k\ge 0}$ be generated by scheme (17). Also, assume $f_i$ is differentiable and $\nabla f_i$ is $L$-Lipschitz, and the stepsizes satisfy
$$\sum_k \gamma_k = +\infty, \qquad \sum_k \ln^2 k \cdot \gamma_k^2 < +\infty. \tag{19}$$
Then, we have
$$\lim_k \mathbb{E}\|\nabla f(x^k)\| = 0, \tag{20}$$
$$\mathbb{E}\Big(\min_{1\le i\le k}\{\|\nabla f(x^i)\|^2\}\Big) = O\Big(\frac{\psi(P)}{\sum_{i=1}^k\gamma_i}\Big), \tag{21}$$
where $\psi(P)$ is as defined in Theorem 1. If we select the stepsize as $\gamma_k = O(\frac{1}{k^q})$, $\frac{1}{2} < q < 1$, then we get the rate $\mathbb{E}\min_{1\le i\le k}\{\|\nabla f(x^i)\|^2\} = O(\frac{\psi(P)}{k^{1-q}})$. Furthermore, let $(e^k)_{k\ge 0}$ be a sequence of noise and consider the inexact nonconvex MCGD iteration:
$$x^{k+1} = x^k - \gamma_k\big(\nabla f_{j_k}(x^k) + e^k\big). \tag{22}$$
If the noise sequence obeys
$$\sum_{k=1}^{\infty} \gamma_k \|e^k\| < +\infty, \tag{23}$$
then the convergence results (20) and (21) still hold for inexact nonconvex MCGD. In addition, if we set $\gamma_k = O(\frac{1}{k^q})$, $\frac{1}{2} < q < 1$, and the noise satisfies $\|e^k\| = O(\frac{1}{k^p})$ with $p + q > 1$, then (20) still holds and $\mathbb{E}\min_{1\le i\le k}\{\|\nabla f(x^i)\|^2\} = O(\frac{\psi(P)}{k^{1-q}})$.

The proof of Theorem 2 is different from the previous one. In particular, due to nonconvexity, we cannot expect any sort of convergence to $f(x^*)$, where $x^* \in \operatorname*{argmin} f$. To this end, we use the Lipschitz continuity of $\nabla f_i$ ($i \in [M]$) to derive the "descent". Here, the "$O$" contains a polynomial composition of the constants $D$ and $L$. Compared with MCGD in the convex case, the stepsize requirements of nonconvex MCGD become a tad higher; in the summability part, we need $\sum_k \ln^2 k\cdot\gamma_k^2 < +\infty$ rather than $\sum_k \ln k\cdot\gamma_k^2 < +\infty$. Nevertheless, we can still use $\gamma_k = O(\frac{1}{k^q})$ with $\frac{1}{2} < q < 1$.
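A compact sketch of the unconstrained nonconvex scheme (17), tracking the quantity $\min_{1\le i\le k}\|\nabla f(x^i)\|^2$ that Theorem 2 bounds; the inputs `grads`, `full_grad`, and `P` are illustrative placeholders, not part of the paper's code.

```python
# Sketch of nonconvex MCGD (17): no projection (X is the full space), stepsize
# gamma_k = 1/k^q with 1/2 < q < 1, and we track min_i ||grad f(x^i)||^2,
# the quantity bounded in Theorem 2. All inputs are illustrative placeholders.
import numpy as np

def mcgd_nonconvex(grads, full_grad, P, x0, n_iters=5000, q=0.501):
    """grads[j](x): gradient of f_j; full_grad(x): gradient of f = (1/M) sum_j f_j."""
    rng = np.random.default_rng(1)
    x, j = x0.copy(), 0
    best_sq_grad = np.inf
    for k in range(1, n_iters + 1):
        gamma = 1.0 / k**q
        x = x - gamma * grads[j](x)              # x^{k+1} = x^k - gamma_k grad f_{j_k}(x^k)
        best_sq_grad = min(best_sq_grad, float(np.linalg.norm(full_grad(x))**2))
        j = rng.choice(len(grads), p=P[j])       # next state of the Markov chain
    return x, best_sq_grad
```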
## 5 Convergence analysis for continuous state space

When the state space $\Xi$ is a continuum, there are infinitely many possible states. In this case, we consider an infinite-state Markov chain that is time-homogeneous and reversible. Using the results in [8, Theorem 4.9], the mixing time of this kind of Markov chain still enjoys geometric decay as in (11). Since Lemma 1 is based on a linear-algebra analysis, it no longer applies to the continuous case. Nevertheless, the previous results still hold with nearly unchanged proofs under the following assumption:

**Assumption 4** For any $\xi \in \Xi$, $|F(x;\xi) - F(y;\xi)| \le L\|x - y\|$, $\sup_{x\in X,\xi\in\Xi}\{\|\hat\nabla F(x;\xi)\|\} \le D$, $\mathbb{E}_\xi \hat\nabla F(x;\xi) \in \partial\,\mathbb{E}_\xi F(x;\xi)$, and $\sup_{x,y\in X,\xi\in\Xi}|F(x;\xi) - F(y;\xi)| \le H$.

We consider the general scheme
$$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k(\hat\nabla F(x^k;\xi^k) + e^k)\big), \tag{24}$$
where $\xi^k$ are samples on a Markov chain trajectory. If $e^k \equiv 0$, the scheme reduces to (3).

**Corollary 1** Assume $F(\cdot;\xi)$ is convex for each $\xi\in\Xi$. Let the stepsizes satisfy (12), $(x^k)_{k\ge 0}$ be generated by Algorithm (24), and $(e^k)_{k\ge 0}$ satisfy (16). Let $F^* := \min_{x\in X}\mathbb{E}_\xi(F(x;\xi))$. If Assumption 4 holds and the Markov chain is time-homogeneous, irreducible, aperiodic, and reversible, then we have
$$\lim_k \mathbb{E}\big(\mathbb{E}_\xi(F(x^k;\xi)) - F^*\big) = 0, \qquad \mathbb{E}\big(\mathbb{E}_\xi(F(\bar x^k;\xi)) - F^*\big) = O\Big(\frac{\max\{1, 1/\ln(1/\lambda)\}}{\sum_{i=1}^k \gamma_i}\Big),$$
where $\bar x^k$ is the weighted average defined in Theorem 1 and $0 < \lambda < 1$ is the geometric rate of the mixing time of the Markov chain (which corresponds to $\lambda(P)$ in the finite-state case).

Next, we present our result for a possibly nonconvex objective function $F(\cdot;\xi)$ under the following assumption.

**Assumption 5** For any $\xi\in\Xi$, $F(x;\xi)$ is differentiable, and $\|\nabla F(x;\xi) - \nabla F(y;\xi)\| \le L\|x-y\|$. In addition, $\sup_{x\in X,\xi\in\Xi}\{\|\nabla F(x;\xi)\|\} < +\infty$, $X$ is the full space, and $\mathbb{E}_\xi \nabla F(x;\xi) = \nabla\,\mathbb{E}_\xi F(x;\xi)$.

Since $F(x;\xi)$ is differentiable and $X$ is the full space, the iteration reduces to
$$x^{k+1} = x^k - \gamma_k\big(\nabla F(x^k;\xi^k) + e^k\big). \tag{25}$$

**Corollary 2** Let the stepsizes satisfy (19), $(x^k)_{k\ge 0}$ be generated by Algorithm (25), the noises obey (23), and Assumption 5 hold. Assume the Markov chain is time-homogeneous, irreducible, aperiodic, and reversible. Then, we have
$$\lim_k \mathbb{E}\|\nabla \mathbb{E}_\xi(F(x^k;\xi))\| = 0, \qquad \mathbb{E}\Big(\min_{1\le i\le k}\{\|\nabla \mathbb{E}_\xi(F(x^i;\xi))\|^2\}\Big) = O\Big(\frac{\max\{1, 1/\ln(1/\lambda)\}}{\sum_{i=1}^k \gamma_i}\Big),$$
where $0 < \lambda < 1$ is the geometric rate of the mixing time of the Markov chain.

## 6 Conclusion

In this paper, we have analyzed the stochastic gradient descent method where the samples are taken on a trajectory of a Markov chain. One of our main contributions is the non-ergodic convergence analysis for convex MCGD, which uses a novel line of analysis. The result is then extended to inexact gradients. This analysis lets us establish convergence for non-reversible finite-state Markov chains and for nonconvex minimization problems. Our results are useful in cases where it is impossible or expensive to directly take samples from a distribution, or where the distribution is not even known, but sampling via a Markov chain is possible. Our results also apply to decentralized learning over a network, where we can employ a random walker to traverse the network and minimize the objective defined over the samples that are held at the nodes in a distributed fashion.

## References

[1] John C. Duchi, Alekh Agarwal, Mikael Johansson, and Michael I. Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.
[2] Martin Dyer, Alan Frieze, Ravi Kannan, Ajai Kapoor, Ljubomir Perkovic, and Umesh Vazirani. A mildly exponential time algorithm for approximating the number of solutions to a multidimensional knapsack problem. Combinatorics, Probability and Computing, 2(3):271–284, 1993.
[3] James Allen Fill. Eigenvalue bounds on convergence to stationarity for nonreversible Markov chains, with an application to the exclusion process. The Annals of Applied Probability, 1(1):62–87, 1991.
[4] Mark Jerrum and Alistair Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. Citeseer, 1996.
[5] Björn Johansson, Maben Rabi, and Mikael Johansson. A simple peer-to-peer algorithm for distributed optimization in sensor networks. In Decision and Control, 2007 46th IEEE Conference on, pages 4705–4710. IEEE, 2007.
[6] Björn Johansson, Maben Rabi, and Mikael Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2009.
[7] Song Mei, Yu Bai, Andrea Montanari, et al. The landscape of empirical risk for nonconvex losses. The Annals of Statistics, 46(6A):2747–2774, 2018.
[8] Ravi Montenegro, Prasad Tetali, et al. Mathematical aspects of mixing times in Markov chains. Foundations and Trends in Theoretical Computer Science, 1(3):237–354, 2006.
[9] Rufus Oldenburger et al. Infinite powers of matrices and characteristic roots. Duke Mathematical Journal, 6(2):357–361, 1940.
[10] S. Sundhar Ram, A. Nedić, and Venugopal V. Veeravalli. Incremental stochastic subgradient algorithms for convex optimization. SIAM Journal on Optimization, 20(2):691–717, 2009.
[11] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[12] Herbert Robbins and David Siegmund. A convergence theorem for non-negative almost supermartingales and some applications. In Optimizing Methods in Statistics, pages 233–257. Elsevier, 1971.
[13] Ralph Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 2015.
[14] Konstantin S. Turitsyn, Michael Chertkov, and Marija Vucelja. Irreversible Monte Carlo algorithms for efficient sampling. Physica D: Nonlinear Phenomena, 240(4-5):410–414, 2011.
[15] Jinshan Zeng and Wotao Yin. On nonconvex decentralized gradient descent. IEEE Transactions on Signal Processing, 66(11):2834–2848, 2018.