Journal of Machine Learning Research 18 (2017) 1-23. Submitted 10/16; Revised 6/17; Published 10/17.

Accelerating Stochastic Composition Optimization

Mengdi Wang (mengdiw@princeton.edu)
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA

Ji Liu (ji.liu.uwisc@gmail.com)
Department of Computer Science and Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627, USA

Ethan X. Fang (xxf13@psu.edu)
Department of Statistics and Department of Industrial and Manufacturing Engineering, Pennsylvania State University, University Park, PA 16802, USA

Editor: Inderjit Dhillon

Abstract

We consider the stochastic nested composition optimization problem, in which the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method. This algorithm updates the solution based on noisy gradient queries using a two-timescale iteration. ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with a nonsmooth regularization penalty. We show that ASC-PG exhibits faster convergence than the best known algorithms, and that it achieves the optimal sample-error complexity in several important special cases. We demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.

Keywords: large-scale optimization, stochastic gradient, composition optimization, sample complexity

1. Introduction

The popular stochastic gradient methods are well suited for minimizing expected-value objective functions or the sum of a large number of loss functions. Stochastic gradient methods find wide applications in estimation, online learning, and training of deep neural networks.
Despite their popularity, they do not apply to the minimization of a nonlinear function of expected values, i.e., a composition between two expected-value functions. In this paper, we consider the stochastic composition problem

$$\min_{x \in X} \; H(x) := \underbrace{\mathbb{E}_v\big(f_v(\mathbb{E}_w(g_w(x)))\big)}_{=:F(x)} + R(x), \tag{1}$$

where $f(g(x)) = (f \circ g)(x)$ denotes the function composition, $g_w(\cdot): \Re^n \mapsto \Re^m$ and $f_v(\cdot): \Re^m \mapsto \Re$ are continuously differentiable functions, $v, w$ are random variables, $R(x): \Re^n \mapsto \Re \cup \{+\infty\}$ is an extended real-valued closed convex function, and $X$ is a convex and closed set. We assume throughout that there exists at least one optimal solution $x^* \in X$ to problem (1). We focus on the case where $f_v$ and $g_w$ are smooth, but we allow $R$ to be a nonsmooth penalty such as the $\ell_1$-norm. We do not require either the outer function $f_v$ or the inner function $g_w$ to be convex or monotone. As a result, the composition problem cannot be reformulated into a saddle point problem in general. Our algorithmic objective is to develop efficient algorithms for solving problem (1) based on random evaluations of $f_v$, $g_w$ and their gradients. Our theoretical objective is to analyze the rate of convergence of the stochastic algorithm and to improve it when possible. In the online setting, the iteration complexity of our stochastic methods can be interpreted as a sample-error complexity upper bound for estimating the optimal solution of problem (1).

*Both authors contributed equally. ©2017 Mengdi Wang, Ji Liu and Ethan X. Fang. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/16-504.html.

1.1 Motivating Examples

One motivating example is reinforcement learning (Sutton and Barto, 1998). Consider a controllable Markov chain with states $1, \ldots, S$. Estimating the value-per-state of a fixed control policy $\pi$ is known as on-policy learning.
It can be cast as an $S \times S$ system of Bellman equations:

$$\gamma P^\pi V^\pi + r^\pi = V^\pi,$$

where $\gamma \in (0, 1)$ is a discount factor, $P^\pi_{s\tilde{s}}$ is the transition probability from state $s$ to state $\tilde{s}$, and $r^\pi_s$ is the expected state transition reward at state $s$. The solution $V^\pi$ to the Bellman equation is the value vector, with $V^\pi(s)$ being the total expected reward starting at state $s$. In the blackbox simulation environment, $P^\pi$ and $r^\pi$ are unknown, but can be sampled from a simulator. As a result, solving the Bellman equation becomes a special case of the stochastic composition optimization problem:

$$\min_{x} \; \big\|\mathbb{E}[A]x - \mathbb{E}[b]\big\|^2, \tag{2}$$

where $A$ is a random matrix and $b$ is a random vector such that $\mathbb{E}[A] = I - \gamma P^\pi$ and $\mathbb{E}[b] = r^\pi$. This is the composition of the squared-norm function $f(\cdot) \equiv f_v(\cdot) = \|\cdot\|^2$ and the expected linear function $g(x) = \mathbb{E}[A]x - \mathbb{E}[b]$. We will give more details on the reinforcement learning application in Section 4.

Another motivating example is risk-averse learning. For example, consider the mean-variance minimization problem

$$\min_{x \in X} \; \mathbb{E}_{a,b}[h(x; a, b)] + \lambda \mathrm{Var}_{a,b}[h(x; a, b)],$$

where $h(x; a, b)$ is some loss function parameterized by random variables $a$ and $b$, and $\lambda > 0$ is a regularization parameter. In particular, the mean-variance minimization takes the composition form with inner function $g(x) = \big(x, \mathbb{E}_{a,b}[h(x; a, b)]\big)$ and outer function $f(y_1, y_2) = \mathbb{E}_{a,b}[h(y_1; a, b)] + \lambda\, \mathbb{E}_{a,b}\big[\big(h(y_1; a, b) - y_2\big)^2\big]$. Its batch version takes the form

$$\min_{x \in X} \; \frac{1}{N}\sum_{i=1}^{N} h(x; a_i, b_i) + \frac{\lambda}{N}\sum_{i=1}^{N}\Big(h(x; a_i, b_i) - \frac{1}{N}\sum_{j=1}^{N} h(x; a_j, b_j)\Big)^2.$$

Here the variance term is the composition of the mean-square function and an expected loss function. Although the stochastic composition problem (1) has barely been studied, it finds a broad spectrum of emerging applications in estimation and machine learning (see Wang et al. (2017) for a list of applications).
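To make the composition structure concrete, the following sketch (an illustrative toy; the loss $h$, the data points, and $\lambda$ are arbitrary choices, not from the paper) verifies numerically that the batch mean-variance objective equals the composed form $f(g(x))$:

```python
# Illustrative toy (not from the paper): check that the batch mean-variance
# objective equals its composition form f(g(x)). The loss h(x; a, b), the
# data, and lambda below are arbitrary choices for the demo.

lam = 0.5
data = [(1.0, 0.5), (2.0, -1.0), (0.5, 2.0)]  # sampled (a_i, b_i) pairs

def h(x, a, b):
    return (a * x - b) ** 2  # an arbitrary smooth loss

def direct_objective(x):
    vals = [h(x, a, b) for a, b in data]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean + lam * var

def composed_objective(x):
    # inner function: g(x) = (x, E[h(x; a, b)])
    y1 = x
    y2 = sum(h(x, a, b) for a, b in data) / len(data)
    # outer function: f(y1, y2) = E[h(y1; a, b)] + lam * E[(h(y1; a, b) - y2)^2]
    return (sum(h(y1, a, b) for a, b in data) / len(data)
            + lam * sum((h(y1, a, b) - y2) ** 2 for a, b in data) / len(data))

x = 1.3
assert abs(direct_objective(x) - composed_objective(x)) < 1e-12
print("objectives agree at x = 1.3:", direct_objective(x))
```

Note how the variance term forces the nested structure: the inner expectation $y_2$ appears inside the squared outer loss, which is exactly what prevents a plain stochastic-gradient treatment.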
Fast optimization algorithms with theoretical guarantees will lead to new computation tools and online learning methods for a broader problem class, no longer limited to the expectation minimization problem.

1.2 Related Works and Contributions

Contrary to the expectation minimization problem, "unbiased" gradient samples are no longer available for the stochastic composition problem (1). The objective is nonlinear in the joint probability distribution of $(w, v)$, which substantially complicates the problem. In a recent work by Dentcheva et al. (2016), a special case of the stochastic composition problem, i.e., risk-averse optimization, has been studied. A central limit theorem has been established, showing that the $K$-sample batch problem converges to the true problem at the rate of $O(1/\sqrt{K})$ in a proper sense. For the case where $R(x) = 0$, Wang et al. (2017) proposed and analyzed a class of stochastic compositional gradient/subgradient methods (SCGD). The SCGD involves two iterations of different time scales: one for estimating $x^*$ by a stochastic quasi-gradient iteration, the other for maintaining a running estimate of $g(x^*)$. Wang and Liu (2016) study the SCGD in the setting where samples are corrupted with Markov noises (instead of i.i.d. zero-mean noises). Both works establish almost sure convergence of the algorithm and several convergence rate results, which were the best-known convergence rates prior to the current paper. The idea of using two-timescale quasi-gradients traces back to the earlier work of Ermoliev (1976), which proposed for the first time a stochastic iteration that updates the solution and an auxiliary variable using two different stepsizes. The incremental treatment of the proximal gradient iteration has been studied extensively for the expectation minimization problem; see for example Nedić and Bertsekas (2001); Nemirovski et al. (2009); Wang et al. (2015); Wang and Bertsekas (2016) and references therein. However, except for Wang et al.
(2017) and Wang and Liu (2016), all of these works focus on variants of the expectation minimization problem and do not apply to the stochastic composition problem (1). Another work partially related to this paper is by Dai et al. (2017). They consider a special case of problem (1) arising in kernel estimation, where they assume that all functions $f_v$ are convex and that their conjugate functions $f_v^*$ can be easily obtained/sampled. Under these additional assumptions, they essentially rewrite the problem into a saddle point optimization involving functional variables.

In this paper, we propose a new accelerated stochastic compositional proximal gradient (ASC-PG) method that applies to the penalized problem (1), which is a more general problem than the one considered in Wang et al. (2017). We use a coupled martingale stochastic analysis to show that ASC-PG achieves significantly better sample-error complexity in various cases. We also show that ASC-PG exhibits optimal sample-error complexity in two important special cases: the case where the outer function is linear and the case where the inner function is linear. Note that the current work has a preliminary conference version (Wang et al., 2016). The current journal version provides more complete technical details as well as important theoretical extensions (Theorem 3).

Our major contributions are summarized as follows:

1. We propose the first stochastic proximal-gradient method for the stochastic composition problem. This is the first algorithm that is able to address the nonsmooth regularization penalty $R(\cdot)$ without deteriorating the convergence rate.

2. We obtain a convergence rate $O(K^{-4/9})$ for smooth optimization problems that are not necessarily convex, where $K$ is the number of queries to the stochastic first-order oracle. This improves the best known convergence rate and provides a new benchmark for the stochastic composition problem.

3.
We provide a comprehensive analysis and results that apply to various special cases. In particular, our results contain as special cases the known optimal rate results for the expectation minimization problem, i.e., $O(1/\sqrt{K})$ for general objectives and $O(1/K)$ for strongly convex objectives.

4. In the special case where the inner function $g(\cdot)$ is a linear mapping, we show that it is sufficient to use one timescale to guarantee convergence. Our result achieves the unimprovable rate of convergence $O(1/K)$ for optimally strongly convex optimization and $O(1/\sqrt{K})$ for convex optimization and nonconvex smooth optimization. It implies that inner linearity does not bring fundamental difficulty to the stochastic composition problem.

5. We show that the proposed method leads to a new on-policy reinforcement learning algorithm. The new learning algorithm achieves the optimal convergence rate $O(1/\sqrt{K})$ for solving Bellman equations based on $K$ observations of state-to-state transitions.

In comparison with Wang et al. (2017), our analysis is more succinct and leads to stronger results. To the best of our knowledge, Theorems 1, 2, and 3 in this paper provide the best-known rates for the stochastic composition problem.

Paper Organization. Section 2 states the sampling oracle and the accelerated stochastic compositional proximal gradient algorithm (ASC-PG). Section 3 states the convergence rate results in the case of a general nonconvex objective and in the case of a strongly convex objective, respectively. Section 4 describes an application of ASC-PG to reinforcement learning and gives numerical experiments.

Notations and Definitions. For $x \in \Re^n$, we denote by $x'$ its transpose and by $\|x\|$ its Euclidean norm (i.e., $\|x\| = \sqrt{x'x}$). For two sequences $\{y_k\}$ and $\{z_k\}$, we write $y_k = O(z_k)$ if there exists a constant $c > 0$ such that $\|y_k\| \le c\|z_k\|$ for each $k$. We denote by $I^{\text{value}}_{\text{condition}}$ the indicator function, which returns "value" if the condition is satisfied and returns 1 otherwise.
We denote by $H^*$ the optimal objective function value of problem (1), by $X^*$ the set of optimal solutions, and by $P_S(x)$ the Euclidean projection of $x$ onto $S$ for any convex set $S$. We also denote for short $f(y) = \mathbb{E}_v[f_v(y)]$ and $g(x) = \mathbb{E}_w[g_w(x)]$.

2. Algorithm

We focus on the black-box sampling environment. Suppose that we have access to a stochastic first-order oracle, which returns random realizations of first-order information upon queries. This is a typical simulation oracle that is available in both online and batch learning. More specifically, assume that we are given a Sampling Oracle (SO) such that:

- Given some $x \in \Re^n$, the SO returns a random function value $g_w(x)$ and a noisy gradient $\nabla g_w(x)$.
- Given some $y \in \Re^m$, the SO returns a noisy gradient $\nabla f_v(y)$.

Now we propose the Accelerated Stochastic Compositional Proximal Gradient (ASC-PG) algorithm; see Algorithm 1. ASC-PG is a generalization of the SCGD proposed by Wang et al. (2017), in which a proximal step is used to replace the projection step.

Algorithm 1 Accelerated Stochastic Compositional Proximal Gradient (ASC-PG)

Require: $x_1 \in \Re^n$, SO, $K$, stepsize sequences $\{\alpha_k\}_{k=1}^K$ and $\{\beta_k\}_{k=1}^K$.
Ensure: $\{x_k\}_{k=1}^K$.
1: Initialize $z_1 = x_1$ and $y_1 = 0$.
2: for $k = 1, \ldots, K$ do
3: Query the SO and obtain gradient samples $\nabla f_{v_k}(y_k)$, $\nabla g_{w_k}(x_k)$.
4: Update the main iterate by
$$x_{k+1} = \mathrm{prox}_{\alpha_k R(\cdot)}\big(x_k - \alpha_k \nabla g_{w_k}(x_k)^\top \nabla f_{v_k}(y_k)\big).$$
5: Update auxiliary iterates by an extrapolation-smoothing scheme:
$$z_{k+1} = \Big(1 - \frac{1}{\beta_k}\Big)x_k + \frac{1}{\beta_k}x_{k+1},$$
$$y_{k+1} = (1 - \beta_k)y_k + \beta_k\, g_{w_{k+1}}(z_{k+1}),$$
where the sample $g_{w_{k+1}}(z_{k+1})$ is obtained via querying the SO.
6: end for

In Algorithm 1, the extrapolation-smoothing scheme (i.e., the $(y, z)$-step) is critical to the acceleration of convergence. The acceleration is due to the fast running estimation of the unknown quantity $g(x_k) := \mathbb{E}_w[g_w(x_k)]$.
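To illustrate the iteration, here is a minimal sketch of Algorithm 1 applied to a toy instance of problem (2) with an $\ell_1$ penalty. The sampling model, noise level, penalty weight, and step sizes are illustrative assumptions, not the paper's experimental setup; since the inner map is linear here, a single timescale $\alpha_k = \beta_k = k^{-1}$ is used.

```python
import numpy as np

# A minimal sketch of Algorithm 1 (ASC-PG) on a toy instance of problem (2)
# with an l1 penalty R(x) = lam * ||x||_1. The sampling model, noise level,
# penalty weight, and step sizes below are illustrative assumptions, not the
# paper's experimental setup.

rng = np.random.default_rng(0)
n = 5
A_mean = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # E[A]
b_mean = rng.standard_normal(n)                         # E[b]
lam = 0.01

def sample_A():
    return A_mean + 0.1 * rng.standard_normal((n, n))

def sample_b():
    return b_mean + 0.1 * rng.standard_normal(n)

def prox_l1(v, t):
    # proximal operator of t * lam * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x = np.zeros(n)
y = np.zeros(n)
for k in range(1, 20001):
    alpha = beta = 1.0 / k
    # x-step: grad g_{w_k}(x_k)^T grad f(y_k), with f(y) = ||y||^2
    grad = sample_A().T @ (2.0 * y)
    x_new = prox_l1(x - alpha * grad, alpha)
    # z-step (extrapolation) and y-step (smoothing)
    z = (1.0 - 1.0 / beta) * x + (1.0 / beta) * x_new
    y = (1.0 - beta) * y + beta * (sample_A() @ z - sample_b())
    x = x_new

residual = float(np.linalg.norm(A_mean @ x - b_mean))
print("final residual:", residual)  # should shrink toward 0
```

The key design choice on display is that the gradient of the outer function is evaluated at the tracked estimate $y_k$ rather than at a fresh (and biased) single-sample estimate of $g(x_k)$.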
At iteration $k$, the running estimate $y_k$ of $g(x_k)$ is obtained using a weighted smoothing scheme, corresponding to the $y$-step, while the new query point $z_{k+1}$ is obtained through extrapolation, corresponding to the $z$-step. The updates are constructed in such a way that $y_k$ is a nearly unbiased estimate of $g(x_k)$. To see how the extrapolation-smoothing scheme works, let the weights be

$$\xi^{(k)}_t = \begin{cases} \beta_t \prod_{i=t+1}^{k}(1 - \beta_i), & \text{if } k > t \ge 0, \\ \beta_k, & \text{if } k = t \ge 0. \end{cases} \tag{3}$$

Then we can verify the following important relations:

$$x_{k+1} = \sum_{t=0}^{k}\xi^{(k)}_t z_{t+1}, \qquad y_{k+1} = \sum_{t=0}^{k}\xi^{(k)}_t g_{w_{t+1}}(z_{t+1}),$$

which essentially say that $x_{k+1}$ is a damped weighted average of $\{z_{t+1}\}_{t=0}^{k}$ and $y_{k+1}$ is a damped weighted average of $\{g_{w_{t+1}}(z_{t+1})\}_{t=0}^{k}$.

An Analytical Example of the Extrapolation-Smoothing Scheme. Now consider the special case where $g_w(\cdot)$ is always a linear mapping $g_w(z) = A_w z + b_w$ and $\beta_k = 1/(k+1)$. We can verify that $\xi^{(k)}_t = 1/(k+1)$ for all $t$. Then we have

$$g(x_{k+1}) = \frac{1}{k+1}\sum_{t=0}^{k}\big(\mathbb{E}[A_w]z_{t+1} + \mathbb{E}[b_w]\big), \qquad y_{k+1} = \frac{1}{k+1}\sum_{t=0}^{k}A_{w_{t+1}}z_{t+1} + \frac{1}{k+1}\sum_{t=0}^{k}b_{w_{t+1}}.$$

In this way, we can see that the scaled error

$$(k+1)\big(y_{k+1} - g(x_{k+1})\big) = \sum_{t=0}^{k}\big(A_{w_{t+1}} - \mathbb{E}[A_w]\big)z_{t+1} + \sum_{t=0}^{k}\big(b_{w_{t+1}} - \mathbb{E}[b_w]\big)$$

is a zero-mean and zero-drift martingale. Under additional technical assumptions, we have

$$\mathbb{E}\big[\|y_{k+1} - g(x_{k+1})\|^2\big] \le O\Big(\frac{1}{k}\Big).$$

Note that the zero-drift property of the error martingale is the key to the fast convergence rate. The zero-drift property comes from the near-unbiasedness of $y_k$, which is due to the special construction of the extrapolation-smoothing scheme. In the more general case where $g_w$ is not necessarily linear, we can use a similar argument to show that $y_k$ is a nearly unbiased estimate of $g(x_k)$. As a result, the extrapolation-smoothing $(y, z)$-step ensures that $y_k$ tracks the unknown quantity $g(x_k)$ efficiently.

3. Main Results

We present our main theoretical results in this section. For ease of presentation, we defer all detailed proofs of the theorems and technical lemmas to the Appendix. Let us begin by stating our assumptions.
Note that all assumptions involving random realizations of $v, w$ hold with probability 1.

Assumption 1 The samples generated by the SO are unbiased in the following sense:
1. $\mathbb{E}_{\{w_k, v_k\}}\big(\nabla g_{w_k}(x)^\top \nabla f_{v_k}(y)\big) = \nabla g(x)^\top \nabla f(y)$ for all $k = 1, 2, \ldots, K$ and all $x, y$.
2. $\mathbb{E}_{w_k}\big(g_{w_k}(x)\big) = g(x)$ for all $x$.

Note that $w_k$ and $v_k$ are not necessarily independent.

Assumption 2 The sample gradients and values generated by the SO satisfy
$$\mathbb{E}_w\big(\|g_w(x) - g(x)\|^2\big) \le \sigma^2 \quad \forall x.$$

Assumption 3 The sample gradients generated by the SO are uniformly bounded, and the penalty function $R$ has bounded subgradients:
$$\|\nabla f_v(x)\| \le \Theta(1), \quad \|\nabla g_w(x)\| \le \Theta(1), \quad \|\partial R(x)\| \le \Theta(1) \quad \forall x, w, v.$$

Assumption 4 There exist $L_F, L_f, L_g > 0$ such that
1. $F(z) - F(x) \le \langle \nabla F(x), z - x\rangle + \frac{L_F}{2}\|z - x\|^2$ for all $x, z$.
2. $\|\nabla f_v(y) - \nabla f_v(w)\| \le L_f\|y - w\|$ for all $y, w, v$.
3. $\|g(x) - g(z) - \nabla g(z)^\top(x - z)\| \le \frac{L_g}{2}\|x - z\|^2$ for all $x, z$.

Condition 1 of Assumption 4 requires that the function $F(\cdot)$ is $L_F$-smooth. Condition 2 requires that $\nabla f_v$ is Lipschitz continuous for all $v$. Condition 3 requires that the function $g(\cdot)$ is $L_g$-smooth. Note that the case where $L_f = 0$ or $L_g = 0$ coincides with the special case where either $f_v$ or $g$ is linear, respectively. The constants $L_F, L_f, L_g$ jointly characterize the smoothness and complexity of stochastic composition optimization. They do not admit any straightforward dependence relation.

Our first main result concerns general optimization problems which are not necessarily convex.

Theorem 1 (Smooth (Nonconvex) Optimization) Let Assumptions 1, 2, 3, and 4 hold. Denote $F(x) := (\mathbb{E}_v(f_v) \circ \mathbb{E}_w(g_w))(x)$ for short, and suppose that $R(x) = 0$ in (1) and that $\mathbb{E}(F(x_k))$ is bounded from above. Choose $\alpha_k = k^{-a}$ and $\beta_k = 2k^{-b}$, where $a \in (0, 1)$ and $b \in (0, 1)$, in Algorithm 1. Then we have

$$\frac{\sum_{k=1}^{K}\mathbb{E}\big(\|\nabla F(x_k)\|^2\big)}{K} \le O\Big(K^{a-1} + L_f^2 L_g K^{4b-4a} I^{\log K}_{4a-4b=1} + L_f^2 K^{-b} + K^{-a}\Big). \tag{4}$$

If $L_g \neq 0$ and $L_f \neq 0$, choose $a = 5/9$ and $b = 4/9$, yielding

$$\frac{\sum_{k=1}^{K}\mathbb{E}\big(\|\nabla F(x_k)\|^2\big)}{K} \le O(K^{-4/9}). \tag{5}$$

If $L_g = 0$ or $L_f = 0$, choose $a = b = 1/2$, yielding

$$\frac{\sum_{k=1}^{K}\mathbb{E}\big(\|\nabla F(x_k)\|^2\big)}{K} \le O(K^{-1/2}). \tag{6}$$
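The exponent choices in Theorem 1 come from balancing the competing terms of the bound. A quick numerical check (illustrative; it treats $L_f, L_g$ as nonzero constants and ignores the logarithmic factor) that $a = 5/9$, $b = 4/9$ equalizes the decisive exponents, and that no other choice on a grid does better:

```python
# Illustrative check of the step-size balancing behind Theorem 1: with
# alpha_k = k^(-a) and beta_k ~ k^(-b), the bound (4) behaves like K^e with
# e = max(a - 1, 4b - 4a, -b, -a), treating L_f and L_g as nonzero constants
# and ignoring the log factor.

def rate_exponent(a, b):
    return max(a - 1.0, 4.0 * b - 4.0 * a, -b, -a)

# exponent at the paper's choice a = 5/9, b = 4/9
assert abs(rate_exponent(5.0 / 9.0, 4.0 / 9.0) - (-4.0 / 9.0)) < 1e-12

# grid search: no (a, b) on a fine grid achieves a smaller exponent
grid = [(i / 90.0, j / 90.0) for i in range(1, 90) for j in range(1, 90)]
best = min(rate_exponent(a, b) for a, b in grid)
assert best >= -4.0 / 9.0 - 1e-12
print("best exponent on grid:", best)
```

At $a = 5/9$, $b = 4/9$ the terms $K^{a-1}$, $K^{4b-4a}$, and $K^{-b}$ all equal $K^{-4/9}$, which is why the rate cannot be improved by tuning $a, b$ alone.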
The result of Theorem 1 strictly improves the best-known results given by Wang et al. (2017). First, the result in (5) improves the finite-sample error bound from $O(k^{-2/7})$ to $O(k^{-4/9})$ for general convex and nonconvex optimization. This improves the best known convergence rate and provides a new benchmark for the stochastic composition problem. Note that it is possible to relax the condition "$\mathbb{E}(F(x_k))$ is bounded from above" in Theorem 1; however, this would make the analysis more cumbersome and yield an additional $\log K$ term in the error bound.

Our second main result concerns strongly convex objective functions. We say that the objective function $H$ is optimally strongly convex with parameter $\lambda > 0$ if

$$H(x) - H\big(P_{X^*}(x)\big) \ge \lambda\big\|x - P_{X^*}(x)\big\|^2 \quad \forall x \tag{7}$$

(see Liu and Wright (2015)). Note that any strongly convex function is optimally strongly convex, but the reverse does not hold. For example, the objective function (2) in on-policy reinforcement learning is always optimally strongly convex (even if $\mathbb{E}(A)$ is a rank-deficient matrix), but not necessarily strongly convex. We point out that there are other classes of functions that are not strongly convex but for which first-order algorithms still achieve linear convergence; see Karimi et al. (2016) and Necoara et al. (2015) for examples. For ease of presentation, in this work we only consider optimally strongly convex functions.

Theorem 2 (Strongly Convex Optimization) Suppose that the objective function $H(x)$ in (1) is optimally strongly convex with parameter $\lambda > 0$ as defined in (7). Set $\alpha_k = C_a k^{-a}$ and $\beta_k = C_b k^{-b}$, where $C_a > 4\lambda$, $C_b > 2$, $a \in (0, 1]$, and $b \in (0, 1]$, in Algorithm 1. Under Assumptions 1, 2, 3, and 4, we have

$$\mathbb{E}\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O\Big(K^{-a} + L_f^2 L_g K^{-4a+4b} + L_f^2 K^{-b}\Big). \tag{8}$$

If $L_g \neq 0$ and $L_f \neq 0$, choose $a = 1$ and $b = 4/5$, yielding

$$\mathbb{E}\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O(K^{-4/5}). \tag{9}$$

If $L_g = 0$ or $L_f = 0$, choose $a = 1$ and $b = 1$, yielding

$$\mathbb{E}\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O(K^{-1}). \tag{10}$$

Let us discuss the results of Theorem 2.
In the general case where $L_f \neq 0$ and $L_g \neq 0$, the convergence rate in (9) is consistent with the result of Wang et al. (2017). Now consider the special case where $L_g = 0$, i.e., the inner mapping is linear. This result finds an immediate application to the Bellman error minimization problem (2), which arises from reinforcement learning (with or without $\ell_1$-norm regularization). The proposed ASC-PG algorithm is able to achieve the optimal rate $O(1/K)$ without any extra assumption on the outer function $f_v$. To the best of our knowledge, this is the best (also optimal) sample-error complexity for on-policy reinforcement learning.

Theorem 3 (General Convex Optimization) Suppose that both $F(x)$ and $H(x)$ are convex and the feasible set $X$ is bounded. Set $\alpha_k = \Theta(k^{-a})$ and $\beta_k = \Theta(k^{-b})$ with $a \in (0, 1)$ and $b \in (0, 1]$ in Algorithm 1. Under Assumptions 1, 2, 3, and 4, we have for any $c > 0$:

$$H(\hat{x}_K) - H^* \le O\Big(K^{a-1} + K^{a-c} I^{\log K}_{c=1} + K^{-a} + L_f^2 L_g K^{-5a+4b+c} I^{\log K}_{6a-4b-c=1} + L_f^2 K^{-a-b+c} I^{\log K}_{2a+b-c=1}\Big), \tag{11}$$

where $\hat{x}_K := \frac{\sum_{k=2}^{K+1}\alpha_k x_k}{\sum_{k=2}^{K+1}\alpha_k}$.

If $L_g \neq 0$ and $L_f \neq 0$, choose $a = 5/7$, $b = 4/7$, and $c = 1$, yielding

$$H(\hat{x}_K) - H^* \le O(K^{-2/7}). \tag{12}$$

If $L_g = 0$ or $L_f = 0$, choose $a = 1/2$, $b = 1$, and $c = 1$, yielding

$$H(\hat{x}_K) - H^* \le O(K^{-1/2}\log K). \tag{13}$$

Theorem 3 gives stronger results than the best-known results of Wang et al. (2017) and requires milder assumptions. In the general case where $L_f \neq 0$ and $L_g \neq 0$, the convergence rate in (12) matches the result of Wang et al. (2017). When either $L_g = 0$ (i.e., the inner mapping is linear) or $L_f = 0$ (i.e., the outer mapping is linear), the proposed ASC-PG algorithm is able to achieve the optimal rate $O(1/\sqrt{K})$ up to a logarithmic factor.

Remarks. Theorems 1, 2, and 3 give important implications about the special cases where $L_f = 0$ or $L_g = 0$. In these cases, we argue that our convergence rate (10) is "optimal" with respect to the sample size $K$.
To see this, it is worth pointing out that the $O(1/K)$ rate of convergence is optimal for the strongly convex expectation minimization problem. Because the expectation minimization problem is a special case of problem (1), the $O(1/K)$ convergence rate must be optimal for the stochastic composition problem too. Similarly, the $O(1/\sqrt{K})$ rate of convergence is also optimal for general convex optimization, by the same argument.

Consider the case where $L_f = 0$, which means that the outer function $f_v(\cdot)$ is linear with probability 1. Then the stochastic composition problem (1) reduces to an expectation minimization problem, since

$$(\mathbb{E}_v f_v \circ \mathbb{E}_w g_w)(x) = \mathbb{E}_v\big(f_v(\mathbb{E}_w g_w(x))\big) = \mathbb{E}_v \mathbb{E}_w (f_v \circ g_w)(x).$$

Therefore, it makes perfect sense to obtain the optimal convergence rate.

Consider the case where $L_g = 0$, which means that the inner function $g(\cdot)$ is a linear mapping. This result is quite surprising: even if $g(\cdot)$ is a linear mapping, problem (1) does not reduce to an expectation minimization problem. However, ASC-PG still achieves the optimal convergence rate. This suggests that, when inner linearity holds, the stochastic composition problem (1) is not fundamentally more difficult than the expectation minimization problem.

The convergence rate results unveiled in Theorems 1 and 2 are the best known results for the composition problem. We believe that they provide important new results and insights into the complexity of the stochastic composition problem.

4. Application to Reinforcement Learning

In this section, we apply the proposed ASC-PG algorithm to conduct policy value evaluation in reinforcement learning by solving Bellman equations. Suppose that there are in total $S$ states. Let the policy of interest be $\pi$. Denote the value function of the states by $V^\pi \in \Re^S$, where $V^\pi(s)$ denotes the value of being at state $s$ under policy $\pi$.
The Bellman equation of the problem is

$$V^\pi(s_1) = \mathbb{E}_\pi\{r_{s_1,s_2} + \gamma\, V^\pi(s_2) \mid s_1\} \quad \text{for all } s_1, s_2 \in \{1, \ldots, S\},$$

where $r_{s_1,s_2}$ denotes the reward of moving from state $s_1$ to state $s_2$, and the expectation is taken over all possible future states $s_2$ conditioned on the current state $s_1$ and the policy $\pi$. We have that the solution $V^* \in \Re^S$ to the above equation satisfies $V^* = V^\pi$. A moderately large $S$ makes solving the Bellman equation directly impractical. To resolve this curse of dimensionality, in many practical applications we approximate the value of each state by a linear map of its feature vector $\phi_s \in \Re^d$, where $d < S$, to reduce the dimension. In particular, we assume that $V^\pi(s) \approx \phi_s^\top w^*$ for some $w^* \in \Re^d$. To compute $w^*$, we formulate the Bellman residual minimization problem

$$\min_w \; \sum_{s=1}^{S}\big(\phi_s^\top w - q_{\pi,s}(w)\big)^2, \quad \text{where} \quad q_{\pi,s}(w) = \mathbb{E}_\pi\{r_{s,s'} + \gamma\, \phi_{s'}^\top w\} = \sum_{s'} P^\pi_{ss'}\big(r_{s,s'} + \gamma\, \phi_{s'}^\top w\big),$$

$\gamma < 1$ is a discount factor, and $r_{s,s'}$ is the random reward of the transition from state $s$ to state $s'$. It is clearly seen that the proposed ASC-PG algorithm can be directly applied to solve this problem, where we take

$$g(w) = \big(\phi_1^\top w,\; q_{\pi,1}(w),\; \ldots,\; \phi_S^\top w,\; q_{\pi,S}(w)\big) \in \Re^{2S},$$
$$f\big(\phi_1^\top w, q_{\pi,1}(w), \ldots, \phi_S^\top w, q_{\pi,S}(w)\big) = \sum_{s=1}^{S}\big(\phi_s^\top w - q_{\pi,s}(w)\big)^2 \in \Re.$$

We point out that the $g(\cdot)$ function here is a linear map. By our theoretical analysis, we expect to achieve the faster $O(1/k)$ rate of convergence, which is justified empirically in our later simulation study.

Figure 1: Empirical convergence rate of the ASC-PG algorithm and the GTD2-MP algorithm under Experiment 1, averaged over 100 runs, where $w_k$ denotes the solution at the $k$-th iteration. (The panels plot $\|w_k - w^*\|$ and $\|\Phi w_k - \Phi w^*\|$ against $k$, and their logarithms against $\log(k)$, for ASC-PG, a-SCGD, and GTD2-MP.)
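The encoding above can be checked on a small synthetic chain (a 3-state toy with random transitions, rewards, and features; all numbers are illustrative assumptions, not the paper's examples): the Bellman residual objective coincides with the composed form $f(g(w))$, and $g$ is affine in $w$.

```python
import numpy as np

# A small synthetic check (3-state toy chain; illustrative only) that the
# Bellman residual objective equals the composed form f(g(w)) with
#   g(w) = (phi_1^T w, q_{pi,1}(w), ..., phi_S^T w, q_{pi,S}(w)).

rng = np.random.default_rng(0)
S, d, gamma = 3, 2, 0.9
P = rng.uniform(size=(S, S))
P /= P.sum(axis=1, keepdims=True)      # transition matrix P^pi
r = rng.uniform(size=(S, S))           # expected rewards r_{s,s'}
Phi = rng.standard_normal((S, d))      # feature vectors phi_s as rows

def q(w):
    # q_{pi,s}(w) = sum_{s'} P[s, s'] * (r[s, s'] + gamma * phi_{s'}^T w)
    return (P * r).sum(axis=1) + gamma * (P @ (Phi @ w))

def bellman_residual(w):
    return float(((Phi @ w - q(w)) ** 2).sum())

def composed(w):
    g = np.empty(2 * S)
    g[0::2] = Phi @ w                  # entries phi_s^T w
    g[1::2] = q(w)                     # entries q_{pi,s}(w)
    return float(((g[0::2] - g[1::2]) ** 2).sum())  # outer f

w = rng.standard_normal(d)
assert np.isclose(bellman_residual(w), composed(w))
# q(w) is affine in w, so g is an affine map: the "inner linearity" case.
print("residual at a random w:", bellman_residual(w))
```

Since $q_{\pi,s}(w)$ is affine in $w$, the inner map is linear in the sense of the theorems, which is what puts this problem in the single-timescale, $O(1/K)$ regime.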
We consider three experiments. In the first two experiments, we compare our proposed ASC-PG algorithm with the accelerated SCGD (a-SCGD) algorithm (Wang et al., 2017) and the recently proposed GTD2-MP algorithm (Liu et al., 2015). In the first two experiments, we do not add any regularization term, i.e., $R(\cdot) = 0$. We point out that in Experiments 1 and 2, since there is no penalization term, the ASC-PG algorithm essentially matches the a-SCGD algorithm in Wang et al. (2017). However, note that the choices of stepsizes $\{\alpha_k\}$ and $\{\beta_k\}$ differ: for ASC-PG, $\alpha_k = k^{-1}$ and $\beta_k = k^{-1}$, while for a-SCGD, $\alpha_k = k^{-1}$ and $\beta_k = k^{-4/5}$. In the third experiment, we add an $\ell_1$-penalization term $\lambda\|w\|_1$. In all cases, we choose the step sizes via comparison studies, as in Dann et al. (2014).

Experiment 1: We use Baird's example (Baird, 1995), a well-known example for testing off-policy convergent algorithms. This example contains $S = 6$ states and two actions at each state. We refer the readers to Baird (1995) for more detailed information on the example.

Experiment 2: We generate a Markov decision problem (MDP) using a setup similar to that of White and White (2016). In each instance, we randomly generate an MDP which contains $S = 100$ states and three actions at each state. Given one state and one action, the agent can move to one of four possible next states. In our simulation, we generate the transition probabilities for each MDP instance uniformly from $[0, 1]$ and normalize the sum of the transitions to one, and we also generate the reward for each transition uniformly from $[0, 1]$.

Experiment 3: We generate the data in the same way as in Experiment 2, except that we have a larger, $d = 100$-dimensional feature space, where only the first 4 components of $w^*$ are non-zero. We add an $\ell_1$-regularization term, $\lambda\|w\|_1$, to the objective function.

Denote by $w_k$ the solution at the $k$-th iteration.
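A sketch of the random MDP generation just described. The fan-out of four next states and the uniform draws follow the text; the uniformly random evaluation policy, the seed, and the array layout are illustrative assumptions.

```python
import numpy as np

# Sketch of the random MDP generator described for Experiment 2. The fan-out
# of four next states and the uniform [0, 1] draws follow the text; the
# uniformly random policy, seed, and data layout are illustrative assumptions.

rng = np.random.default_rng(0)
S, A, fanout = 100, 3, 4

P = np.zeros((A, S, S))   # P[a, s, s'] = transition probability
R = np.zeros((A, S, S))   # R[a, s, s'] = reward of the transition
for a in range(A):
    for s in range(S):
        nxt = rng.choice(S, size=fanout, replace=False)  # 4 reachable states
        w = rng.uniform(0.0, 1.0, size=fanout)
        P[a, s, nxt] = w / w.sum()                       # normalize to sum 1
        R[a, s, nxt] = rng.uniform(0.0, 1.0, size=fanout)

# A uniformly random policy pi induces P^pi and r^pi as in Section 1.1.
P_pi = P.mean(axis=0)
r_pi = (P * R).sum(axis=2).mean(axis=0)   # expected one-step reward per state
assert np.allclose(P_pi.sum(axis=1), 1.0)
print(P_pi.shape, r_pi.shape)
```

The induced pair $(P^\pi, r^\pi)$ is exactly what the sampling oracle of Section 2 would draw noisy realizations of when solving problem (2) for this MDP.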
For the first two experiments, we report the empirical convergence performance $\|w_k - w^*\|$ and $\|\Phi w_k - \Phi w^*\|$, where $\Phi = (\phi_1, \ldots, \phi_S)^\top \in \Re^{S\times d}$ and $\Phi w^* = V^*$, and all $w_k$'s are averaged over 100 runs, in the first two subfigures of Figures 1 and 2. It is seen that the ASC-PG algorithm achieves the fastest empirical convergence rate in both experiments. To further evaluate our theoretical results, we plot $\log(k)$ vs. $\log(\|w_k - w^*\|)$ (and $\log(\|\Phi w_k - \Phi w^*\|)$), averaged over 100 runs, for the first two experiments in the second two subfigures of Figures 1 and 2. The empirical results further support our theoretical analysis that $\|w_k - w^*\|^2 = O(1/k)$ for the ASC-PG algorithm when $g(\cdot)$ is a linear mapping. For Experiment 3, as the optimal solution is unknown, we run the ASC-PG algorithm for one million iterations, take the corresponding solution as the optimal solution $\hat{w}^*$, and report $\|w_k - \hat{w}^*\|$ and $\|\Phi w_k - \Phi\hat{w}^*\|$ averaged over 100 runs in Figure 3. It is seen that the ASC-PG algorithm achieves a fast empirical convergence rate.

Figure 2: Empirical convergence rate of the ASC-PG algorithm and the GTD2-MP algorithm under Experiment 2, averaged over 100 runs, where $w_k$ denotes the solution at the $k$-th iteration. (The panels plot $\|w_k - w^*\|$ and $\|\Phi w_k - \Phi w^*\|$ against $k$, and their logarithms against $\log(k)$, for ASC-PG, a-SCGD, and GTD2-MP.)

Figure 3: Empirical convergence rate of the ASC-PG algorithm with the $\ell_1$-regularization term $\lambda\|w\|_1$ under Experiment 3, averaged over 100 runs, where $w_k$ denotes the solution at the $k$-th iteration. (The panels plot the convergence for $\lambda = 10^{-3}$ and $\lambda = 5\times 10^{-4}$.)

5. Conclusion

We develop the first proximal gradient method for stochastic composition optimization with nonsmooth penalties. The algorithm updates by interacting with a stochastic first-order oracle.
Finite-sample convergence rates are established under a variety of assumptions, which provide new rate benchmarks that improve the best-known results prior to this paper. Application of ASC-PG to reinforcement learning leads to a new on-policy learning algorithm, which achieves faster convergence than the best known algorithms. For future research, it remains open whether or under what circumstances the current $O(K^{-4/9})$ rate can be further improved. Another direction is to customize and adapt the algorithm and analysis to more specific problems arising from reinforcement learning and risk-averse optimization, in order to fully exploit the potential of the proposed method.

Acknowledgments

We thank the Editor, the Action Editor, and two anonymous reviewers for their careful consideration and invaluable suggestions, which helped us further improve the paper. This project is in part supported by NSF grants CCF1718513, CNS-1548078, DMS-10009141, CMMI-1653435, and by the National Institute on Drug Abuse grant P50 DA039838. The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Drug Abuse or the National Institutes of Health.

Appendix

This appendix includes all detailed proofs for the theorems in the main text. The key idea in deriving the rate of convergence of our algorithm is to find a sequence of random vectors, associated with the output solutions at each iteration, which converges at a fast rate. To better analyze the convergence process, we construct two auxiliary sequences of random variables $\{m_k\}$ and $\{n_k\}$, defined at the beginning of the proof of Lemma 7, that converge jointly to 0 at a fast rate. The convergence of $\{m_k\}$ and $\{n_k\}$ further implies that $\{x_k\}$ and $\{y_k\}$ converge rapidly to their respective limits. Then we are able to improve the convergence rate of $y_k$.
Intuitively, we show that the improved iterate $y_k$ tracks the unknown quantity $\mathbb{E}[g(x_k)]$ more closely than in previous work. This explains why our new algorithm achieves an improved rate of convergence and sample complexity.

A. Proof of Theorem 1

In this section, we provide the detailed proof of Theorem 1. We start with some technical lemmas. The first one provides a convergence result for the generated solutions: we bound the term $\|x_k - x_{k+1}\|^2$ as $k$ increases.

Lemma 4 Under Assumption 3, two subsequent iterates in Algorithm 1 satisfy
$$\|x_k - x_{k+1}\|^2 \le \Theta(\alpha_k^2).$$

Proof From the definition of the proximal operation, we have

$$x_{k+1} = \mathrm{prox}_{\alpha_k R(\cdot)}\big(x_k - \alpha_k \nabla g_{w_k}(x_k)^\top \nabla f_{v_k}(y_k)\big) = \arg\min_x \; \frac{1}{2}\big\|x - x_k + \alpha_k \nabla g_{w_k}(x_k)^\top \nabla f_{v_k}(y_k)\big\|^2 + \alpha_k R(x).$$

The optimality condition yields the following equality:

$$x_{k+1} - x_k = -\alpha_k\big(\nabla g_{w_k}(x_k)^\top \nabla f_{v_k}(y_k) + s_{k+1}\big), \tag{14}$$

where $s_{k+1} \in \partial R(x_{k+1})$ is some vector in the subdifferential set of $R(\cdot)$ at $x_{k+1}$. Then apply the boundedness condition in Assumption 3 to yield

$$\|x_{k+1} - x_k\| = \alpha_k\big\|\nabla g_{w_k}(x_k)^\top \nabla f_{v_k}(y_k) + s_{k+1}\big\| \le \alpha_k\big(\|\nabla g_{w_k}(x_k)\|\,\|\nabla f_{v_k}(y_k)\| + \|s_{k+1}\|\big) \overset{\text{(Assumption 3)}}{\le} \Theta(1)\alpha_k,$$

which implies the claim. $\blacksquare$

Next, we provide a simple bound which quantifies the difference between $\nabla g_w(x)^\top\nabla f_v(g(x))$ and $\nabla g_w(x)^\top\nabla f_v(y)$ in terms of $\|y - g(x)\|$.

Lemma 5 Under Assumptions 3 and 4, we have
$$\big\|\nabla g_w(x)^\top \nabla f_v(g(x)) - \nabla g_w(x)^\top \nabla f_v(y)\big\| \le \Theta\big(L_f\|y - g(x)\|\big).$$

Proof We have

$$\big\|\nabla g_w(x)^\top \nabla f_v(g(x)) - \nabla g_w(x)^\top \nabla f_v(y)\big\| \le \|\nabla g_w(x)\|\,\big\|\nabla f_v(g(x)) - \nabla f_v(y)\big\| \overset{\text{(Assumption 3)}}{\le} \Theta(1)\big\|\nabla f_v(g(x)) - \nabla f_v(y)\big\| \overset{\text{(Assumption 4)}}{\le} \Theta(L_f)\|y - g(x)\|.$$

This completes the proof. $\blacksquare$

Next, we prove two lemmas which provide the convergence rate of $\mathbb{E}(\|y_k - g(x_k)\|^2)$.

Lemma 6 Given two sequences of positive scalars $\{s_k\}_{k=1}^\infty$ and $\{\gamma_k\}_{k=1}^\infty$ satisfying

$$s_{k+1} \le (1 - \gamma_k + C_1\gamma_k^2)s_k + C_2 k^{-a}, \tag{15}$$

where $C_1 \ge 0$, $C_2 \ge 0$, and $a \ge 0$, and letting $\gamma_k = C_3 k^{-b}$ where $b \in (0, 1]$ and $C_3 > 2$, the sequence can be bounded by $s_k \le Ck^{-c}$, where $C$ and $c$ are defined as

$$C := \max\Big\{\max_{k \le (C_1C_3^2)^{1/b}+1}\; s_k k^{c},\;\; \frac{C_2}{C_3 - 2}\Big\} \quad \text{and} \quad c := a - b.$$

In other words, we have $s_k \le \Theta(k^{-a+b})$.
Proof We prove the claim by induction. First, it is easy to verify from the definition of $C$ that the claim holds for all $k \le (C_1C_3^2)^{1/b}$. For the induction step, suppose $s_k \le Ck^{-c}$ for some $k > (C_1C_3^2)^{1/b}$; we show $s_{k+1} \le C(k+1)^{-c}$. We have
\[ s_{k+1} \le (1 - \gamma_k + C_1\gamma_k^2)s_k + C_2k^{-a} \le (1 - C_3k^{-b} + C_1C_3^2k^{-2b})Ck^{-c} + C_2k^{-a} = Ck^{-c} - CC_3k^{-b-c} + CC_1C_3^2k^{-2b-c} + C_2k^{-a}. \tag{16} \]
To prove that (16) is bounded by $C(k+1)^{-c}$, it suffices to show that
\[ \Delta := (k+1)^{-c} - k^{-c} + C_3k^{-b-c} - C_1C_3^2k^{-2b-c} > 0 \quad\text{and}\quad C \ge \frac{C_2k^{-a}}{\Delta}. \]
From the convexity of the function $h(t) = t^{-c}$, we have the inequality $(k+1)^{-c} - k^{-c} \ge -ck^{-c-1}$. Therefore
\[ \Delta \ge -ck^{-c-1} + C_3k^{-b-c} - C_1C_3^2k^{-2b-c} \ge (C_3 - 2)k^{-b-c} > 0, \]
where the second inequality uses $b \le 1$ and $k > (C_1C_3^2)^{1/b}$, and the last uses $C_3 > 2$. To verify the second condition, note that
\[ \frac{C_2k^{-a}}{\Delta} \le \frac{C_2}{C_3-2}k^{-a+b+c} = \frac{C_2}{C_3-2} \le C, \]
where the equality uses $c = a - b$ and the last inequality is due to the definition of $C$. This completes the proof.

Lemma 7 Choose $\beta_k = C_bk^{-b}$ with $C_b > 2$ and $b \in (0, 1]$, and let $\alpha_k = C_ak^{-a}$. Under Assumptions 1 and 2, we have
\[ \mathbb{E}(\|y_k - g(x_k)\|^2) \le L_g\Theta(k^{-4a+4b}) + \Theta(k^{-b}). \tag{17} \]

Proof Denote for short
\[ m_{k+1} := \sum_{t=0}^{k}\xi_t^{(k)}\|x_{k+1} - z_{t+1}\|, \qquad n_{k+1} := \Big\|\sum_{t=0}^{k}\xi_t^{(k)}\big(g_{w_{t+1}}(z_{t+1}) - g(z_{t+1})\big)\Big\|. \]
From Lemma 10 in Wang et al. (2017), we have
\[ \|y_k - g(x_k)\|^2 \le L_gm_k^2 + 2n_k^2. \tag{18} \]
From Lemma 11 in Wang et al. (2017), $m_{k+1}$ can be bounded by
\[ m_{k+1} \le (1 - \beta_k)m_k + \beta_kq_k + \frac{2}{\beta_k}\|x_k - x_{k+1}\|^2, \tag{19} \]
where $q_k$ satisfies
\[ q_{k+1} \le (1 - \beta_k)q_k + \frac{4}{\beta_k}\|x_{k+1} - x_k\|^2 \overset{\text{(Lemma 4)}}{\le} (1 - \beta_k)q_k + \Theta(1)\frac{\alpha_k^2}{\beta_k} = (1 - \beta_k)q_k + \Theta(k^{-2a+b}). \]
Taking $s_k = q_k$ and $\gamma_k = \beta_k$ in Lemma 6 implies the decay rate
\[ q_k \le \Theta(k^{-2a+b+b}) = \Theta(k^{-2a+2b}). \]
Together with (19), we have
\[ m_{k+1} \le (1 - \beta_k)m_k + \beta_kq_k + \frac{2}{\beta_k}\|x_k - x_{k+1}\|^2 \le (1 - \beta_k)m_k + \Theta(k^{-2a+b}) + \Theta(k^{-2a+b}) = (1 - \beta_k)m_k + \Theta(k^{-2a+b}), \]
which, by using Lemma 6 again, leads to
\[ m_k \le \Theta(k^{-2a+2b}) \quad\text{and thus}\quad m_k^2 \le \Theta(k^{-4a+4b}). \tag{20} \]
We next estimate the upper bound for $\mathbb{E}(n_k^2)$. From Lemma 11 in Wang et al. (2017), $\mathbb{E}(n_k^2)$ satisfies
\[ \mathbb{E}(n_{k+1}^2) \le (1 - \beta_k)^2\mathbb{E}(n_k^2) + \beta_k^2\sigma_g^2 = (1 - 2\beta_k + \beta_k^2)\mathbb{E}(n_k^2) + \beta_k^2\sigma_g^2. \]
By using Lemma 6 once more, we have
\[ \mathbb{E}(n_k^2) \le \Theta(k^{-b}). \tag{21} \]
We are now ready to estimate the upper bound of $\mathbb{E}(\|y_k - g(x_k)\|^2)$ following (18):
\[ \mathbb{E}(\|y_k - g(x_k)\|^2) \le L_g\mathbb{E}(m_k^2) + 2\mathbb{E}(n_k^2) \overset{(20)+(21)}{\le} L_g\Theta(k^{-4a+4b}) + \Theta(k^{-b}). \]
This completes the proof.

Now we are ready to prove Theorem 1.

Proof of Theorem 1 From the Lipschitzian condition in Assumption 4, we have
\[ F(x_{k+1}) \le F(x_k) + \langle\nabla F(x_k),\, x_{k+1} - x_k\rangle + \frac{L_F}{2}\|x_{k+1} - x_k\|^2 \overset{\text{(Lemma 4)}}{\le} F(x_k) - \alpha_k\langle\nabla F(x_k),\, \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\rangle + \Theta(\alpha_k^2) \]
\[ = F(x_k) - \alpha_k\|\nabla F(x_k)\|^2 + \alpha_k\underbrace{\langle\nabla F(x_k),\, \nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\rangle}_{=:T} + \Theta(\alpha_k^2). \tag{22} \]
Next we estimate the upper bound for $\mathbb{E}(T)$:
\[ \mathbb{E}(T) = \mathbb{E}\big(\langle\nabla F(x_k),\, \nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k))\rangle\big) + \mathbb{E}\big(\langle\nabla F(x_k),\, \nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\rangle\big) \]
\[ \overset{\text{(Assumption 1)}}{=} \mathbb{E}\big(\langle\nabla F(x_k),\, \nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\rangle\big) \]
\[ \le \frac{1}{2}\mathbb{E}(\|\nabla F(x_k)\|^2) + \frac{1}{2}\mathbb{E}\big(\|\nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\|^2\big) \overset{\text{(Lemma 5)}}{\le} \frac{1}{2}\mathbb{E}(\|\nabla F(x_k)\|^2) + \Theta(L_f^2)\,\mathbb{E}(\|y_k - g(x_k)\|^2). \]
Taking expectation on both sides of (22) and substituting $\mathbb{E}(T)$ by its upper bound,
\[ \frac{\alpha_k}{2}\mathbb{E}(\|\nabla F(x_k)\|^2) \le \mathbb{E}(F(x_k)) - \mathbb{E}(F(x_{k+1})) + \Theta(L_f^2\alpha_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \Theta(\alpha_k^2) \]
\[ \overset{\text{(Lemma 7)}}{\le} \mathbb{E}(F(x_k)) - \mathbb{E}(F(x_{k+1})) + L_g\Theta(L_f^2\alpha_k)\Theta(k^{-4a+4b}) + \Theta(L_f^2\alpha_kk^{-b}) + \Theta(\alpha_k^2) = \mathbb{E}(F(x_k)) - \mathbb{E}(F(x_{k+1})) + L_f^2L_g\Theta(k^{-5a+4b}) + L_f^2\Theta(k^{-a-b}) + \Theta(k^{-2a}), \]
which suggests that
\[ \mathbb{E}(\|\nabla F(x_k)\|^2) \le 2\alpha_k^{-1}\big(\mathbb{E}(F(x_k)) - \mathbb{E}(F(x_{k+1}))\big) + L_f^2L_g\Theta(k^{-4a+4b}) + L_f^2\Theta(k^{-b}) + \Theta(k^{-a}) = \Theta(k^a)\big(\mathbb{E}(F(x_k)) - \mathbb{E}(F(x_{k+1}))\big) + L_f^2L_g\Theta(k^{-4a+4b}) + L_f^2\Theta(k^{-b}) + \Theta(k^{-a}). \tag{23} \]
Summing (23) from $k = 1$ to $K$, dividing by $K$, and applying Abel summation to the first term, we obtain
\[ \frac{1}{K}\sum_{k=1}^{K}\mathbb{E}(\|\nabla F(x_k)\|^2) \le 2K^{-1}\alpha_1^{-1}F(x_1) + \Theta(K^{-1})\sum_{k=2}^{K}\big((k+1)^a - k^a\big)\mathbb{E}(F(x_k)) + K^{-1}\sum_{k=1}^{K}L_f^2L_g\Theta(k^{-4a+4b}) + K^{-1}L_f^2\sum_{k=1}^{K}\Theta(k^{-b}) + K^{-1}\sum_{k=1}^{K}\Theta(k^{-a}) \]
\[ \le O\Big(K^{a-1} + L_f^2L_gK^{4b-4a}\,\mathbb{I}^{\log K}_{4a-4b=1} + L_f^2K^{-b} + K^{-a}\Big), \]
where the last step uses the concavity of $h(t) = t^a$, which gives $(k+1)^a \le k^a + ak^{a-1}$, together with the condition $\mathbb{E}(F(x_k)) \le \Theta(1)$; here $\mathbb{I}^{\log K}_{E}$ denotes an extra $\log K$ factor that appears only when the condition $E$ holds. Letting $a = 5/9$ and $b = 4/9$, we obtain the desired convergence rate $O(K^{-4/9})$.

B. Proof of Theorem 2

In this section, we provide the detailed proof of Theorem 2.
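Before doing so, here is a quick exact-arithmetic check of the exponent balancing that closed the proof of Theorem 1 above (an illustrative sketch, not part of the paper's analysis; the four exponents simply mirror the four terms of the final bound):

```python
# Exact-rational check that a = 5/9, b = 4/9 balances the four terms
# O(K^{a-1} + K^{4b-4a} + K^{-b} + K^{-a}) from the proof of Theorem 1,
# giving the overall rate O(K^{-4/9}).
from fractions import Fraction

a, b = Fraction(5, 9), Fraction(4, 9)
exponents = {"K^(a-1)": a - 1, "K^(4b-4a)": 4*b - 4*a,
             "K^(-b)": -b, "K^(-a)": -a}
rate = max(exponents.values())   # the slowest-decaying term dominates
print(rate)                      # -> -4/9
```

Any other choice of $(a, b)$ leaves one of the four exponents larger than $-4/9$, which is why this stepsize pair is selected.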
The proof is based on the following key lemma, which provides a contraction property involving $\mathbb{E}(H(x_{k+1})) - H^*$ and $\mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2)$.

Lemma 8 Assume that both $F(x)$ and $R(x)$ are convex. Under Assumptions 1, 2, 3, and 4, the iterates generated by Algorithm 1 satisfy, for any sequence of positive scalars $\{\varphi_k\}$,
\[ 2\alpha_k\big(\mathbb{E}(H(x_{k+1})) - H^*\big) + \mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2) \le (1+\varphi_k)\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(\alpha_k^3) + \Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \Theta(\alpha_k^2). \tag{24} \]

Proof Following the line of the proof of Lemma 4, we have
\[ x_{k+1} - x_k = -\alpha_k\big(\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k) + s_{k+1}\big), \tag{25} \]
where $s_{k+1} \in \partial R(x_{k+1})$ is some vector in the subdifferential of $R(\cdot)$ at $x_{k+1}$. We then consider $\|x_{k+1} - P_{X^*}(x_{k+1})\|^2$:
\[ \|x_{k+1} - P_{X^*}(x_{k+1})\|^2 \le \|x_{k+1} - x_k + x_k - P_{X^*}(x_k)\|^2 = \|x_k - P_{X^*}(x_k)\|^2 - \|x_{k+1} - x_k\|^2 + 2\langle x_{k+1} - x_k,\, x_{k+1} - P_{X^*}(x_k)\rangle \]
\[ \overset{(25)}{=} \|x_k - P_{X^*}(x_k)\|^2 - \|x_{k+1} - x_k\|^2 - 2\alpha_k\langle\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k) + s_{k+1},\, x_{k+1} - P_{X^*}(x_k)\rangle \]
\[ = \|x_k - P_{X^*}(x_k)\|^2 - \|x_{k+1} - x_k\|^2 + 2\alpha_k\langle\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k),\, P_{X^*}(x_k) - x_{k+1}\rangle + 2\alpha_k\langle s_{k+1},\, P_{X^*}(x_k) - x_{k+1}\rangle \]
\[ \le \|x_k - P_{X^*}(x_k)\|^2 - \|x_{k+1} - x_k\|^2 + 2\alpha_k\langle\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k),\, P_{X^*}(x_k) - x_{k+1}\rangle + 2\alpha_k\big(R(P_{X^*}(x_k)) - R(x_{k+1})\big) \quad\text{(due to the convexity of } R(\cdot)\text{)} \]
\[ = \|x_k - P_{X^*}(x_k)\|^2 - \|x_{k+1} - x_k\|^2 + 2\alpha_k\underbrace{\langle\nabla F(x_k),\, P_{X^*}(x_k) - x_{k+1}\rangle}_{=:T_1} + 2\alpha_k\underbrace{\langle\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k) - \nabla F(x_k),\, P_{X^*}(x_k) - x_{k+1}\rangle}_{=:T_2} + 2\alpha_k\big(R(P_{X^*}(x_k)) - R(x_{k+1})\big), \tag{26} \]
where the first equality follows from $\|a+b\|^2 = \|b\|^2 - \|a\|^2 + 2\langle a,\, a+b\rangle$ with $a = x_{k+1} - x_k$ and $b = x_k - P_{X^*}(x_k)$. We next estimate the upper bounds for $T_1$ and $T_2$ respectively:
\[ T_1 = \langle\nabla F(x_k),\, x_k - x_{k+1}\rangle + \langle\nabla F(x_k),\, P_{X^*}(x_k) - x_k\rangle \le \underbrace{F(x_k) - F(x_{k+1}) + \frac{L_F}{2}\|x_{k+1} - x_k\|^2}_{\text{due to Assumption 4}} + \underbrace{F(P_{X^*}(x_k)) - F(x_k)}_{\text{due to the convexity of } F(\cdot)} \]
\[ = F(P_{X^*}(x_k)) - F(x_{k+1}) + \frac{L_F}{2}\|x_{k+1} - x_k\|^2 \le F(P_{X^*}(x_k)) - F(x_{k+1}) + \Theta(\alpha_k^2), \]
where the last inequality uses Lemma 4. For $T_2$, we write
\[ T_2 = \langle\nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k),\, x_k - P_{X^*}(x_k)\rangle + \langle\nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k),\, x_{k+1} - x_k\rangle \]
\[ \le \underbrace{\langle\nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)),\, x_k - P_{X^*}(x_k)\rangle}_{=:T_{2,1}} + \underbrace{\langle\nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k),\, x_k - P_{X^*}(x_k)\rangle}_{=:T_{2,2}} + \frac{\alpha_k}{2}\underbrace{\|\nabla F(x_k) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\|^2}_{=:T_{2,3}} + \frac{1}{2\alpha_k}\|x_k - x_{k+1}\|^2, \]
where the last two terms bound the second inner product via the inequality $\langle a, b\rangle \le \frac{1}{2\alpha_k}\|a\|^2 + \frac{\alpha_k}{2}\|b\|^2$. For $T_{2,1}$, we have $\mathbb{E}(T_{2,1}) = 0$ due to Assumption 1: conditioned on $x_k$, the sampled gradient $\nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k))$ is an unbiased estimate of $\nabla F(x_k)$, while $x_k - P_{X^*}(x_k)$ is fixed.
For $T_{2,2}$, we have
\[ T_{2,2} \le \frac{\alpha_k}{2\varphi_k}\|\nabla g_{w_k}(x_k)\nabla f_{v_k}(g(x_k)) - \nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\|^2 + \frac{\varphi_k}{2\alpha_k}\|x_k - P_{X^*}(x_k)\|^2 \overset{\text{(Lemma 5)}}{\le} \Theta\Big(\frac{L_f^2\alpha_k}{\varphi_k}\Big)\|y_k - g(x_k)\|^2 + \frac{\varphi_k}{2\alpha_k}\|x_k - P_{X^*}(x_k)\|^2. \]
$T_{2,3}$ can be bounded by a constant:
\[ T_{2,3} \le 2\|\nabla F(x_k)\|^2 + 2\|\nabla g_{w_k}(x_k)\nabla f_{v_k}(y_k)\|^2 \overset{\text{(Assumption 3)}}{\le} \Theta(1). \]
Taking expectation of $T_2$ and putting all pieces together,
\[ \mathbb{E}(T_2) \le \Theta\Big(\frac{L_f^2\alpha_k}{\varphi_k}\Big)\mathbb{E}(\|y_k - g(x_k)\|^2) + \frac{1}{2\alpha_k}\big(\varphi_k\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \mathbb{E}(\|x_k - x_{k+1}\|^2)\big) + \Theta(\alpha_k). \]
Taking expectation on both sides of (26) and plugging in the upper bounds of $T_1$ and $T_2$, we obtain
\[ 2\alpha_k\big(\mathbb{E}(H(x_{k+1})) - H^*\big) + \mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2) \le (1+\varphi_k)\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(\alpha_k^3) + \Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \Theta(\alpha_k^2), \]
which completes the proof.

We now prove Theorem 2 based on the previous lemma.

Proof of Theorem 2 Applying the optimal strong convexity condition (7) to Lemma 8 yields
\[ (1 + 2\lambda\alpha_k)\mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2) \le (1+\varphi_k)\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(\alpha_k^3) + \Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \Theta(\alpha_k^2). \]
Dividing both sides by $1 + 2\lambda\alpha_k$ gives
\[ \mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2) \le \frac{1+\varphi_k}{1+2\lambda\alpha_k}\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(\alpha_k^3) + \Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \Theta(\alpha_k^2). \]
Choosing $\varphi_k = \lambda\alpha_k - 2\lambda^2\alpha_k^2 \ge 0.5\lambda\alpha_k$ yields
\[ \mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2) \le (1 - \lambda\alpha_k)\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(\alpha_k^2) + \frac{\Theta(L_f^2\alpha_k)}{\lambda}\mathbb{E}(\|g(x_k) - y_k\|^2) \le (1 - \lambda\alpha_k)\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \Theta(k^{-2a}) + \Theta(L_gL_f^2k^{-5a+4b} + L_f^2k^{-a-b}). \]
Applying Lemma 6 and substituting the subscript $k$ by $K$, we obtain the first claim in (8):
\[ \mathbb{E}(\|x_K - P_{X^*}(x_K)\|^2) \le O\big(K^{-a} + L_f^2L_gK^{-4a+4b} + L_f^2K^{-b}\big). \]
The specific choices of $a$ and $b$ that follow in the theorem can be easily verified.

C. Proof of Theorem 3

In what follows, we provide the detailed proof of Theorem 3.

Proof of Theorem 3 Let
\[ P_k := 2\alpha_k\big(\mathbb{E}(H(x_{k+1})) - H^*\big) + \mathbb{E}(\|x_{k+1} - P_{X^*}(x_{k+1})\|^2), \]
which is the left-hand side of inequality (24). Summing $P_k$ from $k = 1$ to $K$ and taking $\varphi_k = \Theta(k^{-c})$ yields
\[ 2\sum_{k=1}^{K}\alpha_k\big(\mathbb{E}(H(x_{k+1})) - H^*\big) + \mathbb{E}(\|x_{K+1} - P_{X^*}(x_{K+1})\|^2) \le (1+\varphi_1)\mathbb{E}(\|x_1 - P_{X^*}(x_1)\|^2) + \sum_{k=2}^{K}\varphi_k\mathbb{E}(\|x_k - P_{X^*}(x_k)\|^2) + \sum_{k=2}^{K}\Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) + \sum_{k=1}^{K}\Theta(\alpha_k^2) \]
\[ \le \Theta(1) + \Theta\big(K^{1-c}\,\mathbb{I}^{\log K}_{c=1}\big) + O(K^{1-2a}) + \sum_{k=2}^{K}\Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2), \tag{27} \]
where $\mathbb{I}^{\log K}_{E}$ denotes an extra $\log K$ factor that appears only when the condition $E$ holds. The second inequality holds by the condition that the feasible set $X$ is bounded, so that $\|x_k - P_{X^*}(x_k)\|^2$ is bounded. We continue to bound the last term on the right-hand side of (27):
\[ \sum_{k=2}^{K}\Theta(L_f^2\alpha_k^2/\varphi_k)\mathbb{E}(\|y_k - g(x_k)\|^2) \le \sum_{k=2}^{K}\Big(L_f^2L_g\Theta(k^{-6a+4b+c}) + L_f^2\Theta(k^{-2a-b+c})\Big) \le L_f^2L_g\Theta\big(K^{1-6a+4b+c}\,\mathbb{I}^{\log K}_{6a-4b-c=1}\big) + L_f^2\Theta\big(K^{1-2a-b+c}\,\mathbb{I}^{\log K}_{2a+b-c=1}\big). \tag{28} \]
Plugging (28) into (27) and dividing both sides by $2\sum_{k=1}^{K}\alpha_k$:
\[ \frac{\sum_{k=1}^{K}\alpha_k\mathbb{E}(H(x_{k+1}))}{\sum_{k=1}^{K}\alpha_k} - H^* \le \frac{\Theta(1) + \Theta\big(K^{1-c}\,\mathbb{I}^{\log K}_{c=1}\big) + \Theta(K^{1-2a}) + L_f^2L_g\Theta\big(K^{1-6a+4b+c}\,\mathbb{I}^{\log K}_{6a-4b-c=1}\big) + L_f^2\Theta\big(K^{1-2a-b+c}\,\mathbb{I}^{\log K}_{2a+b-c=1}\big)}{2\sum_{k=1}^{K}\alpha_k} \]
\[ \le O\Big(K^{a-1} + K^{a-c}\,\mathbb{I}^{\log K}_{c=1} + K^{-a} + L_f^2L_gK^{-5a+4b+c}\,\mathbb{I}^{\log K}_{6a-4b-c=1} + L_f^2K^{-a-b+c}\,\mathbb{I}^{\log K}_{2a+b-c=1}\Big), \]
where the last inequality uses the fact that $\sum_{k=1}^{K}\alpha_k \ge \Theta(K^{1-a})$. To finish the proof of (11), we use Jensen's inequality for the convex function $H$:
\[ \frac{\sum_{k=1}^{K}\alpha_kH(x_{k+1})}{\sum_{k=1}^{K}\alpha_k} \ge H(\bar{x}_K), \]
which completes the proof.

References

L. Baird. Residual algorithms: Reinforcement learning with function approximation. In International Conference on Machine Learning (ICML), pages 30-37, 1995.

B. Dai, N. He, Y. Pan, B. Boots, and L. Song. Learning from conditional distributions via dual embeddings. In Artificial Intelligence and Statistics (AISTATS), pages 1458-1467, 2017.

C. Dann, G. Neumann, and J. Peters. Policy evaluation with temporal differences: A survey and comparison. The Journal of Machine Learning Research, 15(1):809-883, 2014.

D. Dentcheva, S. Penev, and A. Ruszczyński. Statistical estimation of composite risk functionals and risk optimization problems. Annals of the Institute of Statistical Mathematics, pages 1-24, 2016.

Y. M. Ermoliev. Methods of Stochastic Programming. Monographs in Optimization and OR, Nauka, Moscow, 1976.

H. Karimi, J. Nutini, and M. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition.
In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), pages 795-811. Springer, 2016.

B. Liu, J. Liu, M. Ghavamzadeh, S. Mahadevan, and M. Petrik. Finite-sample analysis of proximal gradient TD algorithms. In Conference on Uncertainty in Artificial Intelligence (UAI), 2015.

J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351-376, 2015.

I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. arXiv preprint arXiv:1504.06298, 2015.

A. Nedić and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12:109-138, 2001.

A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19:1574-1609, 2009.

R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

M. Wang and D. P. Bertsekas. Stochastic first-order methods with random constraint projection. SIAM Journal on Optimization, 26(1):681-717, 2016.

M. Wang and J. Liu. A stochastic compositional subgradient method using Markov samples. In Winter Simulation Conference, 2016.

M. Wang, Y. Chen, J. Liu, and Y. Gu. Random multi-constraint projection: Stochastic gradient methods for convex optimization with many constraints. arXiv preprint arXiv:1511.03760, 2015.

M. Wang, J. Liu, and E. X. Fang. Accelerating stochastic composition optimization. In Advances in Neural Information Processing Systems (NIPS), pages 1714-1722, 2016.

M. Wang, E. X. Fang, and H. Liu. Stochastic compositional gradient descent: Algorithms for minimizing compositions of expected-value functions. Mathematical Programming, 161(1-2):419-449, 2017.

A. White and M. White. Investigating practical linear temporal difference learning. In International Conference on Autonomous Agents & Multiagent Systems (AAMAS), pages 494-502, 2016.