# Optimizing the CVaR via Sampling

Aviv Tamar, Yonatan Glassner, and Shie Mannor
Electrical Engineering Department
The Technion - Israel Institute of Technology
Haifa, Israel 32000
{avivt, yglasner}@tx.technion.ac.il, shie@ee.technion.ac.il

Conditional Value at Risk (CVaR) is a prominent risk measure that is being used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the gradient of the CVaR, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows us to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris.

## 1 Introduction

Conditional Value at Risk (CVaR; Rockafellar and Uryasev, 2000) is an established risk measure that has found extensive use in finance, among other fields. For a random payoff $R$, whose distribution is parameterized by a controllable parameter $\theta$, the $\alpha$-CVaR is defined as the expected payoff over the $\alpha\%$ worst outcomes of $R$:
$$\Phi(\theta) = \mathbb{E}_\theta \left[ R \,\middle|\, R \le \nu_\alpha(\theta) \right],$$
where $\nu_\alpha(\theta)$ is the $\alpha$-quantile of $R$. CVaR optimization aims to find a parameter $\theta$ that maximizes $\Phi(\theta)$.

When the payoff is of the structure $R = f_\theta(X)$, where $f_\theta$ is a deterministic function and $X$ is random but does not depend on $\theta$, CVaR optimization may be formulated as a stochastic program and solved using various approaches (Rockafellar and Uryasev 2000; Hong and Liu 2009; Iyengar and Ma 2013). Such a payoff structure is appropriate for certain domains, such as portfolio optimization, in which the investment strategy generally does not affect the asset prices. However, in many important domains, for example queueing systems, resource allocation, and reinforcement learning, the tunable parameters also control the distribution of the random outcomes. Since existing CVaR optimization methods are not suitable for such cases, and due to the recent increased interest in risk-sensitive optimization in these domains (Tamar, Di Castro, and Mannor 2012; Prashanth and Ghavamzadeh 2013), there is a strong incentive to develop more general CVaR optimization algorithms.

In this work, we propose a CVaR optimization approach that is applicable when $\theta$ also controls the distribution of $X$. The basis of our approach is a new formula that we derive for the CVaR gradient $\partial \Phi(\theta) / \partial \theta$ in the form of a conditional expectation. Based on this formula, we propose a sampling-based estimator for the CVaR gradient, and use it to optimize the CVaR by stochastic gradient descent. In addition, we analyze the bias of our estimator, and use the result to prove convergence of the stochastic gradient descent algorithm to a local CVaR optimum. Our method allows us to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris. To our knowledge, CVaR optimization for such a domain is beyond the reach of existing approaches. Considering Tetris also allows us to easily interpret our results, and show that we indeed learn sensible policies.
We remark that in certain domains, the CVaR is often not maximized directly, but used as a constraint in an optimization problem of the form $\max_\theta \mathbb{E}_\theta[R]$ s.t. $\Phi(\theta) \ge b$. Extending our approach to such problems is straightforward, using standard penalty method techniques (see, e.g., Tamar, Di Castro, and Mannor, 2012, and Prashanth and Ghavamzadeh, 2013, for such an approach with a variance-constrained objective), since the key component for these methods is the CVaR gradient estimator we provide here. Another appealing property of our estimator is that it naturally incorporates importance sampling, which is important when $\alpha$ is small and the CVaR captures rare events.

### Related Work

Our approach is similar in spirit to the likelihood-ratio method (LR; Glynn, 1990), which estimates the gradient of the expected payoff. The LR method has been successfully applied in diverse domains such as queueing systems, inventory management, and financial engineering (Fu 2006), and also in reinforcement learning (RL; Sutton and Barto, 1998), where it is commonly known as the policy gradient method (Baxter and Bartlett 2001; Peters and Schaal 2008). Our work extends the LR method to estimating the gradient of the CVaR of the payoff.

Closely related to our work are the studies of Hong and Liu (2009) and Scaillet (2004), who proposed perturbation-analysis-style estimators for the gradient of the CVaR, for the setting mentioned above, in which $\theta$ does not affect the distribution of $X$. Indeed, their gradient formulae are different from ours, and do not apply in our setting. LR gradient estimators for other risk measures have been proposed by Borkar (2001) for exponential utility functions, and by Tamar, Di Castro, and Mannor (2012) for the mean-variance criterion. These measures, however, consider a very different notion of risk than the CVaR. For example, the mean-variance measure is known to underestimate the risk of rare but catastrophic events (Agarwal and Naik 2004).

Risk-sensitive optimization in RL has been receiving increased interest recently. A mean-variance criterion was considered by Tamar, Di Castro, and Mannor (2012) and Prashanth and Ghavamzadeh (2013). Morimura et al. (2010) consider the expected return, with a CVaR-based risk-sensitive policy for guiding the exploration while learning. Their method, however, does not scale to large problems. Borkar and Jain (2014) optimize a CVaR-constrained objective using dynamic programming, by augmenting the state space with the accumulated reward. As such, that method is only suitable for a finite horizon and a small state space, and does not scale up to problems such as the Tetris domain we consider. A function approximation extension of (Borkar and Jain 2014) is mentioned, using a three time-scale stochastic approximation algorithm. In that work, three different learning rates are decreased to 0, and convergence is determined by the slowest one, leading to an overall slow convergence. In contrast, our approach requires only a single learning rate. Recently, Prashanth (2014) used our gradient formula of Proposition 2 (from a preliminary version of this paper) in a two time-scale stochastic approximation scheme to show convergence of CVaR optimization. Besides providing the theoretical basis for that work, our current convergence result (Theorem 5) obviates the need for the extra time-scale, and results in a simpler and faster algorithm.
## 2 A CVaR Gradient Formula

In this section we present a new LR-style formula for the gradient of the CVaR. This gradient will be used in subsequent sections to optimize the CVaR with respect to some parametric family. We start with a formal definition of the CVaR, and then present a CVaR gradient formula for 1-dimensional random variables. We then extend our result to the multi-dimensional case.

Let $Z$ denote a random variable with a cumulative distribution function (C.D.F.) $F_Z(z) = \Pr(Z \le z)$. For convenience, we assume that $Z$ is a continuous random variable, meaning that $F_Z(z)$ is everywhere continuous. We also assume that $Z$ is bounded. Given a confidence level $\alpha \in (0, 1)$, the $\alpha$-Value-at-Risk (VaR; or $\alpha$-quantile) of $Z$ is denoted $\nu_\alpha(Z)$, and given by
$$\nu_\alpha(Z) = F_Z^{-1}(\alpha) \doteq \inf \{ z : F_Z(z) \ge \alpha \}. \qquad (1)$$
The $\alpha$-Conditional-Value-at-Risk of $Z$ is denoted by $\Phi_\alpha(Z)$ and defined as the expectation of the $\alpha$ fraction of the worst outcomes of $Z$:
$$\Phi_\alpha(Z) = \mathbb{E}\left[ Z \,\middle|\, Z \le \nu_\alpha(Z) \right]. \qquad (2)$$
We next present a formula for the sensitivity of $\Phi_\alpha(Z)$ to changes in $F_Z(z)$.

### 2.1 CVaR Gradient of a 1-Dimensional Variable

Consider again a random variable $Z$, but now let its probability density function (P.D.F.) $f_Z(z;\theta)$ be parameterized by a vector $\theta \in \mathbb{R}^k$. We let $\nu_\alpha(Z;\theta)$ and $\Phi_\alpha(Z;\theta)$ denote the VaR and CVaR of $Z$ as defined in Eq. (1) and (2), respectively, when the parameter is $\theta$. We are interested in the sensitivity of the CVaR to the parameter vector, as expressed by the gradient $\frac{\partial}{\partial \theta_j}\Phi_\alpha(Z;\theta)$. In all but the most simple cases, calculating the gradient analytically is intractable. Therefore, we derive a formula in which $\frac{\partial}{\partial \theta_j}\Phi_\alpha(Z;\theta)$ is expressed as a conditional expectation, and use it to calculate the gradient by sampling. For technical convenience, we make the following assumption:

Assumption 1. $Z$ is a continuous random variable, and bounded in $[-b, b]$ for all $\theta$.

We also make the following smoothness assumption on $\nu_\alpha(Z;\theta)$ and $\Phi_\alpha(Z;\theta)$:

Assumption 2. For all $\theta$ and $1 \le j \le k$, the gradients $\frac{\partial \nu_\alpha(Z;\theta)}{\partial \theta_j}$ and $\frac{\partial \Phi_\alpha(Z;\theta)}{\partial \theta_j}$ exist and are bounded.

Note that since $Z$ is continuous, Assumption 2 is satisfied whenever $\frac{\partial f_Z(z;\theta)}{\partial \theta_j}$ is bounded. Relaxing Assumptions 1 and 2 is possible, but involves technical details that would complicate the presentation, and is left to future work. The next assumption is standard in LR gradient estimates:

Assumption 3. For all $\theta$, $z$, and $1 \le j \le k$, we have that $\frac{\partial f_Z(z;\theta) / \partial \theta_j}{f_Z(z;\theta)}$ exists and is bounded.

In the next proposition we present an LR-style sensitivity formula for $\Phi_\alpha(Z;\theta)$, in which the gradient is expressed as a conditional expectation. In Section 3 we shall use this formula to suggest a sampling algorithm for the gradient.

Proposition 1. Let Assumptions 1, 2, and 3 hold. Then
$$\frac{\partial \Phi_\alpha(Z;\theta)}{\partial \theta_j} = \mathbb{E}_\theta \left[ \frac{\partial \log f_Z(Z;\theta)}{\partial \theta_j} \left( Z - \nu_\alpha(Z;\theta) \right) \,\middle|\, Z \le \nu_\alpha(Z;\theta) \right].$$

Proof. Define the level-set $D_\theta = \{ z \in [-b, b] : z \le \nu_\alpha(Z;\theta) \}$. By definition, $D_\theta \equiv [-b, \nu_\alpha(Z;\theta)]$, and $\int_{z \in D_\theta} f_Z(z;\theta)\,dz = \alpha$. Taking a derivative and using the Leibniz rule we obtain
$$0 = \frac{\partial}{\partial \theta_j} \int_{-b}^{\nu_\alpha(Z;\theta)} f_Z(z;\theta)\,dz = \int_{-b}^{\nu_\alpha(Z;\theta)} \frac{\partial f_Z(z;\theta)}{\partial \theta_j}\,dz + \frac{\partial \nu_\alpha(Z;\theta)}{\partial \theta_j} f_Z(\nu_\alpha(Z;\theta);\theta). \qquad (3)$$
By definition (2) we have $\Phi_\alpha(Z;\theta) = \int_{z \in D_\theta} \frac{f_Z(z;\theta)\, z}{\alpha}\,dz = \alpha^{-1} \int_{-b}^{\nu_\alpha(Z;\theta)} f_Z(z;\theta)\, z\, dz$. Now, taking a derivative and using the Leibniz rule we obtain
$$\frac{\partial}{\partial \theta_j} \Phi_\alpha(Z;\theta) = \alpha^{-1} \int_{-b}^{\nu_\alpha(Z;\theta)} \frac{\partial f_Z(z;\theta)}{\partial \theta_j}\, z\, dz + \alpha^{-1} \frac{\partial \nu_\alpha(Z;\theta)}{\partial \theta_j} f_Z(\nu_\alpha(Z;\theta);\theta)\, \nu_\alpha(Z;\theta). \qquad (4)$$
Rearranging, and plugging (3) into (4), we obtain
$$\frac{\partial}{\partial \theta_j} \Phi_\alpha(Z;\theta) = \alpha^{-1} \int_{-b}^{\nu_\alpha(Z;\theta)} \frac{\partial f_Z(z;\theta)}{\partial \theta_j} \left( z - \nu_\alpha(Z;\theta) \right) dz.$$
Finally, using the likelihood-ratio trick (multiplying and dividing by $f_Z(z;\theta)$ inside the integral, which is justified due to Assumption 3), we obtain the required expectation.
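To make Proposition 1 concrete, below is a minimal numerical check (our own illustration, not part of the paper) for a Gaussian family, where the score function and the true CVaR gradient are available in closed form. The Gaussian is unbounded, so it satisfies Assumption 1 only approximately, but it keeps the example self-contained; the VaR baseline is estimated by the empirical quantile, just as the sampling estimator of Section 3 will do.

```python
import numpy as np

# Sketch of a numerical check of Proposition 1 for Z ~ N(theta, sigma^2) (a toy example).
# For this family, d/dtheta log f_Z(z; theta) = (z - theta) / sigma^2, and the alpha-CVaR
# equals theta - sigma * pdf(q_alpha) / alpha, so d(CVaR)/d(theta) = 1 exactly.

rng = np.random.default_rng(0)
theta, sigma, alpha, n = 1.5, 2.0, 0.05, 2_000_000

z = rng.normal(theta, sigma, size=n)
v_alpha = np.quantile(z, alpha)              # empirical alpha-quantile (the VaR)
tail = z[z <= v_alpha]                       # the alpha-fraction of worst outcomes

score = (tail - theta) / sigma**2            # d/dtheta log f_Z(z; theta) on the tail samples
grad_lr = np.mean(score * (tail - v_alpha))  # Proposition 1: E[score * (Z - VaR) | Z <= VaR]

print(f"LR estimate of dPhi_alpha/dtheta: {grad_lr:.3f}  (analytic value: 1.0)")
```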
Let us contrast the CVaR LR formula of Proposition 1 with the standard LR formula for the expectation (Glynn 1990):
$$\frac{\partial}{\partial \theta_j} \mathbb{E}_\theta[Z] = \mathbb{E}_\theta \left[ \frac{\partial \log f_Z(Z;\theta)}{\partial \theta_j} \left( Z - b \right) \right],$$
where the baseline $b$ could be any arbitrary constant. Note that in the CVaR case the baseline is specific and, as seen in the proof, accounts for the sensitivity of the level-set $D_\theta$. Quite surprisingly, this specific baseline turns out to be exactly the VaR, $\nu_\alpha(Z;\theta)$, which, as we shall see later, also leads to an elegant sampling-based estimator.

In a typical application, $Z$ would correspond to the performance of some system, such as the profit in portfolio optimization, or the total reward in RL. Note that in order to use Proposition 1 in a gradient estimation algorithm, one needs access to $\frac{\partial}{\partial \theta_j} \log f_Z(Z;\theta)$: the sensitivity of the performance distribution to the parameters. Typically, the system performance is a complicated function of a high-dimensional random variable. For example, in RL and queueing systems, the performance is a function of a trajectory from a stochastic dynamical system, and calculating its probability distribution is usually intractable. The sensitivity of the trajectory distribution to the parameters, however, is often easy to calculate, since the parameters typically control how the trajectory is generated. We shall now generalize Proposition 1 to such cases. The utility of this generalization is further exemplified in Section 5, for the RL domain.

### 2.2 CVaR Gradient Formula: General Case

Let $X = (X_1, X_2, \ldots, X_n)$ denote an $n$-dimensional random variable with finite support $[-b, b]^n$, and let $Y$ denote a discrete random variable taking values in some countable set $\mathcal{Y}$. Let $f_Y(y;\theta)$ denote the probability mass function of $Y$, and let $f_{X|Y}(x|y;\theta)$ denote the probability density function of $X$ given $Y$. Let the reward function $r$ be a bounded mapping from $[-b, b]^n \times \mathcal{Y}$ to $\mathbb{R}$, and consider the random variable $R \doteq r(X, Y)$. We are interested in a formula for $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$.

We make the following assumption, similar to Assumptions 1, 2, and 3.

Assumption 4. The reward $R$ is a continuous random variable for all $\theta$. Furthermore, for all $\theta$ and $1 \le j \le k$, the gradients $\frac{\partial}{\partial \theta_j} \nu_\alpha(R;\theta)$ and $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$ are well defined and bounded. In addition, $\frac{\partial \log f_{X|Y}(x|y;\theta)}{\partial \theta_j}$ and $\frac{\partial \log f_Y(y;\theta)}{\partial \theta_j}$ exist and are bounded for all $x$, $y$, and $\theta$.

Define the level-set $D_{y;\theta} = \{ x \in [-b, b]^n : r(x, y) \le \nu_\alpha(R;\theta) \}$. We require some smoothness of the function $r$, which is captured by the following assumption on $D_{y;\theta}$.

Assumption 5. For all $y$ and $\theta$, the set $D_{y;\theta}$ may be written as a finite union of $L_{y;\theta}$ disjoint, closed, and connected components $D^i_{y;\theta}$, each with positive measure: $D_{y;\theta} = \bigcup_{i=1}^{L_{y;\theta}} D^i_{y;\theta}$.

Assumption 5 may be satisfied, for example, when $r(x, y)$ is Lipschitz in $x$ for all $y \in \mathcal{Y}$. We now present a sensitivity formula for $\Phi_\alpha(R;\theta)$.

Proposition 2. Let Assumptions 4 and 5 hold. Then
$$\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta) = \mathbb{E}_\theta \left[ \left( \frac{\partial \log f_Y(Y;\theta)}{\partial \theta_j} + \frac{\partial \log f_{X|Y}(X|Y;\theta)}{\partial \theta_j} \right) \left( R - \nu_\alpha(R;\theta) \right) \,\middle|\, R \le \nu_\alpha(R;\theta) \right].$$

The proof of Proposition 2 is similar in spirit to the proof of Proposition 1, but involves some additional difficulties of applying the Leibniz rule in a multidimensional setting. It is given in (Tamar, Glassner, and Mannor 2014). We reiterate that relaxing Assumptions 4 and 5 is possible, but is technically involved, and left for future work. In the next section we show that the formula in Proposition 2 leads to an effective algorithm for estimating $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$ by sampling.
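As a quick sanity check of Proposition 2 before turning to that algorithm, the following toy example (our own construction; the specific distributions and parameter values are assumptions, and the Gaussian conditional only approximately fits the bounded-support assumption) compares the conditional-expectation formula against a finite-difference estimate of the CVaR gradient for a one-parameter family with a discrete $Y$ and a continuous $X$.

```python
import numpy as np

# Toy setup: Y ~ Bernoulli(sigmoid(theta)), X | Y = y ~ N(mu_y * theta, 1), reward r(x, y) = x.
# Both the discrete and the continuous component depend on theta, as in Section 2.2.
alpha, theta, n = 0.1, 0.8, 1_000_000
mu = np.array([-1.0, 2.0])                        # mean multipliers for y = 0 and y = 1

def sample(th, n, seed=1):
    rng = np.random.default_rng(seed)             # common random numbers across theta values
    p = 1.0 / (1.0 + np.exp(-th))
    y = (rng.random(n) < p).astype(int)
    x = rng.normal(mu[y] * th, 1.0, size=n)
    # score = d/dtheta [ log f_Y(y; theta) + log f_{X|Y}(x | y; theta) ]
    score = (y - p) + (x - mu[y] * th) * mu[y]
    return x, score                               # reward r(x, y) = x

def empirical_cvar(rewards):
    v = np.quantile(rewards, alpha)
    return rewards[rewards <= v].mean()

r, score = sample(theta, n)
v = np.quantile(r, alpha)
tail = r <= v
prop2 = np.mean(score[tail] * (r[tail] - v))      # Proposition 2 as a conditional expectation
eps = 1e-2
fd = (empirical_cvar(sample(theta + eps, n)[0]) -
      empirical_cvar(sample(theta - eps, n)[0])) / (2 * eps)
print(f"Proposition 2 estimate: {prop2:.3f}   finite-difference check: {fd:.3f}")
```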
## 3 A CVaR Gradient Estimation Algorithm

The sensitivity formula in Proposition 2 suggests a natural Monte Carlo (MC) estimation algorithm. The method, which we label GCVaR (Gradient estimator for CVaR), is described as follows. Let $x_1, y_1, \ldots, x_N, y_N$ be $N$ samples drawn i.i.d. from $f_{X,Y}(x, y;\theta)$, the joint distribution of $X$ and $Y$. We first estimate $\nu_\alpha(R;\theta)$ using the empirical $\alpha$-quantile $\tilde{v}$:
$$\tilde{v} = \inf_z \left\{ z : \hat{F}(z) \ge \alpha \right\}, \qquad (5)$$
where $\hat{F}(z)$ is the empirical C.D.F. of $R$: $\hat{F}(z) \doteq \frac{1}{N} \sum_{i=1}^N \mathbf{1}_{r(x_i, y_i) \le z}$. (Algorithmically, this is equivalent to first sorting the $r(x_i, y_i)$'s in ascending order, and then selecting $\tilde{v}$ as the $\lceil \alpha N \rceil$-th term in the sorted list.) The MC estimate $\Delta_{j;N}$ of the gradient $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$ is given by
$$\Delta_{j;N} = \frac{1}{\alpha N} \sum_{i=1}^N \left( \frac{\partial \log f_Y(y_i;\theta)}{\partial \theta_j} + \frac{\partial \log f_{X|Y}(x_i|y_i;\theta)}{\partial \theta_j} \right) \left( r(x_i, y_i) - \tilde{v} \right) \mathbf{1}_{r(x_i, y_i) \le \tilde{v}}. \qquad (6)$$

Algorithm 1 GCVaR
1: Given:
   - CVaR level $\alpha$
   - A reward function $r(x, y) : \mathbb{R}^n \times \mathcal{Y} \to \mathbb{R}$
   - Derivatives $\frac{\partial}{\partial \theta_j}$ of the probability mass function $f_Y(y;\theta)$ and probability density function $f_{X|Y}(x|y;\theta)$
   - An i.i.d. sequence $x_1, y_1, \ldots, x_N, y_N \sim f_{X,Y}(x, y;\theta)$
2: Set $r^s_1, \ldots, r^s_N = \mathrm{Sort}\left( r(x_1, y_1), \ldots, r(x_N, y_N) \right)$
3: Set $\tilde{v} = r^s_{\lceil \alpha N \rceil}$
4: For $j = 1, \ldots, k$ do
   $$\Delta_{j;N} = \frac{1}{\alpha N} \sum_{i=1}^N \left( \frac{\partial \log f_Y(y_i;\theta)}{\partial \theta_j} + \frac{\partial \log f_{X|Y}(x_i|y_i;\theta)}{\partial \theta_j} \right) \left( r(x_i, y_i) - \tilde{v} \right) \mathbf{1}_{r(x_i, y_i) \le \tilde{v}}$$
5: Return: $\Delta_{1;N}, \ldots, \Delta_{k;N}$

It is known that the empirical $\alpha$-quantile is a biased estimator of $\nu_\alpha(R;\theta)$. Therefore, $\Delta_{j;N}$ is also a biased estimator of $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$. In the following we analyze and bound this bias. We first show that $\Delta_{j;N}$ is a consistent estimator. The proof is similar to the proof of Theorem 4.1 in (Hong and Liu 2009), and given in (Tamar, Glassner, and Mannor 2014).

Theorem 3. Let Assumptions 4 and 5 hold. Then $\Delta_{j;N} \to \frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$ w.p. 1 as $N \to \infty$.

With an additional smoothness assumption we can explicitly bound the bias. Let $f_R(\cdot\,;\theta)$ denote the P.D.F. of $R$, and define the function
$$g(\beta;\theta) \doteq \mathbb{E}_\theta \left[ \left( \frac{\partial \log f_Y(Y;\theta)}{\partial \theta_j} + \frac{\partial \log f_{X|Y}(X|Y;\theta)}{\partial \theta_j} \right) \left( R - \nu_\alpha(R;\theta) \right) \,\middle|\, R = \beta \right].$$

Assumption 6. For all $\theta$, $f_R(\cdot\,;\theta)$ and $g(\cdot\,;\theta)$ are continuous at $\nu_\alpha(R;\theta)$, and $f_R(\nu_\alpha(R;\theta);\theta) > 0$.

Assumption 6 is similar to Assumption 4 of (Hong and Liu 2009), and may be satisfied, for example, when $\frac{\partial \log f_{X|Y}(x|y;\theta)}{\partial \theta_j}$ is continuous and $r(x, y)$ is Lipschitz in $x$. The next theorem shows that the bias is $O(N^{-1/2})$. The proof, given in (Tamar, Glassner, and Mannor 2014), is based on separating the bias into a term that is bounded using a result of Hong and Liu (2009), and an additional term that is bounded using well-known results for the bias of empirical quantiles.

Theorem 4. Let Assumptions 4, 5, and 6 hold. Then $\mathbb{E}[\Delta_{j;N}] - \frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta)$ is $O(N^{-1/2})$.

At this point, let us again contrast GCVaR with the standard LR method. One may naively presume that applying a standard LR gradient estimator to the $\alpha\%$ worst samples would work as a CVaR gradient estimator. This corresponds to applying the GCVaR algorithm without subtracting the $\tilde{v}$ baseline from the reward in (6). Theorems 3 and 4 show that such an estimator would not be consistent. In fact, in (Tamar, Glassner, and Mannor 2014) we give an example where the gradient error of such an approach may be arbitrarily large.

In the sequel, we use GCVaR as part of a stochastic gradient descent algorithm for CVaR optimization. An asymptotically decreasing gradient bias, as may be established from Theorem 3, is necessary to guarantee convergence of such a procedure. Furthermore, the bound of Theorem 4 will allow us to quantify how many samples are needed at each iteration for such convergence to hold.
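For concreteness, here is a minimal NumPy sketch of Algorithm 1, written in the notation of Eq. (6); the function name and the array layout are our own choices rather than the paper's.

```python
import numpy as np

def gcvar(scores, rewards, alpha):
    """A sketch of Algorithm 1 (GCVaR); names and data layout are our own assumptions.

    scores  : array of shape (N, k); row i holds the combined sample score
              d/dtheta [ log f_Y(y_i; theta) + log f_{X|Y}(x_i | y_i; theta) ].
    rewards : array of shape (N,); entry i is r(x_i, y_i).
    alpha   : CVaR level in (0, 1).
    Returns the estimate (Delta_{1;N}, ..., Delta_{k;N}) of the CVaR gradient, Eq. (6).
    """
    N = len(rewards)
    order = np.argsort(rewards)                               # line 2: sort the rewards
    v_tilde = rewards[order[int(np.ceil(alpha * N)) - 1]]     # line 3: empirical alpha-quantile
    mask = rewards <= v_tilde                                 # indicator 1_{r(x_i, y_i) <= v}
    weights = (rewards - v_tilde) * mask                      # (r - v) on the tail, 0 elsewhere
    return (scores * weights[:, None]).sum(axis=0) / (alpha * N)   # line 4
```

A caller supplies, for each sample, the combined score of the discrete and continuous components; in settings where one of the two terms vanishes, only the other needs to be computed.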
### Variance Reduction by Importance Sampling

For very low quantiles, i.e., $\alpha$ close to 0, the GCVaR estimator would suffer from a high variance, since the averaging is effectively only over $\alpha N$ samples. This is a well-known issue in sampling-based approaches to VaR and CVaR estimation, and is often mitigated using variance reduction techniques such as Importance Sampling (IS; Rubinstein and Kroese, 2011; Bardou, Frikha, and Pagès, 2009). In IS, the variance of an MC estimator is reduced by using samples from a different sampling distribution, and suitably modifying the estimator to keep it unbiased. It is straightforward to incorporate IS into LR gradient estimators in general, and into our GCVaR estimator in particular. Due to space constraints, and since this is fairly standard textbook material (e.g., Rubinstein and Kroese, 2011), we provide the full technical details in (Tamar, Glassner, and Mannor 2014). In our experiments we show that IS indeed improves performance significantly.

## 4 CVaR Optimization

In this section, we consider the setting of Section 2.2, and aim to solve the CVaR optimization problem:
$$\max_{\theta \in \mathbb{R}^k} \Phi_\alpha(R;\theta). \qquad (7)$$
For this goal we propose CVaRSGD: a stochastic gradient descent algorithm, based on the GCVaR gradient estimator. We now describe the CVaRSGD algorithm in detail, and show that it converges to a local optimum of (7).

In CVaRSGD, we start with an arbitrary initial parameter $\theta^0 \in \mathbb{R}^k$. The algorithm proceeds iteratively as follows. At each iteration $i$ of the algorithm, we first sample $n_i$ i.i.d. realizations $x_1, y_1, \ldots, x_{n_i}, y_{n_i}$ of the random variables $X$ and $Y$, from the distribution $f_{X,Y}(x, y;\theta^i)$. We then apply the GCVaR algorithm to obtain an estimate $\Delta_{j;n_i}$ of $\frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta^i)$, using the samples $x_1, y_1, \ldots, x_{n_i}, y_{n_i}$. Finally, we update the parameter according to
$$\theta^{i+1}_j = \Gamma \left( \theta^i_j + \epsilon_i \Delta_{j;n_i} \right), \qquad (8)$$
where $\epsilon_i$ is a positive step size, and $\Gamma : \mathbb{R}^k \to \mathbb{R}^k$ is a projection onto some compact set $\Theta$ with a smooth boundary. The purpose of the projection is to facilitate convergence of the algorithm, by guaranteeing that the iterates remain bounded (this is a common stochastic approximation technique; Kushner and Yin, 2003). In practice, if $\Theta$ is chosen large enough so that it contains the local optima of $\Phi_\alpha(R;\theta)$, the projection would rarely occur, and would have a negligible effect on the algorithm.

Let $\hat{\Gamma}_\theta(\nu) \doteq \lim_{\delta \to 0} \frac{\Gamma(\theta + \delta \nu) - \theta}{\delta}$ denote an operator that, given a direction of change $\nu$ to the parameter $\theta$, returns a modified direction that keeps $\theta$ within $\Theta$. Consider the following ordinary differential equation:
$$\dot{\theta} = \hat{\Gamma}_\theta \left( \nabla_\theta \Phi_\alpha(R;\theta) \right), \quad \theta(0) \in \Theta. \qquad (9)$$
Let $K$ denote the set of all asymptotically stable equilibria of (9). The next theorem shows that under suitable technical conditions, the CVaRSGD algorithm converges to $K$ almost surely. The theorem is a direct application of Theorem 5.2.1 of Kushner and Yin (2003), and given here without proof.

Theorem 5. Consider the CVaRSGD algorithm (8). Let Assumptions 4, 5, and 6 hold, and assume that $\Phi_\alpha(R;\theta)$ is continuously differentiable in $\theta$. Also, assume that $\sum_{i=1}^\infty \epsilon_i = \infty$, $\sum_{i=1}^\infty \epsilon_i^2 < \infty$, and that $\sum_{i=1}^\infty \epsilon_i \left| \mathbb{E}[\Delta_{j;n_i}] - \frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta^i) \right| < \infty$ w.p. 1 for all $j$. Then $\theta^i \to K$ almost surely.

Note that from the discussion in Section 3, the requirement $\sum_{i=1}^\infty \epsilon_i \left| \mathbb{E}[\Delta_{j;n_i}] - \frac{\partial}{\partial \theta_j} \Phi_\alpha(R;\theta^i) \right| < \infty$ implies that we must have $\lim_{i \to \infty} n_i = \infty$. However, the rate of growth of $n_i$ could be very slow; for example, using the bound of Theorem 4, the requirement may be satisfied by choosing $\epsilon_i = 1/i$ and $n_i = (\log i)^4$.
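Putting the pieces together, the following sketch shows one way to implement the CVaRSGD iteration (8) on top of the gcvar() sketch above. The projection $\Gamma$ is taken here to be clipping onto a box, and the schedules for $\epsilon_i$ and $n_i$ follow the discussion after Theorem 5; the function, argument names, and default values are our own assumptions.

```python
import numpy as np

def cvar_sgd(sample_fn, alpha, theta0, iters=200, n0=100, box=10.0):
    """A sketch of CVaRSGD (Eq. 8); names and defaults are ours, not the paper's.

    sample_fn(theta, n) is assumed to return (scores, rewards) for n i.i.d. samples drawn
    under theta, in the format expected by gcvar(). The projection Gamma is implemented as
    clipping onto the box [-box, box]^k, a simple choice of compact set Theta.
    """
    theta = np.array(theta0, dtype=float)
    for i in range(1, iters + 1):
        n_i = n0 + int(np.log(i + 1) ** 4)        # slowly growing batch, cf. n_i = (log i)^4
        eps_i = 1.0 / i                           # step sizes with sum eps = inf, sum eps^2 < inf
        scores, rewards = sample_fn(theta, n_i)
        delta = gcvar(scores, rewards, alpha)     # GCVaR gradient estimate at theta^i
        theta = np.clip(theta + eps_i * delta, -box, box)   # ascent step followed by projection
    return theta
```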
## 5 Application to Reinforcement Learning

In this section we show that the CVaRSGD algorithm may be used in an RL policy-gradient type scheme, for optimizing performance criteria that involve the CVaR of the total return. We first describe some preliminaries and our RL setting, and then describe our algorithm.

We consider an episodic Markov Decision Problem (MDP), also known as a stochastic shortest path problem (Bertsekas 2012), in discrete time with a finite state space $S$ and a finite action space $A$. At time $t \in \{0, 1, 2, \ldots\}$ the state is $s_t$, and an action $a_t$ is chosen according to a parameterized policy $\pi_\theta$, which assigns a distribution over actions $f_{a|h}(a|h;\theta)$ according to the observed history of states $h_t = s_0, \ldots, s_t$. Then, an immediate random reward $\rho_t \sim f_{\rho|s,a}(\rho|s, a)$ is received, and the state transitions to $s_{t+1}$ according to the MDP transition probability $f_{s'|s,a}(s'|s, a)$. We denote by $\zeta_0$ the initial state distribution and by $s^*$ a terminal state, and we assume that for all $\theta$, $s^*$ is reached w.p. 1.

For some policy $\pi_\theta$, let $s_0, a_0, \rho_0, s_1, a_1, \rho_1, \ldots, s_\tau$ denote a state-action-reward trajectory from the MDP under that policy, that terminates at time $\tau$, i.e., $s_\tau = s^*$. The trajectory is a random variable, and we decompose it into a discrete part $Y \doteq \{s_0, a_0, s_1, a_1, \ldots, s_\tau\}$ and a continuous part $X \doteq \{\rho_0, \rho_1, \ldots, \rho_{\tau-1}\}$. (This decomposition is not restrictive, and is used only to illustrate the definitions of Section 2; one may alternatively consider a continuous state space, or discrete rewards, so long as Assumptions 4, 5, and 6 hold.) Our quantity of interest is the total reward along the trajectory, $R \doteq \sum_{t=0}^{\tau-1} \rho_t$.

In standard RL, the objective is to find the parameter $\theta$ that maximizes the expected return $V(\theta) = \mathbb{E}_\theta[R]$. Policy gradient methods (Baxter and Bartlett 2001; Marbach and Tsitsiklis 1998; Peters and Schaal 2008) use simulation to estimate $\partial V(\theta) / \partial \theta_j$, and then perform stochastic gradient ascent on the parameters $\theta$. In this work we are risk-sensitive, and our goal is to maximize the CVaR of the total return, $J(\theta) \doteq \Phi_\alpha(R;\theta)$. In the spirit of policy gradient methods, we estimate $\partial J(\theta) / \partial \theta_j$ from simulation, using GCVaR, and optimize $\theta$ using CVaRSGD. We now detail our approach.

First, it is well known (Marbach and Tsitsiklis 1998) that by the Markov property of the state transitions:
$$\partial \log f_Y(Y;\theta) / \partial \theta = \sum_{t=0}^{\tau-1} \partial \log f_{a|h}(a_t|h_t;\theta) / \partial \theta. \qquad (10)$$
Also, note that in our formulation we have
$$\partial \log f_{X|Y}(x_i|y_i;\theta) / \partial \theta = 0, \qquad (11)$$
since the reward does not depend on $\theta$ directly.

To apply CVaRSGD in the RL setting, at each iteration $i$ of the algorithm we simulate $n_i$ trajectories $x_1, y_1, \ldots, x_{n_i}, y_{n_i}$ of the MDP using policy $\pi_{\theta^i}$ (each $x_k$ and $y_k$ here together correspond to a single trajectory, as realizations of the random variables $X$ and $Y$ defined above). We then apply the GCVaR algorithm to obtain an estimate $\Delta_{j;n_i}$ of $\partial J(\theta) / \partial \theta_j$, using the simulated trajectories $x_1, y_1, \ldots, x_{n_i}, y_{n_i}$, Eq. (10), and Eq. (11). Finally, we update the policy parameter according to Eq. (8). Note that due to Eq. (10), the transition probabilities of the MDP, which are generally not known to the decision maker, are not required for estimating the gradient using GCVaR. Only policy-dependent terms are required.
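As a concrete illustration of Eq. (10) and Eq. (11), the sketch below computes the trajectory score function for the softmax policy representation used later in this section; the function names and the feature layout are our own assumptions. Its output is exactly the per-trajectory score that GCVaR consumes, since the reward term (11) contributes nothing.

```python
import numpy as np

def softmax_policy_score(features, actions, theta):
    """A sketch of Eq. (10) for a softmax policy (names and layout are ours).

    features : list over time steps t; features[t] is an array of shape (num_actions, k)
               holding phi(h_t, a) for every action a.
    actions  : list of chosen action indices a_t.
    theta    : policy parameter in R^k.
    Returns d/dtheta log f_Y(Y; theta), the sum over t of the per-step score functions.
    """
    score = np.zeros_like(theta, dtype=float)
    for phi_t, a_t in zip(features, actions):
        logits = phi_t @ theta
        probs = np.exp(logits - logits.max())     # numerically stable softmax
        probs /= probs.sum()
        # d/dtheta log pi(a_t | h_t) = phi(h_t, a_t) - sum_a pi(a | h_t) phi(h_t, a)
        score += phi_t[a_t] - probs @ phi_t
    return score
```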
We should remark that for the standard RL criterion $V(\theta)$, a Markov policy that depends only on the current state suffices to achieve optimality (Bertsekas 2012). For the CVaR criterion this is not necessarily the case. Bäuerle and Ott (2011) show that under certain conditions, an augmentation of the current state with a function of the accumulated reward suffices for optimality. In our simulations, we used a Markov policy, and still obtained useful and sensible results.

Assumptions 4, 5, and 6, which are required for convergence of the algorithm, are reasonable for the RL setting, and may be satisfied, for example, when $f_{\rho|s,a}(\rho|s, a)$ is smooth, and $\partial \log f_{a|h}(a|h;\theta) / \partial \theta_j$ is well defined and bounded. This last condition is standard in the policy gradient literature, and a popular policy representation that satisfies it is softmax action selection (Sutton et al. 2000; Marbach and Tsitsiklis 1998), given by
$$f_{a|h}(a|h;\theta) = \frac{\exp\left( \phi(h, a)^\top \theta \right)}{\sum_{a'} \exp\left( \phi(h, a')^\top \theta \right)},$$
where $\phi(h, a) \in \mathbb{R}^k$ are a set of $k$ features that depend on the history and action.

In some RL domains, the reward takes only discrete values. While this case is not specifically covered by the theory in this paper, one may add an arbitrarily small smooth noise to the total reward for our results to hold. Since such a modification has negligible impact on performance, this issue is of little importance in practice. In our experiments the reward was discrete, and we did not observe any problem.

### 5.1 Experimental Results

We examine Tetris as a test case for our algorithms. Tetris is a popular RL benchmark that has been studied extensively. The main challenge in Tetris is its large state space, which necessitates some form of approximation in the solution technique. Many approaches to learning controllers for Tetris are described in the literature, among them approximate value iteration (Tsitsiklis and Van Roy 1996), policy gradients (Kakade 2001; Furmston and Barber 2012), and modified policy iteration (Gabillon, Ghavamzadeh, and Scherrer 2013). The standard performance measure in Tetris is the expected number of cleared lines in the game. Here, we are interested in a risk-averse performance measure, captured by the CVaR of the total game score. Our goal in this section is to compare the performance of a policy optimized for the CVaR criterion versus a policy obtained using the standard policy gradient method. As we will show, optimizing the CVaR indeed produces a different policy, characterized by a risk-averse behavior.

Figure 1: GCVaR vs. policy gradient. (A,B) Average return (A) and CVaR ($\alpha = 0.05$) of the return (B) for CVaRSGD and standard policy gradient vs. iteration. (C) Histogram (counts from 10,000 independent runs) of the total return of the final policies. The lower plot is a zoom-in on the left tail, and clearly shows the risk-averse behavior of the CVaRSGD policy. (D) Final policy parameters. Note the difference in the Board Well feature, which encourages risk taking. (E) CVaR ($\alpha = 0.01$) of the return for CVaRSGD vs. iteration, with and without importance sampling.

We note that at present, the best results in the literature (for the standard performance measure) were obtained using a modified policy iteration approach (Gabillon, Ghavamzadeh, and Scherrer 2013), and not using policy gradients. We emphasize that our goal here is not to compete with those results, but rather to illustrate the application of CVaRSGD. We do point out, however, that whether the approach of Gabillon, Ghavamzadeh, and Scherrer (2013) could be extended to handle a CVaR objective is currently not known. We used the regular $10 \times 20$ Tetris board with the 7 standard shapes (a.k.a. tetrominos).
In order to induce risk-sensitive behavior, we modified the reward function of the game as follows. The score for clearing 1, 2, 3, and 4 lines is 1, 4, 8, and 16, respectively. In addition, we limited the maximum number of steps in the game to 1000. These modifications strengthened the difference between the risk-sensitive and nominal policies, as they induce a tradeoff between clearing many single lines with a low profit, or waiting for the more profitable, but less frequent, batches. We used the softmax policy, with the feature set of Thiery and Scherrer (2009).

Starting from a fixed policy parameter $\theta^0$, which was obtained by running several iterations of standard policy gradient (giving both methods a warm start), we ran both CVaRSGD and standard policy gradient for enough iterations such that both algorithms (approximately) converged. We set $\alpha = 0.05$ and $N = 1000$. Note that standard policy gradient is similar to CVaRSGD with $\alpha = 1$; however, it is common to subtract a baseline from the reward in order to reduce the variance of the gradient estimate. In our experiments, we used the average return $\langle r \rangle$ as a baseline, and our gradient estimate was $\frac{1}{N} \sum_{i=1}^N \frac{\partial \log f_Y(y_i;\theta)}{\partial \theta_j} \left( r(x_i, y_i) - \langle r \rangle \right)$.

In Fig. 1A and Fig. 1B we present the average return $V(\theta)$ and the CVaR of the return $J(\theta)$ for the policies of both algorithms at each iteration (evaluated by MC on independent trajectories). Observe that for CVaRSGD, the average return has been compromised for a higher CVaR value. This compromise is further explained in Fig. 1C, where we display the reward distribution of the final policies. It may be observed that the left-tail distribution of the CVaR policy is significantly lower than that of the standard policy. For the risk-sensitive decision maker, such results are very important, especially if the left tail contains catastrophic outcomes, as is common in many real-world domains, such as finance. To better understand the differences between the policies, we compare the final policy parameters $\theta$ in Fig. 1D. The most significant difference is in the parameter that corresponds to the Board Well feature. A well is a succession of unoccupied cells in a column, such that their left and right cells are both occupied. The controller trained by CVaRSGD has a smaller negative weight for this feature, compared to the standard controller, indicating that actions which create deep wells are repressed. Such wells may lead to a high reward when they get filled, but are risky as they heighten the board.

To demonstrate the importance of IS in optimizing the CVaR when $\alpha$ is small, we chose $\alpha = 0.01$ and $N = 200$, and compared CVaRSGD against its IS version, IS-CVaRSGD, described in (Tamar, Glassner, and Mannor 2014). As Fig. 1E shows, IS-CVaRSGD converged significantly faster, improving the convergence rate by more than a factor of 2. The full details are provided in (Tamar, Glassner, and Mannor 2014).

## 6 Conclusion and Future Work

We presented a novel LR-style formula for the gradient of the CVaR performance criterion. Based on this formula, we proposed a sampling-based gradient estimator, and a stochastic gradient descent procedure for CVaR optimization that is guaranteed to converge to a local optimum. To our knowledge, this is the first extension of the LR method to the CVaR performance criterion, and our results extend CVaR optimization to new domains. We evaluated our approach empirically in an RL domain: learning a risk-sensitive policy for Tetris.
To our knowledge, such a domain is beyond the reach of existing CVaR optimization approaches. Moreover, our empirical results show that optimizing the CVaR indeed results in useful risk-sensitive policies, and motivates the use of simulation-based optimization for risk-sensitive decision making.

## Acknowledgments

The authors thank Odalric-Ambrym Maillard for many helpful discussions. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 306638.

## References

Agarwal, V., and Naik, N. Y. 2004. Risks and portfolio decisions involving hedge funds. Review of Financial Studies 17(1):63–98.

Bardou, O.; Frikha, N.; and Pagès, G. 2009. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications 15(3):173–210.

Bäuerle, N., and Ott, J. 2011. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research 74(3):361–379.

Baxter, J., and Bartlett, P. L. 2001. Infinite-horizon policy-gradient estimation. JAIR 15:319–350.

Bertsekas, D. P. 2012. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 4th edition.

Borkar, V., and Jain, R. 2014. Risk-constrained Markov decision processes. IEEE Transactions on Automatic Control PP(99):1–1.

Borkar, V. S. 2001. A sensitivity formula for risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters 44(5):339–346.

Fu, M. C. 2006. Gradient estimation. In Henderson, S. G., and Nelson, B. L., eds., Simulation, volume 13 of Handbooks in Operations Research and Management Science. Elsevier. 575–616.

Furmston, T., and Barber, D. 2012. A unifying perspective of parametric policy search methods for Markov decision processes. In Advances in Neural Information Processing Systems 25.

Gabillon, V.; Ghavamzadeh, M.; and Scherrer, B. 2013. Approximate dynamic programming finally performs well in the game of Tetris. In Advances in Neural Information Processing Systems 26.

Glynn, P. W. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM 33(10):75–84.

Hong, L. J., and Liu, G. 2009. Simulating sensitivities of conditional value at risk. Management Science.

Iyengar, G., and Ma, A. 2013. Fast gradient descent method for mean-CVaR optimization. Annals of Operations Research 205(1):203–212.

Kakade, S. 2001. A natural policy gradient. In Advances in Neural Information Processing Systems 14.

Kushner, H., and Yin, G. 2003. Stochastic Approximation and Recursive Algorithms and Applications. Springer Verlag.

Marbach, P., and Tsitsiklis, J. N. 1998. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control 46(2):191–209.

Morimura, T.; Sugiyama, M.; Kashima, H.; Hachiya, H.; and Tanaka, T. 2010. Nonparametric return distribution approximation for reinforcement learning. In International Conference on Machine Learning, 799–806.

Peters, J., and Schaal, S. 2008. Reinforcement learning of motor skills with policy gradients. Neural Networks 21(4):682–697.

Prashanth, L., and Ghavamzadeh, M. 2013. Actor-critic algorithms for risk-sensitive MDPs. In Advances in Neural Information Processing Systems 26.

Prashanth, L. 2014. Policy gradients for CVaR-constrained MDPs. In International Conference on Algorithmic Learning Theory.

Rockafellar, R. T., and Uryasev, S. 2000. Optimization of conditional value-at-risk. Journal of Risk 2:21–42.

Rubinstein, R. Y., and Kroese, D. P. 2011. Simulation and the Monte Carlo Method. John Wiley & Sons.
Scaillet, O. 2004. Nonparametric estimation and sensitivity analysis of expected shortfall. Mathematical Finance.

Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. Cambridge Univ Press.

Sutton, R. S.; McAllester, D.; Singh, S.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 13.

Tamar, A.; Di Castro, D.; and Mannor, S. 2012. Policy gradients with variance related risk criteria. In International Conference on Machine Learning.

Tamar, A.; Glassner, Y.; and Mannor, S. 2014. Optimizing the CVaR via sampling. arXiv:1404.3862.

Thiery, C., and Scherrer, B. 2009. Improvements on learning Tetris with cross entropy. International Computer Games Association Journal 32.

Tsitsiklis, J. N., and Van Roy, B. 1996. Feature-based methods for large scale dynamic programming. Machine Learning 22(1-3):59–94.