# Divergence-Augmented Policy Optimization

Qing Wang (Huya AI, Guangzhou, China) · Yingru Li (The Chinese University of Hong Kong, Shenzhen, China) · Jiechao Xiong (Tencent AI Lab, Shenzhen, China) · Tong Zhang (The Hong Kong University of Science and Technology, Hong Kong, China)

*The work was done when the first author was at Tencent AI Lab.*

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

## Abstract

In deep reinforcement learning, policy optimization methods need to deal with issues such as function approximation and the reuse of off-policy data. Standard policy gradient methods do not handle off-policy data well, leading to premature convergence and instability. This paper introduces a method to stabilize policy optimization when off-policy data are reused. The idea is to include a Bregman divergence between the behavior policy that generates the data and the current policy to ensure small and safe policy updates with off-policy data. The Bregman divergence is calculated between the state distributions of the two policies, instead of only on the action probabilities, leading to a divergence-augmentation formulation. Empirical experiments on Atari games show that in the data-scarce scenario, where the reuse of off-policy data becomes necessary, our method can achieve better performance than other state-of-the-art deep reinforcement learning algorithms.

## 1 Introduction

In recent years, many algorithms based on policy optimization have been proposed for deep reinforcement learning (DRL), leading to great successes in Go, video games, and robotics (Silver et al., 2016; Mnih et al., 2016; Schulman et al., 2015, 2017b). Real-world applications of policy-based methods commonly involve function approximation and data reuse. Typically, the reused data are generated by an earlier version of the policy, leading to off-policy learning. It is known that these issues may cause premature convergence and instability for policy gradient methods (Sutton et al., 2000; Sutton and Barto, 2017).

A standard technique that allows policy optimization methods to handle off-policy data is to use importance sampling to correct trajectories from the behavior policy that generates the data to the target policy (e.g., Retrace (Munos et al., 2016) and V-trace (Espeholt et al., 2018)). The efficiency of these methods depends on the divergence between the behavior policy and the target policy. Moreover, to improve the stability of training, one may introduce a regularization term (e.g., Shannon-Gibbs entropy in (Mnih et al., 2016)), or use a proximal objective of the original policy gradient loss (e.g., clipping in (Schulman et al., 2017b; Wang et al., 2016a)). Although the widely adopted method of entropy regularization can stabilize the optimization process (Mnih et al., 2016), the additional entropy regularization alters the learning objective and prevents the algorithm from converging to the optimal action for each state. Even in the simple case of bandit problems, a monotonically diminishing regularization may fail to converge to the best arm (Cesa-Bianchi et al., 2017).

In this work, we propose a method for policy optimization by adding a Bregman divergence term, which leads to more stable and sample-efficient off-policy learning. The Bregman divergence
constraint is widely used to explore and exploit optimally in mirror descent methods (Nemirovsky and Yudin, 1983), in which a specific form of divergence can attain the optimal rate of regret (sample efficiency) for bandit problems (Audibert et al., 2011; Bubeck and Cesa-Bianchi, 2012). In contrast to the traditional approach of constraining the divergence between the target policy and the behavior policy conditioned on each state (Schulman et al., 2015), we consider the divergence over the joint state-action space. We show that the policy optimization problem with a Bregman divergence on the state-action space is equivalent to the standard policy gradient method with a divergence-augmented advantage. Under this view, the divergence-augmented policy optimization method not only considers the divergence on the current state but also takes into account the discrepancy of the policies on future states, and thus can provide a better constraint on the change of policy and encourage deeper exploration.

We experiment with the proposed method on the commonly used Atari 2600 environment from the Arcade Learning Environment (ALE) (Bellemare et al., 2013). Empirical results show that the divergence-augmented policy optimization method performs better than state-of-the-art algorithms under data-scarce scenarios, i.e., when the sample generating speed is limited and samples in replay memory are reused multiple times. We also conduct a comparative study of the major effects of improvement on these games.

The article is organized as follows: we give the basic background and notations in Section 2. The main method of divergence-augmented policy optimization is presented in Section 3, with connections to previous works discussed in Section 4. Empirical results and studies can be found in Section 5. We conclude this work with a short discussion in Section 6.

## 2 Preliminaries

In this section, we state the basic definition of the Markov decision process considered in this work, as well as the Bregman divergence used in the following discussions.

### 2.1 Markov Decision Process

We consider a Markov decision process (MDP) with infinite horizon and discounted reward, denoted by $M = (S, A, P, r, d_0, \gamma)$, where $S$ is the finite state space, $A$ is the finite action space, and $P : S \times A \to \Delta(S)$ is the transition function, where $\Delta(S)$ denotes the space of all probability distributions on $S$. A reward function is denoted by $r : S \times A \to \mathbb{R}$. The distribution of the initial state $s_0$ is denoted by $d_0 \in \Delta(S)$, and the discount factor by $\gamma \in (0, 1)$. A stochastic policy is denoted by $\pi : S \to \Delta(A)$, and the space of all policies by $\Pi$.

We use the following standard notation for the state value $V^\pi(s_t)$, action value $Q^\pi(s_t, a_t)$, and advantage $A^\pi(s_t, a_t)$, defined as
$$V^\pi(s_t) = E_{\pi|s_t} \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}), \qquad Q^\pi(s_t, a_t) = E_{\pi|s_t, a_t} \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}), \qquad A^\pi(s_t, a_t) = Q^\pi(s_t, a_t) - V^\pi(s_t),$$
where $E_{\pi|s_t}$ means $a_l \sim \pi(\cdot|s_l)$, $s_{l+1} \sim P(\cdot|s_l, a_l)$ for $l \geq t$, and $E_{\pi|s_t, a_t}$ means $s_{l+1} \sim P(\cdot|s_l, a_l)$, $a_{l+1} \sim \pi(\cdot|s_{l+1})$ for $l \geq t$.

We also define the space of policy-induced state-action distributions under $M$ as
$$\Pi_M = \Big\{ \mu \in \Delta(S \times A) : \sum_a \mu(s', a) = (1-\gamma)\, d_0(s') + \gamma \sum_{s,a} P(s'|s,a)\, \mu(s,a), \ \forall s' \in S \Big\}. \quad (1)$$
We use the notation $\mu^\pi$ for the state-action distribution induced by $\pi$. On the other hand, for each $\mu \in \Pi_M$, there also exists a unique policy $\pi_\mu(a|s) = \mu(s,a) / \sum_b \mu(s,b)$ which induces $\mu$. We define the state distribution $d^\pi$ as $d^\pi(s) = (1-\gamma)\, E_{\tau|\pi} \sum_{t=0}^{\infty} \gamma^t \mathbf{1}(s_t = s)$. Then we have $\mu^\pi(s,a) = d^\pi(s)\, \pi(a|s)$. We sometimes write $\pi_{\mu_t}$ as $\pi_t$ and $d^{\pi_t}$ as $d_t$ when there is no ambiguity.
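For illustration only, the following is a minimal NumPy sketch (not part of the method above; the toy MDP and all names are made up for this example) that iterates the fixed-point equation in (1) for a small tabular MDP and recovers the inducing policy $\pi_\mu(a|s) = \mu(s,a)/\sum_b \mu(s,b)$.

```python
import numpy as np

def occupancy_measure(P, pi, d0, gamma, n_iter=2000):
    """Iterate d(s') = (1-gamma) d0(s') + gamma * sum_{s,a} P(s'|s,a) mu(s,a),
    with mu(s,a) = d(s) * pi(a|s), until it converges to mu^pi of Eq. (1)."""
    S, A = pi.shape
    mu = np.full((S, A), 1.0 / (S * A))
    for _ in range(n_iter):
        d = (1 - gamma) * d0 + gamma * np.einsum('sap,sa->p', P, mu)
        mu = d[:, None] * pi
    return mu

# Made-up 2-state, 2-action MDP; P[s, a, s'] is the transition probability.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
pi = np.array([[0.7, 0.3], [0.4, 0.6]])      # pi(a|s)
d0 = np.array([1.0, 0.0])

mu = occupancy_measure(P, pi, d0, gamma=0.99)
pi_recovered = mu / mu.sum(axis=1, keepdims=True)   # pi_mu(a|s) = mu(s,a) / sum_b mu(s,b)
assert np.allclose(pi_recovered, pi, atol=1e-6)
```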
In this paper, we mainly focus on the performance of a policy $\pi$, defined as
$$J(\pi) = (1-\gamma)\, E_{\tau|\pi} \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) = E_{d^\pi, \pi}\, r(s,a), \quad (2)$$
where $E_{\tau|\pi}$ means $s_0 \sim d_0$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim P(\cdot|s_t, a_t)$ for $t \geq 0$. We use the notation $E_{d,\pi} = E_{s \sim d(\cdot),\, a \sim \pi(\cdot|s)}$ for brevity.

### 2.2 Bregman Divergence

We define the Bregman divergence (Bregman, 1967) as follows (e.g., Definition 5.3 in (Bubeck and Cesa-Bianchi, 2012)). For $D \subset \mathbb{R}^d$ an open convex set with closure $\bar{D}$, we consider a Legendre function $F : \bar{D} \to \mathbb{R}$ such that (1) $F$ is strictly convex and admits continuous first partial derivatives on $D$, and (2) $\lim_{x \to \bar{D}\setminus D} \|\nabla F(x)\| = +\infty$. For such a function $F$, we define the Bregman divergence $D_F : \bar{D} \times D \to \mathbb{R}$ as
$$D_F(x, y) = F(x) - F(y) - \langle \nabla F(y), x - y\rangle.$$
The inner product is defined as $\langle x, y\rangle = \sum_i x_i y_i$. For $K \subseteq \bar{D}$ with $K \cap D \neq \emptyset$, the Bregman projection $z = \arg\min_{x \in K} D_F(x, y)$ exists uniquely for all $y \in D$. Specifically, for $F(x) = \sum_i x_i \log(x_i) - \sum_i x_i$, we recover the Kullback-Leibler (KL) divergence,
$$D_{KL}(\mu', \mu) = \sum_{s,a} \mu'(s,a) \log\frac{\mu'(s,a)}{\mu(s,a)},$$
for $\mu, \mu' \in \Delta(S \times A)$. To measure the distance between two policies $\pi, \pi' \in \Pi$, we also use the conditional Bregman divergence associated with a state distribution $d$, denoted as
$$D^d_F(\pi', \pi) = \sum_s d(s)\, D_F(\pi'(\cdot|s), \pi(\cdot|s)). \quad (3)$$
(Note that $D^d_F$ may not itself be a Bregman divergence.)

## 3 Divergence-Augmented Policy Optimization

In this section, we present the proposed method from the motivation of mirror descent, and then discuss the parametrization and off-policy correction employed in the practical learning algorithm.

### 3.1 Policy Optimization and Mirror Descent

The mirror descent (MD) method (Nemirovsky and Yudin, 1983) is a central topic in the optimization and online learning literature. As a first-order optimization method, mirror descent can recover several interesting algorithms discovered previously (Sutton et al., 2000; Kakade, 2002; Peters et al., 2010; Schulman et al., 2015). On the other hand, as an online learning method, online (stochastic) mirror descent can achieve (near-)optimal sample efficiency for a wide range of problems (Audibert and Bubeck, 2009; Audibert et al., 2011; Zimin and Neu, 2013). In this work, following a series of previous works (Zimin and Neu, 2013; Neu et al., 2017), we investigate the (online) mirror descent method for policy optimization.

We denote the state-action distribution at iteration $t$ as $\mu_t$, and $\ell_t(\mu) = \langle g_t, \mu\rangle$ as the linear loss function for $\mu$ at iteration $t$. Unless otherwise noted, we consider the negative reward as the loss objective, $\ell_t(\mu) = \langle -r, \mu\rangle$, which also corresponds to the policy performance $\ell_t(\mu) = -J(\pi_\mu)$ by Formula (2). We consider the mirror map method associated with the Legendre function $F$ as
$$\nabla F(\tilde\mu_{t+1}) = \nabla F(\mu_t) - \eta g_t, \quad (4)$$
$$\mu_{t+1} = \Pi^F_{\Pi_M}(\tilde\mu_{t+1}), \quad (5)$$
where $\tilde\mu_{t+1} \in \Delta(S \times A)$, $g_t = \nabla \ell_t(\mu_t)$, and $\Pi^F_{\Pi_M}$ denotes the Bregman projection onto $\Pi_M$. It is well known (Beck and Teboulle, 2003) that an equivalent formulation of the mirror map (4)-(5) is
$$\mu_{t+1} = \arg\min_{\mu \in \Pi_M} D_F(\mu, \tilde\mu_{t+1}) \quad (6)$$
$$\phantom{\mu_{t+1}} = \arg\min_{\mu \in \Pi_M} D_F(\mu, \mu_t) + \eta\langle g_t, \mu\rangle. \quad (7)$$
The former formulation (6) takes the view of non-linear sub-gradient projection in convex optimization, while the latter formulation (7) can be interpreted as a regularized optimization and is the usual definition of mirror descent (Nemirovsky and Yudin, 1983; Beck and Teboulle, 2003; Bubeck, 2015). In this work, we mostly investigate the approximate algorithm in the latter formulation (7).
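As a concrete special case of the update (7), the following minimal sketch (illustrative only; it operates on a plain probability simplex rather than the constrained set $\Pi_M$, whose projection is discussed in Section 3.2) performs one mirror descent step with the negative-entropy Legendre function, where the mirror-map view (4)-(5) and the proximal view (7) coincide and reduce to an exponentiated-gradient update.

```python
import numpy as np

def md_step_negative_entropy(mu, g, eta):
    """argmin_{mu' in simplex} D_KL(mu', mu) + eta * <g, mu'>,
    i.e. Eq. (7) with F(x) = sum_i x_i log x_i and the simplex in place of Pi_M."""
    logits = np.log(mu) - eta * g        # mirror map step (4): grad F(mu) - eta * g
    w = np.exp(logits - logits.max())    # subtract the max for numerical stability
    return w / w.sum()                   # KL (Bregman) projection back onto the simplex

mu_t = np.array([0.25, 0.25, 0.5])
g_t = np.array([-1.0, 0.0, 0.5])         # e.g. g_t = -r for a 3-armed bandit
mu_next = md_step_negative_entropy(mu_t, g_t, eta=0.1)  # mass shifts toward low loss
```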
### 3.2 Parametric Policy-based Algorithm

In the mirror descent view of policy optimization on the state-action space as in Formula (7), we need to compute the projection of $\mu$ onto the space $\Pi_M$. For the special case of the KL-divergence on $\mu$, the sub-problem of finding the minimum in (7) can be solved efficiently, assuming knowledge of the transition function $P$ (see Proposition 1 in (Zimin and Neu, 2013)). However, for a general divergence and real-world problems with unknown transition matrices, the projection in (7) is non-trivial to implement. In this section, we consider direct optimization in the (parametric) policy space without explicit projection. Specifically, we consider $\mu^\pi$ as a function of $\pi$, and $\pi$ parametrized as $\pi_\theta$. Formula (7) can then be written as
$$\pi_{t+1} = \arg\min_{\pi} D_F(\mu^\pi, \mu_t) + \eta\langle g_t, \mu^\pi\rangle. \quad (8)$$
Instead of solving this globally, we approximate Formula (8) with gradient descent on $\pi$. From the celebrated policy gradient theorem (Sutton et al., 2000), we have the following lemma:

**Lemma 1.** (Policy Gradient Theorem (Sutton et al., 2000)) For $d^\pi$ and $\mu^\pi$ defined previously, the following equation holds for any state-action function $f : S \times A \to \mathbb{R}$:
$$\sum_{s,a} f(s,a)\, \nabla_\theta \mu^\pi(s,a) = \sum_{s,a} d^\pi(s)\, Q^\pi(f)(s,a)\, \nabla_\theta \pi(a|s),$$
where $Q^\pi$ is defined as an operator such that $Q^\pi(f)(s,a) = E_{\pi|s_t=s, a_t=a} \sum_{l=0}^{\infty} \gamma^l f(s_{t+l}, a_{t+l})$.

Decomposing the loss and the divergence in (8) into two parts, we have
$$\nabla_\theta \langle g_t, \mu^\pi\rangle = \langle d^\pi Q^\pi(g_t), \nabla_\theta \pi(a|s)\rangle, \quad (9)$$
which is the usual policy gradient, and
$$\nabla_\theta D_F(\mu^\pi, \mu_t) = \langle \nabla F(\mu^\pi) - \nabla F(\mu_t), \nabla_\theta \mu^\pi\rangle = \langle d^\pi Q^\pi(\nabla F(\mu^\pi) - \nabla F(\mu_t)), \nabla_\theta \pi(a|s)\rangle. \quad (10)$$
Similarly, we have the policy gradient for the conditional divergence (3) as
$$\nabla_\theta D^{d_t}_F(\pi, \pi_t) = \langle d_t(\nabla F(\pi) - \nabla F(\pi_t)), \nabla_\theta \pi(a|s)\rangle,$$
which does not involve a discounted sum, since $d_t$ is fixed and independent of $\pi = \pi_\mu$.

### 3.3 Off-policy Correction

In this section, we discuss a practical method for estimating $Q^\pi(f)$ under a behavior policy $\pi_t$. In distributed reinforcement learning with asynchronous gradient updates, the policy $\pi_t$ which generated the trajectories may deviate from the policy $\pi_\theta$ currently being optimized. Thus off-policy correction is usually needed for the robustness of the algorithm (e.g., V-trace as in IMPALA (Espeholt et al., 2018)). Consider
$$\sum_{s,a} d^\pi(s)\, Q^\pi(f)(s,a)\, \nabla_\theta \pi(a|s) = E_{(s,a)\sim \pi d^\pi}\, Q^\pi(f)(s,a)\, \nabla_\theta \log\pi(a|s) = E_{(s,a)\sim \pi_t d^{\pi_t}}\, \frac{d^\pi(s)}{d^{\pi_t}(s)}\, \frac{\pi(a|s)}{\pi_t(a|s)}\, Q^\pi(f)(s,a)\, \nabla_\theta \log\pi(a|s)$$
for $f = g_t$ or $f = \nabla F(\mu^\pi) - \nabla F(\mu_t)$. We would like to have an accurate estimate of $Q^\pi(g_t)$ in (9) and of $Q^\pi(\nabla F(\mu^\pi) - \nabla F(\mu_t))$ in (10), and to correct the deviation from $d^{\pi_t}$ to $d^\pi$ and from $\pi_t$ to $\pi$.

For the estimation of $Q^\pi(f)$ under a behavior policy $\pi_t$, possible methods include Retrace (Munos et al., 2016), which provides an estimator of the state-action value $Q^\pi(f)$, and V-trace (Espeholt et al., 2018), which provides an estimator of the state value $E_{a\sim\pi} Q^\pi(f)(s,a)$. In this work, we utilize the V-trace estimate (Section 4.1 of (Espeholt et al., 2018)) $v_{s_i} = v_i$ along a trajectory starting at $(s_i, a_i) = (s, a)$ under $\pi_t$. Details of the multi-step Q-value estimation can be found in Appendix A. With the value estimate $v_s$, $Q^\pi(g_t)$ is estimated with
$$\hat{A}_{s,a} = r_i + \gamma v_{i+1} - V_\theta(s_i). \quad (11)$$
We subtract a baseline $V_\theta(s_i)$ to reduce the variance of the estimate, since $E_{\pi_t, d_t} \frac{\pi_\theta}{\pi_t} V_\theta(s)\, \nabla_\theta \log\pi_\theta = 0$. For the estimation of $Q^\pi(\nabla F(\mu^\pi) - \nabla F(\mu_t))$, we use the $n$-step truncated importance sampling estimate
$$\hat{D}_{s,a} = f(s_i, a_i) + \sum_{j=1}^{n} \gamma^j \Big(\prod_{k=0}^{j-1} c_{i+k}\Big)\, \rho_{i+j}\, f(s_{i+j}, a_{i+j}), \quad (12)$$
in which we use the notation $c_j = \min\big(\bar{c}_D, \frac{\pi_\theta(a_j|s_j)}{\pi_t(a_j|s_j)}\big)$ and $\rho_j = \min\big(\bar{\rho}_D, \frac{\pi_\theta(a_j|s_j)}{\pi_t(a_j|s_j)}\big)$. The formula also corresponds to V-trace under the condition $V(\cdot) \equiv 0$. For an RNN model trained on continuous roll-out samples, we set $n$ equal to the maximum length until the end of the roll-out.
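For illustration only, the following NumPy sketch (not the implementation used in the experiments) computes the estimates (11) and (12) along one roll-out; here `v_trace` stands for the V-trace targets $v_{s_i}$, which we assume were computed separately (e.g., as in Appendix A), and `bootstrap_value` is an assumed bootstrap for the state after the roll-out.

```python
import numpy as np

def multistep_estimates(f, ratio, rewards, values, v_trace, bootstrap_value,
                        gamma=0.99, c_bar=1.0, rho_bar=1.0):
    """f[i] = f(s_i, a_i); ratio[i] = pi_theta(a_i|s_i) / pi_t(a_i|s_i);
    values[i] = V_theta(s_i); v_trace[i] = v_{s_i}."""
    n = len(f)
    c = np.minimum(c_bar, ratio)       # c_j   = min(c_bar_D,   pi_theta / pi_t)
    rho = np.minimum(rho_bar, ratio)   # rho_j = min(rho_bar_D, pi_theta / pi_t)

    # Eq. (11): A_hat_i = r_i + gamma * v_{i+1} - V_theta(s_i)
    v_next = np.append(v_trace[1:], bootstrap_value)
    A_hat = rewards + gamma * v_next - values

    # Eq. (12), computed backwards through the roll-out:
    # D_hat_i = f_i + gamma * c_i * (D_hat_{i+1} + (rho_{i+1} - 1) * f_{i+1})
    D_hat = np.array(f, dtype=float)
    for i in reversed(range(n - 1)):
        D_hat[i] = f[i] + gamma * c[i] * (D_hat[i + 1] + (rho[i + 1] - 1.0) * f[i + 1])
    return A_hat, D_hat
```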
For the correction of the state distribution ratio $d^\pi(s)/d^{\pi_t}(s)$, previous solutions include the use of emphatic algorithms as in (Sutton et al., 2016), or an estimate of the state density ratio as in (Liu et al., 2018). However, in our experience, estimating the density ratio introduces additional error, which may lead to worse performance in practice. Therefore, in this paper we propose a different solution: we restrict our attention to the correction from $\pi_t$ to $\pi$ via importance sampling and omit the ratio $d^\pi/d^{\pi_t}$ in the algorithm. This introduces a bias into the gradient estimation, which we handle with a new method proposed in this paper. Specifically, we show that although the omission of the state ratio introduces a bias in the gradient, the bias can be bounded by the regularization term of the conditional KL divergence (see Appendix B). Therefore, by explicitly adding a KL divergence regularization, we can effectively control the degree of off-policy bias caused by $d^\pi/d^{\pi_t}$, since a small regularization value implies a small bias. This approach naturally combines mirror descent with KL divergence regularization, leading to a more stable algorithm that is robust to off-policy data, as we will demonstrate with empirical experiments.

The final loss consists of the policy loss $L_\pi(\theta)$ and the value loss $L_v(\theta)$. To be specific, the gradient of the policy loss is defined as
$$\nabla_\theta L_\pi(\theta) = E_{\pi_t, d_t}\, \frac{\pi}{\pi_t}\, (\hat{D}_{s,a} - \eta \hat{A}_{s,a})\, \nabla_\theta \log\pi. \quad (13)$$
We can also use proximal methods like PPO (Schulman et al., 2017b) in conjunction with divergence augmentation; a practical implementation is elaborated later in Formula (19). In addition to the policy loss, we also update $V_\theta$ with the value gradient defined as
$$\nabla_\theta L_v(\theta) = E_{\pi_t, d_t}\, \frac{\pi}{\pi_t}\, (V_\theta(s) - v_s)\, \nabla_\theta V_\theta(s), \quad (14)$$
where $v_s = v_{s_i}$ is the multi-step value estimate from V-trace. The parameter $\theta$ is then updated with a mixture of the policy loss and the value loss,
$$\theta \leftarrow \theta - \alpha_t\big(\nabla_\theta L_\pi(\theta) + b\, \nabla_\theta L_v(\theta)\big), \quad (15)$$
in which $\alpha_t$ is the current learning rate and $b$ is the loss scaling coefficient. The algorithm is summarized in Algorithm 1.

**Algorithm 1** Divergence-Augmented Policy Optimization (DAPO)

Input: $D_F(\mu', \mu)$, total iterations $T$, batch size $M$, learning rate $\alpha_t$.
Initialize: randomly initialize $\theta_0$.
for $t = 0$ to $T$ do (in parallel)
  Use $\pi_t = \pi_{\theta_t}$ to generate trajectories.
  for $m = 1$ to $M$ do
    Sample $(s_i, a_i) \in S \times A$ with probability $d_t \pi_t$.
    Estimate the state value $v_{s_i}$ (e.g., by V-trace).
    Calculate the Q-value estimate $\hat{A}_{s,a}$ (11) and the divergence estimate $\hat{D}_{s,a}$ (12):
      $\hat{A}_{s,a} = r_i + \gamma v_{i+1} - V_\theta(s_i)$,
      $\hat{D}_{s,a} = f(s_i, a_i) + \sum_{j=1}^{n} \gamma^j \big(\prod_{k=0}^{j-1} c_{i+k}\big)\, \rho_{i+j}\, f(s_{i+j}, a_{i+j})$.
    Update $\theta$ with respect to the policy loss (13, optionally 19) and the value loss (14):
      $\theta \leftarrow \theta - \alpha_t\big(\nabla_\theta L_\pi(\theta) + b\, \nabla_\theta L_v(\theta)\big)$.
  end for
  Set $\theta_{t+1} = \theta$.
end for
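For concreteness, here is a minimal sketch (illustrative only, not the released implementation) of how the per-sample update of Eqs. (13)-(15) can be assembled once $\hat{A}_{s,a}$, $\hat{D}_{s,a}$ and the V-trace targets are available; in an automatic-differentiation framework one would instead build a surrogate loss whose gradient matches (13)-(14), treating the importance ratio and the estimates as constants (stop-gradient).

```python
import numpy as np

def dapo_gradient_coefficients(logp, logp_behavior, A_hat, D_hat, V, v_target, eta):
    """Per-sample scalar coefficients for the DAPO update.

    The policy gradient (13) averages policy_coef[i] * grad log pi(a_i|s_i),
    the value gradient (14) averages value_coef[i] * grad V_theta(s_i),
    and the parameters are updated as in (15):
        theta <- theta - alpha_t * (policy_grad + b * value_grad)."""
    w = np.exp(logp - logp_behavior)            # importance ratio pi / pi_t
    policy_coef = w * (D_hat - eta * A_hat)     # weight on grad log pi,  Eq. (13)
    value_coef = w * (V - v_target)             # weight on grad V_theta, Eq. (14)
    return policy_coef, value_coef              # combine with loss scaling b as in (15)
```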
## 4 Related Works

The policy performance in Equation (2) and the well-known policy difference lemma (Kakade and Langford, 2002) play a fundamental role in policy-based reinforcement learning (e.g., TRPO and PPO (Schulman et al., 2015, 2017b)). The gradient with respect to the policy performance and the policy difference provides a natural direction for policy optimization. To restrict the changes in each policy improvement step, as well as to encourage exploration at the early stage, constraint-based policy optimization methods limit the changes in the policy by constraining the divergence between the behavior policy and the current policy. The use of entropy maximization in reinforcement learning dates back to the work of Williams and Peng (1991), and methods with relative entropy regularization include Peters et al. (2010) and Schulman et al. (2015). The relationship between these methods and the mirror descent method has been discussed in Neu et al. (2017).

With the notations in this work, consider the natural choice of $F$ as the negative Shannon entropy, $F(x) = \sum_i x_i \log(x_i)$; then $D_F(\cdot,\cdot)$ becomes the KL-divergence $D_{KL}(\cdot,\cdot)$. By the equivalence of the sub-gradient projection (6) and mirror descent (7), mirror descent policy optimization with the KL-divergence can be written as
$$\mu_{t+1} = \arg\min_{\mu \in \Pi_M} D_{KL}(\mu, \tilde\mu_{t+1}) = \arg\min_{\mu \in \Pi_M} D_{KL}(\mu, \mu_t) + \eta\langle g_t, \mu\rangle. \quad (16)$$
Under slightly different settings, this learning objective is the regularized version of the constrained optimization problem considered in Relative Entropy Policy Search (REPS) (Peters et al., 2010); and for $\ell_t(\mu)$ depending on $t$, Equation (16) can also recover the O-REPS method considered in Zimin and Neu (2013). On the other hand, as the KL-divergence (and the Bregman divergence in general) is asymmetric, we can also replace $D_F(x, y)$ in either formulation (6, 7) with the reverse KL $D_{KL}(y, x)$, which results in different iterative algorithms (as the reverse KL is no longer a Bregman divergence, the equivalence of Formulas (6) and (7) no longer holds). Replacing $D_F(\mu, \tilde\mu_{t+1})$ with $D_{KL}(\tilde\mu_{t+1}, \mu)$ in the sub-gradient projection (6), we obtain the mirror map method with reverse KL,
$$\mu_{t+1} = \arg\min_{\mu \in \Pi_M} D_{KL}(\tilde\mu_{t+1}, \mu), \quad (17)$$
which is essentially the MPO algorithm (Abdolmaleki et al., 2018) under a probabilistic inference perspective, and the MARWIL algorithm (Wang et al., 2018) when learning from off-policy data. Similarly, replacing $D_F(\mu, \mu_t)$ with $D_{KL}(\mu_t, \mu)$ in mirror descent (7), we obtain the mirror descent method with reverse KL,
$$\mu_{t+1} = \arg\min_{\mu \in \Pi_M} D_{KL}(\mu_t, \mu) + \eta\langle g_t, \mu\rangle, \quad (18)$$
which can approximately recover the TRPO optimization objective (Schulman et al., 2015) (if the relative entropy between the two state-action distributions $D_{KL}(\mu_t, \mu)$ in (18) is replaced by the conditional entropy $D^{d_t}_{KL}(\pi_t, \pi)$; see also Section 5.1 of Neu et al. (2017)).

Besides, we note that there are other choices of constraint for policy optimization as well. For example, in (Lee et al., 2018; Chow et al., 2018; Lee et al., 2019), a Tsallis entropy is used to promote sparsity in the policy distribution, and in (Belousov and Peters, 2017), the authors generalize KL, Hellinger distance, and reverse KL to the class of f-divergences. In preliminary results, we found that divergences based on a 0-potential (Audibert et al., 2011; Bubeck and Cesa-Bianchi, 2012) are also promising for policy optimization. We leave this for future research. For multi-step KL-divergence-regularized policy optimization, we note that the formulation also corresponds to the KL-divergence-augmented return considered previously in several works (Fox et al. (2015), Section 3 of Schulman et al. (2017a)), although in Schulman et al. (2017a) the authors use a fixed behavior policy instead of $\pi_t$ as in ours. More broadly, the Shannon-entropy-augmented return dates back to earlier works (Kappen, 2005; Todorov, 2007; Ziebart et al., 2008; Nachum et al., 2017), and is a central topic in soft reinforcement learning (Haarnoja et al., 2017, 2018).
The mirror descent method was originally introduced in the seminal work of Nemirovsky and Yudin (1983) as a convex optimization method. The online stochastic mirror descent method also has alternative views, e.g., Follow the Regularized Leader (McMahan, 2011) and the Proximal Point Algorithm (Rockafellar, 1976). For more discussions on mirror descent and online learning, we refer interested readers to the work of Cesa-Bianchi and Lugosi (2006) and Bubeck and Cesa-Bianchi (2012).

[Figure 1: Relative score improvement of PPO+DA compared with PPO on 58 Atari environments, grouped into Easy Exploration, Hard Exploration (Dense Reward), Hard Exploration (Sparse Reward), and Unknown categories. The relative performance is calculated as $\frac{\text{proposed} - \text{baseline}}{\max(\text{human},\, \text{baseline}) - \text{random}}$ (Wang et al., 2016b). The Atari games are categorized according to Figure 4 of (Oh et al., 2018).]

## 5 Experiments

In the experiments, we test the exploratory effect of divergence augmentation compared with entropy augmentation, and the empirical difference between multi-step and 1-step divergence. For the experiments, we mainly consider the DAPO algorithm (Algorithm 1) associated with the conditional KL divergence (see RC and DC in (Neu et al., 2017)). For $F(\mu) = \sum_{s,a} \mu(s,a) \log\frac{\mu(s,a)}{\sum_b \mu(s,b)}$, the gradient difference in (10) becomes
$$\nabla F(\mu^\pi) - \nabla F(\mu_t) = \log\frac{\pi(a|s)}{\pi_t(a|s)}.$$
The multi-step divergence augmentation term as in (12) is then calculated as
$$\hat{D}^{KL}_{s,a} = \log\frac{\pi(a_i|s_i)}{\pi_t(a_i|s_i)} + \sum_{j=1}^{n} \gamma^j \Big(\prod_{k=0}^{j-1} c_{i+k}\Big)\, \rho_{i+j}\, \log\frac{\pi(a_{i+j}|s_{i+j})}{\pi_t(a_{i+j}|s_{i+j})}.$$
As a baseline, we also implement the PPO algorithm with a V-trace (Espeholt et al., 2018) estimate of the advantage function $A^\pi$ for the target policy (in the original PPO (Schulman et al., 2017b), $\hat{A}$ is used as the advantage estimate of the behavior policy, $A^{\pi_t}$). Specifically, we consider the policy loss
$$L^{PPO}_\pi(\theta) = -E_{\pi_t, d_t} \min\Big(\frac{\pi_\theta}{\pi_t} A_{s,a},\ \mathrm{clip}\big(\tfrac{\pi_\theta}{\pi_t}, 1-\epsilon, 1+\epsilon\big) A_{s,a}\Big), \quad (19)$$
where we choose $\epsilon = 0.2$ and the advantage is estimated from $R_{s,a}$. We also test the DAPO algorithm with PPO, with the advantage estimate $A_{s,a}$ in (19) replaced by $\hat{A}_{s,a} - \frac{1}{\eta}\hat{D}_{s,a}$ defined in (11) and (12). We refer to this algorithm as PPO+DA in the following sections.
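A minimal sketch (illustrative only, not the experimental code) of the PPO+DA surrogate just described: the clipped objective of Eq. (19) evaluated with the divergence-augmented advantage $\hat{A}_{s,a} - \frac{1}{\eta}\hat{D}_{s,a}$; dropping the divergence term recovers the plain PPO baseline.

```python
import numpy as np

def ppo_da_surrogate(logp, logp_behavior, A_hat, D_hat, inv_eta=0.1, eps=0.2):
    """Clipped surrogate of Eq. (19) with the divergence-augmented advantage."""
    ratio = np.exp(logp - logp_behavior)                 # pi_theta / pi_t
    adv = A_hat - inv_eta * D_hat                        # A_hat - (1/eta) * D_hat
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return np.mean(np.minimum(ratio * adv, clipped))     # maximize; negate for a loss
```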
### 5.1 Algorithm Settings

The algorithm is implemented with TensorFlow (Abadi et al., 2016). For efficient training with deep neural networks, we use the Adam (Kingma and Ba, 2014) method for optimization. The learning rate is linearly annealed from 1e-3 to 0. The parameters are updated according to a mixture of policy loss and value loss, with the loss scaling coefficient ($b$ in Formula (15)) set to 0.5. In calculating the multi-step $\lambda$-returns $R_{s,a}$ and the divergence $\hat{D}_{s,a}$, we use fixed $\lambda = 0.9$ and $\gamma = 0.99$. The batch size is set to 1024, with the roll-out length set to 32, resulting in 1024/32 = 32 roll-outs in a batch. The policy $\pi_t$ and value $V_t$ are updated every 100 iterations ($M = 100$ in Algorithm 1). With our implementation, the training speed is about 25k samples per second, and the data generating speed is about 220 samples per second for each actor, resulting in about 3500 samples per second for a total of 16 actors. Note that the PPO results may not be directly comparable with other works (Schulman et al., 2017b; Espeholt et al., 2018; Xu et al., 2018), mainly due to the different number of actors used. Unless otherwise noted, each experiment is allowed to run 16000 seconds (about 4.5 hours), corresponding to a total of 60M samples generated and 400M samples (with replacement) trained. Details of the experimental settings can be found in Appendix A.

[Figure 2: Performance comparison on selected Atari environments, including Double Dunk and Montezuma Revenge, with score on the y-axis and training time (hours) on the x-axis. The performance of PPO, PPO+DA, PPO+DA (1-step), and PPO+Entropy is plotted in different colors. The score for each game is plotted against running time, as the algorithm is parallelized asynchronously in a distributed environment. For each line in the plots, we run the experiment 5 times with the same parameters and environment settings. The median scores are plotted as solid lines, while the regions between the 25% and 75% quantiles are shaded in the respective colors.]

### 5.2 Empirical Results

We test the algorithm on 58 Atari environments and calculate its relative performance with respect to PPO (Schulman et al., 2017b). The empirical performance is plotted in Figure 1. We run PPO and PPO+DA with the same environmental settings and computational resources. The relative performance is calculated as $\frac{\text{proposed} - \text{baseline}}{\max(\text{human},\, \text{baseline}) - \text{random}}$ (Wang et al., 2016b). We also categorize the game environments into easy exploration games and hard exploration games (Oh et al., 2018). We see that with the KL-divergence-augmented return, the algorithm PPO+DA performs better than the baseline method, especially on games that may have local minima and require deeper exploration. We plot the learning curves of PPO+DA (in blue) compared with PPO (in black) and other baseline methods on 6 typical environments in Figure 2. Detailed learning curves for PPO and PPO+DA on the complete set of 58 games can be found in Figure 3 in the Appendix.

#### 5.2.1 Divergence Augmentation vs. Entropy Augmentation

We test the effect of divergence augmentation in contrast to entropy augmentation (plotted in orange in Figure 2). Entropy augmentation can prevent premature convergence and encourage exploration, as well as stabilize the policy during optimization. However, the additional entropy may hinder convergence to the optimal action, as it alters the original learning objective. For the entropy-augmented baseline, we set $f(s, a) = \log\pi(a|s)$ in Formula (12), and experiment with $\frac{1}{\eta} = 0.5, 0.1, 0.01, 0.001$, among which we found that $\frac{1}{\eta} = 0.1$ performs best.
From the empirical results, we see that divergence-augmented PPO works better, while the entropy-augmented version may be too conservative about policy changes, resulting in inferior performance on these games.

#### 5.2.2 Multi-step Divergence vs. 1-step Divergence

In Figure 2, we also compare the PPO+DA algorithm with its 1-step divergence-augmented counterpart (plotted in green). We rerun the experiments with the parameter $\bar{c}_D$ in Formula (12) set to 0, which means we only aggregate the divergence on the current state and action, $f(s_i, a_i)$, without summing up the future discounted divergence $f(s_{i+j}, a_{i+j})$. This method also relates to the conditional divergence defined in Formula (3), and shares more similarities with previous works on regularized and constrained policy optimization (Schulman et al., 2015; Achiam et al., 2017). We see that with multi-step divergence augmentation, the algorithm can achieve higher scores, especially on games requiring deeper exploration like Enduro and Qbert. We hypothesize that the accumulated divergence on future states encourages the policy to explore more efficiently.

## 6 Conclusion

In this paper, we proposed a divergence-augmented policy optimization method to improve the stability of policy gradient methods when it is necessary to reuse off-policy data. We showed that the proposed divergence augmentation technique can be viewed as imposing a Bregman divergence constraint on the state-action space, which is related to online mirror descent methods. Experiments on Atari games showed that in the data-scarce scenario, the proposed method works better than other state-of-the-art algorithms such as PPO. Our results showed that the technique of divergence augmentation is effective when data generated by previous policies are reused in policy optimization.

## References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P. A., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zhang, X. (2016). TensorFlow: A system for large-scale machine learning. arXiv preprint arXiv:1605.08695.

Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. (2018). Maximum a posteriori policy optimisation. In International Conference on Learning Representations.

Achiam, J., Held, D., Tamar, A., and Abbeel, P. (2017). Constrained policy optimization. arXiv preprint arXiv:1705.10528.

Asmussen, S. (2003). Markov chains. Applied Probability and Queues, pages 3-38.

Audibert, J.-Y. and Bubeck, S. (2009). Minimax policies for adversarial and stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), pages 217-226.

Audibert, J.-Y., Bubeck, S., and Lugosi, G. (2011). Minimax policies for combinatorial prediction games. In Kakade, S. M. and von Luxburg, U., editors, Proceedings of the 24th Annual Conference on Learning Theory (COLT), volume 19 of Proceedings of Machine Learning Research, pages 107-132, Budapest, Hungary. PMLR.

Beck, A. and Teboulle, M. (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279.

Belousov, B. and Peters, J. (2017). f-Divergence constrained policy improvement. arXiv e-prints.

Bregman, L. M. (1967). The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200-217.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym.

Bubeck, S. (2015). Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231-357.

Bubeck, S. and Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122.

Cesa-Bianchi, N., Gentile, C., Lugosi, G., and Neu, G. (2017). Boltzmann exploration done right. In Advances in Neural Information Processing Systems, pages 6284-6293.

Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.

Chow, Y., Nachum, O., and Ghavamzadeh, M. (2018). Path consistency learning in Tsallis entropy regularized MDPs. In International Conference on Machine Learning, pages 978-987.

Csiszar, I. and Körner, J. (2011). Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press.

Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., Legg, S., and Kavukcuoglu, K. (2018). IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561.

Fox, R., Pakman, A., and Tishby, N. (2015). Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562.

Haarnoja, T., Tang, H., Abbeel, P., and Levine, S. (2017). Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165.

Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290.

Hasselt, H. v., Guez, A., and Silver, D. (2016). Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2094-2100. AAAI Press.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.

Horgan, D., Quan, J., Budden, D., Barth-Maron, G., Hessel, M., Van Hasselt, H., and Silver, D. (2018). Distributed prioritized experience replay. arXiv preprint arXiv:1803.00933.

Kakade, S. and Langford, J. (2002). Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning, pages 267-274.

Kakade, S. M. (2002). A natural policy gradient. In Advances in Neural Information Processing Systems, pages 1531-1538.

Kappen, H. J. (2005). Path integrals and symmetry breaking for optimal control theory. Journal of Statistical Mechanics: Theory and Experiment, 2005(11):P11011.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Lee, K., Choi, S., and Oh, S. (2018). Maximum causal Tsallis entropy imitation learning. In Advances in Neural Information Processing Systems, pages 4403-4413.

Lee, K., Kim, S., Lim, S., Choi, S., and Oh, S. (2019). Tsallis reinforcement learning: A unified framework for maximum entropy reinforcement learning.

Liu, Q., Li, L., Tang, Z., and Zhou, D. (2018). Breaking the curse of horizon: Infinite-horizon off-policy estimation. In Advances in Neural Information Processing Systems, pages 5361-5371.

McMahan, B. (2011). Follow-the-regularized-leader and mirror descent: Equivalence theorems and L1 regularization. In Gordon, G., Dunson, D., and Dudík, M., editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 525-533, Fort Lauderdale, FL, USA. PMLR.
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.

Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. (2016). Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pages 1054-1062.

Nachum, O., Norouzi, M., Xu, K., and Schuurmans, D. (2017). Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems, pages 2775-2785.

Nemirovsky, A. S. and Yudin, D. B. (1983). Problem Complexity and Method Efficiency in Optimization. Wiley.

Neu, G., Jonsson, A., and Gómez, V. (2017). A unified view of entropy-regularized Markov decision processes. arXiv preprint arXiv:1705.07798.

Oh, J., Guo, Y., Singh, S., and Lee, H. (2018). Self-imitation learning. In International Conference on Machine Learning, pages 3875-3884.

Peters, J., Mülling, K., and Altun, Y. (2010). Relative entropy policy search. In AAAI, pages 1607-1612.

Rockafellar, R. T. (1976). Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877-898.

Schulman, J., Chen, X., and Abbeel, P. (2017a). Equivalence between policy gradients and soft Q-learning. arXiv preprint arXiv:1704.06440.

Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. (2015). Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889-1897.

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017b). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489.

Sutton, R. S. and Barto, A. G. (2017). Introduction to Reinforcement Learning, 2nd edition, in progress. MIT Press.

Sutton, R. S., Mahmood, A. R., and White, M. (2016). An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 17(1):2603-2631.

Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057-1063.

Todorov, E. (2007). Linearly-solvable Markov decision problems. In Advances in Neural Information Processing Systems, pages 1369-1376.
Wang, Q., Xiong, J., Han, L., Sun, P., Liu, H., and Zhang, T. (2018). Exponentially weighted imitation learning for batched historical data. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31, pages 6291-6300. Curran Associates, Inc.

Wang, Z., Bapst, V., Heess, N., Mnih, V., Munos, R., Kavukcuoglu, K., and de Freitas, N. (2016a). Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224.

Wang, Z., Schaul, T., Hessel, M., Hasselt, H., Lanctot, M., and Freitas, N. (2016b). Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, pages 1995-2003.

Williams, R. J. and Peng, J. (1991). Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241-268.

Xu, Z., van Hasselt, H. P., and Silver, D. (2018). Meta-gradient reinforcement learning. In Advances in Neural Information Processing Systems, pages 2396-2407.

Ziebart, B. D., Maas, A. L., Bagnell, J. A., and Dey, A. K. (2008). Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433-1438. Chicago, IL, USA.

Zimin, A. and Neu, G. (2013). Online learning in episodic Markovian decision processes by relative entropy policy search. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 26. Curran Associates, Inc.