# Continuous Deep Q-Learning with Model-based Acceleration

Shixiang Gu (1,2,3) sg717@cam.ac.uk
Timothy Lillicrap (4) countzero@google.com
Ilya Sutskever (3) ilyasu@google.com
Sergey Levine (3) slevine@google.com

(1) University of Cambridge, (2) Max Planck Institute for Intelligent Systems, (3) Google Brain, (4) Google DeepMind

Abstract

Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. The NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.

1. Introduction

Model-free reinforcement learning (RL) has been successfully applied to a range of challenging problems (Kober & Peters, 2012; Deisenroth et al., 2013), and has recently been extended to handle large neural network policies and value functions (Mnih et al., 2015; Lillicrap et al., 2016; Wang et al., 2015; Heess et al., 2015; Hausknecht & Stone, 2015; Schulman et al., 2015). This makes it possible to train policies for complex tasks with minimal feature and policy engineering, using the raw state representation directly as input to the neural network. However, the sample complexity of model-free algorithms, particularly when using very high-dimensional function approximators, tends to be high (Schulman et al., 2015), which means that the benefit of reduced manual engineering and greater generality is not felt in real-world domains where experience must be collected on real physical systems, such as robots and autonomous vehicles. In such domains, the methods of choice have been efficient model-free algorithms that use more suitable, task-specific representations (Peters et al., 2010; Deisenroth et al., 2013), as well as model-based algorithms that learn a model of the system with supervised learning and optimize a policy under this model (Deisenroth & Rasmussen, 2011; Levine et al., 2016). Using task-specific representations dramatically improves efficiency, but limits the range of tasks that can be learned and requires greater domain knowledge. Using model-based RL also improves efficiency, but limits the policy to only be as good as the learned model. For many real-world tasks, it may be easier to represent a good policy than to learn a good model.
For example, a simple robotic grasping behavior might only require closing the fingers at the right moment, while the corresponding dynamics model requires learning the complexities of rigid and deformable bodies undergoing frictional contact. It is therefore desirable to bring the generality of model-free deep reinforcement learning into real-world domains by reducing its sample complexity. In this paper, we propose two complementary techniques for improving the efficiency of deep reinforcement learning in continuous control domains: we derive a variant of Q-learning that can be used in continuous domains, and we propose a method for combining this continuous Q-learning algorithm with learned models so as to accelerate learning while preserving the benefits of model-free RL.

Model-free reinforcement learning in domains with continuous actions is typically handled with policy search methods (Peters & Schaal, 2006; Peters et al., 2010). Integrating value function estimation into these techniques results in actor-critic algorithms (Hafner & Riedmiller, 2011; Lillicrap et al., 2016; Schulman et al., 2016), which combine the benefits of policy search and value function estimation, but at the cost of training two separate function approximators. Our proposed Q-learning algorithm for continuous domains, which we call normalized advantage functions (NAF), avoids the need for a second actor or policy function, resulting in a simpler algorithm. The simpler optimization objective and the choice of value function parameterization result in an algorithm that is substantially more sample-efficient when used with large neural network function approximators on a range of continuous control domains.

Beyond deriving an improved model-free deep reinforcement learning algorithm, we also seek to incorporate elements of model-based RL to accelerate learning, without giving up the strengths of model-free methods. One approach is for off-policy algorithms such as Q-learning to incorporate off-policy experience produced by a model-based planner. However, while this solution is a natural one, our empirical evaluation shows that it is ineffective at accelerating learning. As we discuss in our evaluation, this is due in part to the nature of value function estimation algorithms, which must experience both good and bad state transitions to accurately model the value function landscape. We propose an alternative approach to incorporating learned models into our continuous-action Q-learning algorithm based on imagination rollouts: on-policy samples generated under the learned model, analogous to the Dyna-Q method (Sutton, 1990). We show that this is extremely effective when the learned dynamics model perfectly matches the true one, but degrades dramatically with imperfect learned models. However, we demonstrate that iteratively fitting local linear models to the latest batch of on-policy or off-policy rollouts provides sufficient local accuracy to achieve substantial improvement using short imagination rollouts in the vicinity of the real-world samples.

Our paper provides three main contributions: first, we derive and evaluate a Q-function representation that allows for effective Q-learning in continuous domains. Second, we evaluate several naïve options for incorporating learned models into model-free Q-learning, and we show that they are minimally effective on our continuous control tasks.
Third, we propose to combine locally linear models with local on-policy imagination rollouts to accelerate model-free continuous Q-learning, and show that this produces a large improvement in sample complexity. We evaluate our method on a series of simulated robotic tasks and compare to prior methods.

2. Related Work

Deep reinforcement learning has received considerable attention in recent years due to its potential to automate the design of representations in RL. Deep reinforcement learning and related methods have been applied to learn policies to play Atari games (Mnih et al., 2015; Schaul et al., 2015) and to perform a wide variety of simulated and real-world robotic control tasks (Hafner & Riedmiller, 2011; Lillicrap et al., 2016; Levine & Koltun, 2013; de Bruin et al., 2015). While the majority of deep reinforcement learning methods in domains with discrete actions, such as Atari games, are based around value function estimation and Q-learning (Mnih et al., 2015), continuous domains typically require explicit representation of the policy, for example in the context of a policy gradient algorithm (Schulman et al., 2015). If we wish to incorporate the benefits of value function estimation into continuous deep reinforcement learning, we must typically use two networks: one to represent the policy, and one to represent the value function (Lillicrap et al., 2016; Schulman et al., 2016). In this paper, we instead describe how the simplicity and elegance of Q-learning can be ported into continuous domains, by learning a single network that outputs both the value function and policy. Our Q-function representation is related to dueling networks (Wang et al., 2015), though our approach applies to continuous action domains. Our empirical evaluation demonstrates that our continuous Q-learning algorithm achieves faster and more effective learning on a set of benchmark tasks compared to continuous actor-critic methods, and we believe that the simplicity of this approach will make it easier to adopt in practice. Our Q-learning method is also related to the work of Rawlik et al. (2013), but the form of our Q-function update is more standard.

As in standard RL, model-based deep reinforcement learning methods have generally been more efficient (Nguyen & Widrow, 1989; Schmidhuber, 1991; Li & Todorov, 2004; Watter et al., 2015; Wahlström et al., 2015; Levine & Koltun, 2013), while model-free algorithms tend to be more generally applicable but substantially slower (Koutník et al., 2013; Schulman et al., 2015; Lillicrap et al., 2016). Combining model-based and model-free learning has been explored in several ways in the literature. The method closest to our imagination rollouts approach is Dyna-Q (Sutton, 1990), which uses simulated experience in a learned model to supplement real-world on-policy rollouts. As we show in our evaluation, using Dyna-Q style methods to accelerate model-free RL is very effective when the learned model perfectly matches the true model, but degrades rapidly as the model becomes worse. Approximate Model-Assisted Neural Fitted Q-Iteration (AMA-NFQ) (Lampe & Riedmiller, 2014) studies a similar approach for a batch variant of Q-learning and achieves a significant reduction in sample complexity on a simple benchmark task. However, AMA-NFQ relies on fitting neural networks for the dynamics, which we empirically find is difficult for a broader range of tasks.
We demonstrate that using iteratively refitted local linear models achieves substantially better results with imagination rollouts than more complex neural network models. We hypothesize that this is likely because the more expressive models themselves require substantially more data, and that otherwise efficient algorithms like Dyna-Q are vulnerable to poor model approximations.

3. Background

In reinforcement learning, the goal is to learn a policy to control a system with states $x \in \mathcal{X}$ and actions $u \in \mathcal{U}$ in an environment $E$, so as to maximize the expected sum of returns according to a reward function $r(x, u)$. The dynamical system is defined by an initial state distribution $p(x_1)$ and a dynamics distribution $p(x_{t+1}|x_t, u_t)$. At each time step $t \in [1, T]$, the agent chooses an action $u_t$ according to its current policy $\pi(u_t|x_t)$, and observes a reward $r(x_t, u_t)$. The agent then experiences a transition to a new state sampled from the dynamics distribution, and we can express the resulting state visitation frequency of the policy $\pi$ as $\rho^\pi(x_t)$. Defining $R_t = \sum_{i=t}^{T} \gamma^{(i-t)} r(x_i, u_i)$, the goal is to maximize the expected sum of returns, given by $R = \mathbb{E}_{r_{i \ge 1}, x_{i \ge 1} \sim E, u_{i \ge 1} \sim \pi}[R_1]$, where $\gamma$ is a discount factor that prioritizes earlier rewards over later ones. With $\gamma < 1$, we can also set $T = \infty$, though we use a finite horizon for all of the tasks in our experiments. The expected return $R$ can be optimized using a variety of model-free and model-based algorithms. In this section, we review several of these methods that we build on in our work.

Model-Free Reinforcement Learning. When the system dynamics $p(x_{t+1}|x_t, u_t)$ are not known, as is often the case with physical systems such as robots, policy gradient methods (Peters & Schaal, 2006) and value function or Q-function learning with function approximation (Sutton et al., 1999) are often preferred. Policy gradient methods provide a simple, direct approach to RL, which can succeed on high-dimensional problems, but potentially requires a large number of samples (Schulman et al., 2015; 2016). Off-policy algorithms that use value or Q-function approximation can in principle achieve better data efficiency (Lillicrap et al., 2016). However, adapting such methods to continuous tasks typically requires optimizing two function approximators on different objectives. We instead build on standard Q-learning, which has a single objective. We summarize Q-learning in this section. The Q-function $Q^\pi(x_t, u_t)$ corresponding to a policy $\pi$ is defined as the expected return from $x_t$ after taking action $u_t$ and following the policy $\pi$ thereafter:

$$Q^\pi(x_t, u_t) = \mathbb{E}_{r_{i \ge t}, x_{i > t} \sim E, u_{i > t} \sim \pi}[R_t \mid x_t, u_t] \quad (1)$$

Q-learning learns a greedy deterministic policy $\mu(x_t) = \arg\max_u Q(x_t, u)$, which corresponds to $\pi(u_t|x_t) = \delta(u_t = \mu(x_t))$. Let $\theta^Q$ parametrize the action-value function and $\beta$ be an arbitrary exploration policy; the learning objective is to minimize the Bellman error, where we fix the target $y_t$:

$$L(\theta^Q) = \mathbb{E}_{x_t \sim \rho^\beta, u_t \sim \beta, r_t \sim E}[(Q(x_t, u_t|\theta^Q) - y_t)^2]$$
$$y_t = r(x_t, u_t) + \gamma Q(x_{t+1}, \mu(x_{t+1})) \quad (2)$$

For continuous action problems, Q-learning becomes difficult, because it requires maximizing a complex, nonlinear function at each update. For this reason, continuous domains are often tackled using actor-critic methods (Konda & Tsitsiklis, 1999; Hafner & Riedmiller, 2011; Silver et al., 2014; Lillicrap et al., 2016), where a separate parameterized actor policy $\pi$ is learned in addition to the Q-function or value function critic, such as the Deep Deterministic Policy Gradient (DDPG) algorithm (Lillicrap et al., 2016).
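To make the Bellman error objective in Equation (2) concrete, the following is a minimal NumPy sketch of computing the fixed target $y_t$ and the squared Bellman error for a minibatch of transitions. The callables and array shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bellman_loss(q_fn, q_target_fn, mu_fn, batch, gamma=0.99):
    """Squared Bellman error for a minibatch, with the target y_t held fixed.

    Illustrative assumptions (not the paper's code):
      q_fn(x, u)        -> Q(x, u | theta_Q) for the current network, shape (N,)
      q_target_fn(x, u) -> Q(x, u | theta_Q') for a frozen target network, shape (N,)
      mu_fn(x)          -> greedy action argmax_u Q(x, u), shape (N, dim_u)
      batch             -> dict with arrays "x", "u", "r", "x_next"
    """
    x, u, r, x_next = batch["x"], batch["u"], batch["r"], batch["x_next"]

    # y_t = r(x_t, u_t) + gamma * Q'(x_{t+1}, mu(x_{t+1}))   (Equation 2, target fixed)
    y = r + gamma * q_target_fn(x_next, mu_fn(x_next))

    # L(theta_Q) = E[(Q(x_t, u_t | theta_Q) - y_t)^2]
    return np.mean((q_fn(x, u) - y) ** 2)
```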
In order to describe our method in the following sections, it will be useful to also define the value function $V^\pi(x_t)$ and advantage function $A^\pi(x_t, u_t)$ of a given policy $\pi$:

$$V^\pi(x_t) = \mathbb{E}_{r_{i \ge t}, x_{i > t} \sim E, u_{i \ge t} \sim \pi}[R_t \mid x_t]$$
$$A^\pi(x_t, u_t) = Q^\pi(x_t, u_t) - V^\pi(x_t). \quad (3)$$

Model-Based Reinforcement Learning. If we know the dynamics $p(x_{t+1}|x_t, u_t)$, or if we can approximate them with some learned model $\hat{p}(x_{t+1}|x_t, u_t)$, we can use model-based RL and optimal control. While a wide range of model-based RL and control methods have been proposed in the literature (Deisenroth et al., 2013; Kober & Peters, 2012), two are particularly relevant for this work: iterative LQG (iLQG) (Li & Todorov, 2004) and Dyna-Q (Sutton, 1990). The iLQG algorithm optimizes trajectories by iteratively constructing locally optimal linear feedback controllers under a local linearization of the dynamics $\hat{p}(x_{t+1}|x_t, u_t) = \mathcal{N}(f_{xt} x_t + f_{ut} u_t, F_t)$ and a quadratic expansion of the rewards $r(x_t, u_t)$ (Tassa et al., 2012). Under linear dynamics and quadratic rewards, the action-value function $Q(x_t, u_t)$ and value function $V(x_t)$ are locally quadratic and can be computed by dynamic programming. The optimal policy can be derived analytically from the quadratic $Q(x_t, u_t)$ and $V(x_t)$ functions, and corresponds to a linear feedback controller $g(x_t) = \hat{u}_t + k_t + K_t(x_t - \hat{x}_t)$, where $k_t$ is an open-loop term, $K_t$ is the closed-loop feedback matrix, and $\hat{x}_t$ and $\hat{u}_t$ are the states and actions of the nominal trajectory, which is the average trajectory of the controller. Employing the maximum entropy objective (Levine & Koltun, 2013), we can also construct a linear-Gaussian controller, where $c$ is a scalar to adjust for arbitrary scaling of the reward magnitudes:

$$\pi_t^{iLQG}(u_t|x_t) = \mathcal{N}(\hat{u}_t + k_t + K_t(x_t - \hat{x}_t),\; c\, Q_{u,u_t}^{-1}) \quad (4)$$

When the dynamics are not known, a particularly effective way to use iLQG is to combine it with learned time-varying linear models $\hat{p}(x_{t+1}|x_t, u_t)$. In this variant of the algorithm, trajectories are sampled from the controller in Equation (4) and used to fit time-varying linear dynamics with linear regression. These dynamics are then used with iLQG to obtain a new controller, typically using a KL-divergence constraint to enforce a trust region, so that the new controller doesn't deviate too much from the region in which the samples were generated (Levine & Abbeel, 2014).

Besides enabling iLQG and other planning-based algorithms, a learned model of the dynamics can allow a model-free algorithm to generate synthetic experience by performing rollouts in the learned model. A particularly relevant method of this type is Dyna-Q (Sutton, 1990), which performs real-world rollouts using the policy $\pi$, and then generates synthetic rollouts using a model learned from these samples. The synthetic rollouts originate at states visited by the real-world rollouts, and serve as supplementary data for a variety of possible reinforcement learning algorithms. However, most prior Dyna-Q methods have focused on relatively small, discrete domains. In Section 5, we describe how our method can be extended into a variant of Dyna-Q to achieve substantially faster learning on a range of continuous control tasks with complex neural network policies, and in Section 6, we empirically analyze the sensitivity of this method to imperfect learned dynamics models.
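As an illustration of the time-varying linear-Gaussian controller in Equation (4), the following NumPy sketch rolls out such a controller in a generic environment. The `env` interface and the precomputed `x_hat`, `u_hat`, `k`, `K`, and `Q_uu` arrays (which would come from an iLQG backward pass) are assumptions made for the example, not part of the paper.

```python
import numpy as np

def rollout_linear_gaussian_controller(env, x_hat, u_hat, k, K, Q_uu, c=1.0, rng=None):
    """Sample one trajectory from pi_t(u|x) = N(u_hat_t + k_t + K_t (x - x_hat_t), c * Q_uu_t^{-1}).

    x_hat, u_hat : nominal states (T, dim_x) and actions (T, dim_u)
    k, K         : open-loop terms (T, dim_u) and feedback gains (T, dim_u, dim_x)
    Q_uu         : action Hessians of the local quadratic Q-function, (T, dim_u, dim_u)
    env          : assumed to expose reset() -> x and step(u) -> (x_next, reward)
    """
    rng = np.random.default_rng() if rng is None else rng
    T = u_hat.shape[0]
    x = env.reset()
    trajectory = []
    for t in range(T):
        mean = u_hat[t] + k[t] + K[t] @ (x - x_hat[t])
        cov = c * np.linalg.inv(Q_uu[t])      # maximum-entropy action noise from Equation (4)
        u = rng.multivariate_normal(mean, cov)
        x_next, r = env.step(u)
        trajectory.append((x, u, r, x_next))
        x = x_next
    return trajectory
```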
4. Continuous Q-Learning with Normalized Advantage Functions

We first propose a simple method to enable Q-learning in continuous action spaces with deep neural networks, which we refer to as normalized advantage functions (NAF). The idea behind normalized advantage functions is to represent the Q-function $Q(x_t, u_t)$ in Q-learning in such a way that its maximum, $\arg\max_u Q(x_t, u)$, can be determined easily and analytically during the Q-learning update. While a number of representations are possible that allow for analytic maximization, the one we use in our implementation is based on a neural network that separately outputs a value function term $V(x)$ and an advantage term $A(x, u)$, which is parameterized as a quadratic function of nonlinear features of the state:

$$Q(x, u|\theta^Q) = A(x, u|\theta^A) + V(x|\theta^V)$$
$$A(x, u|\theta^A) = -\tfrac{1}{2}(u - \mu(x|\theta^\mu))^T P(x|\theta^P)(u - \mu(x|\theta^\mu))$$

$P(x|\theta^P)$ is a state-dependent, positive-definite square matrix, which is parametrized by $P(x|\theta^P) = L(x|\theta^P) L(x|\theta^P)^T$, where $L(x|\theta^P)$ is a lower-triangular matrix whose entries come from a linear output layer of a neural network, with the diagonal terms exponentiated. While this representation is more restrictive than a general neural network function, since the Q-function is quadratic in $u$, the action that maximizes the Q-function is always given by $\mu(x|\theta^\mu)$. We use this representation with a deep Q-learning algorithm analogous to Mnih et al. (2015), using target networks and a replay buffer as described by Lillicrap et al. (2016). NAF, given by Algorithm 1, is considerably simpler than DDPG.

Algorithm 1 Continuous Q-Learning with NAF
  Randomly initialize normalized Q network $Q(x, u|\theta^Q)$.
  Initialize target network $Q'$ with weights $\theta^{Q'} \leftarrow \theta^Q$.
  Initialize replay buffer $R \leftarrow \emptyset$.
  for episode = 1, M do
    Initialize a random process $\mathcal{N}$ for action exploration
    Receive initial observation state $x_1 \sim p(x_1)$
    for t = 1, T do
      Select action $u_t = \mu(x_t|\theta^\mu) + \mathcal{N}_t$
      Execute $u_t$ and observe $r_t$ and $x_{t+1}$
      Store transition $(x_t, u_t, r_t, x_{t+1})$ in $R$
      for iteration = 1, I do
        Sample a random minibatch of $m$ transitions from $R$
        Set $y_i = r_i + \gamma V'(x_{i+1}|\theta^{Q'})$
        Update $\theta^Q$ by minimizing the loss $L = \frac{1}{N} \sum_i (y_i - Q(x_i, u_i|\theta^Q))^2$
        Update the target network: $\theta^{Q'} \leftarrow \tau \theta^Q + (1 - \tau)\theta^{Q'}$
      end for
    end for
  end for

Decomposing $Q$ into an advantage term $A$ and a state-value term $V$ was suggested by Baird III (1993) and Harmon & Baird III (1996), and was recently explored by Wang et al. (2015) for discrete action problems. Normalized action-value functions have also been proposed by Rawlik et al. (2013) in the context of an alternative temporal difference learning algorithm. However, our method is the first to combine such representations with deep neural networks into an algorithm that can be used to learn policies for a range of challenging continuous control tasks. In general, $A$ does not need to be quadratic, and exploring other parametric forms such as multimodal distributions is an interesting avenue for future work. The appendix provides details on an adaptive exploration rule with experimental results.
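To make the NAF parameterization concrete, here is a minimal NumPy sketch of evaluating $Q(x, u)$ for one state given the raw network outputs; the function and variable names are illustrative assumptions, and in practice the three heads would share a neural network trunk as described above.

```python
import numpy as np

def naf_q_value(v, mu, l_entries, u):
    """Evaluate Q(x, u) = V(x) - 0.5 * (u - mu(x))^T P(x) (u - mu(x)) for one state.

    v         : scalar value head V(x | theta_V)
    mu        : greedy action head mu(x | theta_mu), shape (dim_u,)
    l_entries : flat linear output of size dim_u * (dim_u + 1) / 2 that fills the
                lower-triangular matrix L(x | theta_P); diagonal terms are exponentiated
    u         : action at which to evaluate Q, shape (dim_u,)
    """
    dim_u = mu.shape[0]
    L = np.zeros((dim_u, dim_u))
    rows, cols = np.tril_indices(dim_u)
    L[rows, cols] = l_entries
    L[np.diag_indices(dim_u)] = np.exp(np.diag(L))   # keep the diagonal positive
    P = L @ L.T                                      # state-dependent positive-definite matrix
    diff = u - mu
    advantage = -0.5 * diff @ P @ diff               # quadratic in u, maximized at u = mu
    return v + advantage                             # Q = A + V
```

Because the advantage term is a negative-definite quadratic in $u$, the greedy action $\arg\max_u Q(x, u)$ is simply the $\mu$ head, so no inner optimization over actions is needed during the Q-learning update.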
5. Accelerating Learning with Imagination Rollouts

While NAF provides some advantages over actor-critic model-free RL methods in continuous domains, we can improve its data efficiency substantially under some additional assumptions by exploiting learned models. We will show that incorporating a particular type of learned model into Q-learning with NAFs significantly improves sample efficiency, while still allowing the final policy to be fine-tuned with model-free learning to achieve good performance without the limitations of imperfect models.

5.1. Model-Guided Exploration

One natural approach to incorporating a learned model into an off-policy algorithm such as Q-learning is to use the learned model to generate good exploratory behaviors using planning or trajectory optimization. To evaluate this idea, we utilize the iLQG algorithm to generate good trajectories under the model, and then mix these trajectories together with on-policy experience by appending them to the replay buffer. Interestingly, we show in our evaluation that, even when planning under the true model, the improvement obtained from this approach is often quite small, and varies significantly across domains and choices of exploration noise. The intuition behind this result is that off-policy iLQG exploration is too different from the learned policy, and Q-learning must consider alternatives in order to ascertain the optimality of a given action. That is, it's not enough to simply show the algorithm good actions; it must also experience bad actions to understand which actions are better and which are worse.

5.2. Imagination Rollouts

As discussed in the previous section, incorporating off-policy exploration from good, narrow distributions, such as those induced by iLQG, often does not result in significant improvement for Q-learning. These results suggest that Q-learning, which learns a policy based on minimizing temporal differences, inherently requires noisy on-policy actions to succeed. In real-world domains such as robots and autonomous vehicles, this can be undesirable for two reasons: first, it suggests that large amounts of on-policy experience are required in addition to good off-policy samples, and second, it implies that the policy must be allowed to make its own mistakes during training, which might involve taking undesirable or dangerous actions that can damage real-world hardware.

One way to avoid these problems while still allowing for a large amount of on-policy exploration is to generate synthetic on-policy trajectories under a learned model. Adding these synthetic samples, which we refer to as imagination rollouts, to the replay buffer effectively augments the amount of experience available for Q-learning. The particular approach we use is to perform rollouts in the real world using a mixture of planned iLQG trajectories and on-policy trajectories, with various mixing coefficients evaluated in our experiments, and then generate additional synthetic on-policy rollouts using the learned model from each state visited along the real-world rollouts. We show that using iteratively refitted linear models allows us to extend the approach to deep reinforcement learning on a range of continuous control domains. In some scenarios, we can even generate all or most of the real rollouts using off-policy iLQG controllers, which is desirable in safety-critical domains where poorly trained policies might take dangerous actions. The algorithm is given as Algorithm 2, and is an extension of Algorithm 1 that incorporates model-based RL.

Algorithm 2 Imagination Rollouts with Fitted Dynamics and Optional iLQG Exploration
  Randomly initialize normalized Q network $Q(x, u|\theta^Q)$.
  Initialize target network $Q'$ with weights $\theta^{Q'} \leftarrow \theta^Q$.
  Initialize replay buffer $R \leftarrow \emptyset$ and fictional buffer $R_f \leftarrow \emptyset$.
  Initialize additional buffers $B \leftarrow \emptyset$, $B_{old} \leftarrow \emptyset$ with size $nT$.
  Initialize fitted dynamics model $\mathcal{M} \leftarrow \emptyset$.
  for episode = 1, M do
    Initialize a random process $\mathcal{N}$ for action exploration
    Receive initial observation state $x_1$
    Select $\mu'(x, t)$ from $\{\mu(x|\theta^\mu), \pi_t^{iLQG}(u_t|x_t)\}$ with probabilities $\{p, 1-p\}$
    for t = 1, T do
      Select action $u_t = \mu'(x_t, t) + \mathcal{N}_t$
      Execute $u_t$ and observe $r_t$ and $x_{t+1}$
      Store transition $(x_t, u_t, r_t, x_{t+1}, t)$ in $R$ and $B$
      if mod(episode $\cdot$ T + t, m) = 0 and $\mathcal{M} \ne \emptyset$ then
        Sample $m$ transitions $(x_i, u_i, r_i, x_{i+1}, i)$ from $B_{old}$
        Use $\mathcal{M}$ to simulate $l$ steps from each sample
        Store all fictional transitions in $R_f$
      end if
      Sample a random minibatch of $m$ transitions $I \cdot l$ times from $R_f$ and $I$ times from $R$, and update $\theta^Q$, $\theta^{Q'}$ as in Algorithm 1 per minibatch.
    end for
    if $B$ is full then
      $\mathcal{M} \leftarrow$ FitLocalLinearDynamics($B$) (see Section 5.3)
      $\pi^{iLQG} \leftarrow$ iLQGOneStep($B$, $\mathcal{M}$) (see appendix)
      $B_{old} \leftarrow B$, $B \leftarrow \emptyset$
    end if
  end for

Imagination rollouts can suffer from severe bias when the learned model is inaccurate. For example, we found it very difficult to train nonlinear neural network models for the dynamics that would actually improve the efficiency of Q-learning when used for imagination rollouts. As discussed in the following section, we found that using iteratively refitted time-varying linear dynamics produced substantially better results. In either case, we would still like to preserve the generality and optimality of model-free RL while deriving the benefits of model-based learning. To that end, we observe that most of the benefit of model-based learning is derived in the early stages of the learning process, when the policy induced by the neural network Q-function is poor. As the Q-function becomes more accurate, on-policy behavior tends to outperform model-based controllers. We therefore propose to switch off imagination rollouts after a given number of iterations. (In future work, it would be interesting to select this iteration adaptively based on the expected relative performance of the Q-function policy and model-based planning.) In this framework, the imagination rollouts can be thought of as an inexpensive way to pretrain the Q-function, such that fine-tuning using real-world experience can quickly converge to an optimal solution.

5.3. Fitting the Dynamics Model

In order to obtain good imagination rollouts and improve the efficiency of Q-learning, we needed to use an effective and data-efficient model learning algorithm. While prior methods propose a variety of model classes, including neural networks (Heess et al., 2015), Gaussian processes (Deisenroth & Rasmussen, 2011), and locally-weighted regression (Atkeson et al., 1997), we found that we could obtain good results by using iteratively refitted time-varying linear models, as proposed by Levine & Abbeel (2014). In this approach, instead of learning a good global model for all states and actions, we aim only to obtain a good local model around the latest set of samples. This approach requires a few additional assumptions: namely, it requires the initial state to be either deterministic or low-variance Gaussian, and it requires the states and actions to all be continuous. To handle domains with more varied initial states, we can use a mixture of Gaussian initial states with separate time-varying linear models for each one. The model itself is given by $p_t(x_{t+1}|x_t, u_t) = \mathcal{N}(F_t[x_t; u_t] + f_t, N_t)$. Every $n$ episodes, we refit the parameters $F_t$, $f_t$, and $N_t$ by fitting a Gaussian distribution at each time step to the vectors $[x_t^i; u_t^i; x_{t+1}^i]$, where $i$ indicates the sample index, and conditioning this Gaussian on $[x_t; u_t]$ to obtain the parameters of the linear-Gaussian dynamics at that step.
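The following NumPy sketch illustrates this refitting step for a single time step, fitting a joint Gaussian to stacked $[x_t; u_t; x_{t+1}]$ samples and conditioning on $[x_t; u_t]$ to recover $F_t$, $f_t$, $N_t$, together with how a fitted step could be used to simulate one imagined transition. The function names and the small regularization term are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fit_linear_gaussian_step(x_t, u_t, x_next, reg=1e-6):
    """Fit p_t(x_{t+1} | x_t, u_t) = N(F_t [x_t; u_t] + f_t, N_t) for one time step.

    x_t, u_t, x_next : per-sample arrays of shape (num_samples, dim_x), (num_samples, dim_u),
                       and (num_samples, dim_x) collected at time step t.
    Fits a joint Gaussian over [x_t; u_t; x_{t+1}] and conditions it on [x_t; u_t].
    """
    xu = np.concatenate([x_t, u_t], axis=1)        # (N, dim_x + dim_u)
    data = np.concatenate([xu, x_next], axis=1)    # (N, dim_x + dim_u + dim_x)
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False) + reg * np.eye(data.shape[1])

    d = xu.shape[1]
    S_aa, S_ab = cov[:d, :d], cov[:d, d:]
    S_bb = cov[d:, d:]

    F_t = np.linalg.solve(S_aa, S_ab).T            # conditional mean slope
    f_t = mean[d:] - F_t @ mean[:d]                # conditional mean offset
    N_t = S_bb - F_t @ S_ab                        # conditional covariance
    return F_t, f_t, N_t

def imagined_step(F_t, f_t, N_t, x, u, rng):
    """Sample one synthetic transition x_{t+1} ~ N(F_t [x; u] + f_t, N_t)."""
    mean = F_t @ np.concatenate([x, u]) + f_t
    return rng.multivariate_normal(mean, N_t)
```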
We use $n = 5$ in our experiments. Although this approach introduces additional assumptions beyond the standard model-free RL setting, we show in our evaluation that it produces impressive gains in sample efficiency on tasks where it can be applied.

6. Experiments

We evaluated our approach on a set of simulated robotic tasks using the MuJoCo simulator (Todorov et al., 2012). The tasks were based on the benchmarks described by Lillicrap et al. (2016). Although we attempted to replicate the tasks in previous work as closely as possible, discrepancies in the simulator parameters and the contact model produced results that deviate slightly from those reported in prior work. In all experiments, the input to the policy consisted of the state of the system, defined in terms of joint angles and root link positions. Angles were often converted to a sine and cosine encoding. We assume the reward function is given and is not learned for model-based experience. For both our method and the prior DDPG (Lillicrap et al., 2016) algorithm in the comparisons, we used neural networks with two layers of 200 rectified linear units (ReLU) to produce each of the output parameters: the Q-function and policy in DDPG, and the value function $V$, the advantage matrix $L$, and the mean $\mu$ for NAF. Since Q-learning was done with a replay buffer, we applied the Q-learning update 5 times per step of experience to accelerate learning ($I = 5$). To ensure a fair comparison, DDPG also updates both the Q-function and policy parameters 5 times per step.

6.1. Normalized Advantage Functions

In this section, we compare NAF and DDPG on 10 representative domains from Lillicrap et al. (2016), with three additional domains: a four-legged 3D ant, a six-joint 2D swimmer, and a 2D peg (see the appendix for descriptions of the task domains). We found the most sensitive hyperparameters to be the presence or absence of batch normalization, the base learning rate for ADAM (Kingma & Ba, 2014), chosen from {1e-4, 1e-3, 1e-2}, and the exploration noise scale, chosen from {0.1, 0.3, 1.0}. We report the best performance for each domain. We were unable to achieve good results with the method of Rawlik et al. (2013) on our domains, likely due to the complexity of high-dimensional neural network function approximators.

Figure 1b, Figure 1c, and additional figures in the appendix show the performances on the three-joint reacher, peg insertion, and a gripper with mobile base. While the numerical gap on the reacher may be small, qualitatively there is also a very noticeable difference between NAF and DDPG. DDPG converges to a solution where the deterministic policy causes the tip to fluctuate continuously around the target, and does not reach it precisely. NAF, on the other hand, learns a smooth policy that makes the tip slow down and stabilize at the target. This difference is more noticeable in peg insertion and the moving gripper, as shown by the much faster convergence rate to the optimal solution. Precision is very important in many real-world robotic tasks, and these results suggest that NAF may be preferred in such domains. On locomotion tasks, the performance of the two methods is relatively similar. On the six-joint swimmer task and four-legged ant, NAF slightly outperforms DDPG in terms of convergence speed; however, DDPG is faster on cheetah and finds a better policy on walker2d.

Figure 1. (a) Task domains: top row from left (manipulation tasks: peg, gripper, mobile gripper), bottom row from left (locomotion tasks: cheetah, swimmer6, ant). (b) NAF and DDPG on the multi-target reacher. (c) NAF and DDPG on peg insertion. On the reacher, the DDPG policy continuously fluctuates the tip around the target, while NAF stabilizes well at the target.
The loss in performance of NAF can potentially be explained by the downside of its mode-seeking behavior: it is hard to explore other modes once the quadratic advantage function finds a good one. Choosing a parametric form that is more expressive than a quadratic could be used to address this limitation in future work.

The results on all of the domains are summarized in Table 1. Overall, NAF outperformed DDPG on the majority of tasks, particularly manipulation tasks that require precision and suffer less from the lack of multimodal Q-functions. This makes this approach particularly promising for efficient learning of real-world robotic tasks.

| Domain   | Random | DDPG   | episodes | NAF    | episodes |
|----------|--------|--------|----------|--------|----------|
| Cartpole | -2.1   | -0.601 | 420      | -0.604 | 190      |
| Reacher  | -2.3   | -0.509 | 1370     | -0.331 | 1260     |
| Peg      | -11    | -0.950 | 690      | -0.438 | 130      |
| Gripper  | -29    | 1.03   | 2420     | 1.81   | 1920     |
| GripperM | -90    | -20.2  | 1350     | -12.4  | 730      |
| Canada2d | -12    | -4.64  | 1040     | -4.21  | 900      |
| Cheetah  | -0.3   | 8.23   | 1590     | 7.91   | 2390     |
| Swimmer6 | -325   | -174   | 220      | -172   | 190      |
| Ant      | -4.8   | -2.54  | 2450     | -2.58  | 1350     |
| Walker2d | 0.3    | 2.96   | 850      | 1.85   | 1530     |

Table 1. Best test rewards of DDPG and NAF policies, and the number of episodes required to reach within 5% of the best value. The "Random" column denotes the score of a random agent.

6.2. Evaluating Best-Case Model-Based Improvement with True Models

In order to determine how best to incorporate model-based components to accelerate model-free Q-learning, we tested several approaches using the ground truth dynamics, to control for challenges due to model fitting. We evaluated both of the methods discussed in Section 5: the use of model-based planning to generate good off-policy rollouts in the real world, and the use of the model to generate on-policy synthetic rollouts. Figure 2a shows the effect of mixing off-policy iLQG experience and imagination rollouts on the three-joint reacher. It is noticeable that mixing in the good off-policy experience does not significantly improve data efficiency, while imagination rollouts always improve data efficiency or final performance significantly. In the context of Q-learning, this result is not entirely surprising: Q-learning must experience both good and bad actions in order to determine which actions are preferred, while the good model-based rollouts are so far removed from the policy in the early stages of learning that they provide little useful information. Figure 2a also evaluates two different variants of the imagination rollouts approach, where the rollouts in the real world are performed either using the learned policy or using model-based planning with iLQG. In the case of this task, the iLQG rollouts achieve slightly better results, since the on-policy imagination rollouts sampled around these off-policy states provide Q-learning with additional information about alternative actions not taken by the iLQG planner. In general, we did not find that off-policy rollouts were consistently better than on-policy rollouts across all tasks, but they did consistently produce good results.
Performing off-policy rollouts with iLQG may be desirable in real-world domains, where a partially learned policy might take undesirable or dangerous actions. Further details of these experiments are provided in the appendix.

Figure 2. Results for NAF with iLQG-guided exploration and imagination rollouts: (a) single-target reacher using true dynamics; (b) single-target reacher and (c) single-target gripper using fitted dynamics. ImR denotes using imagination rollouts with l = 10 steps on the reacher and l = 5 steps on the gripper. iLQG-x indicates mixing in a fraction x of iLQG episodes. Fitted dynamics uses time-varying linear models with sample size n = 5, except -NN, which fits a neural network to the global dynamics.

6.3. Guided Imagination Rollouts with Fitted Dynamics

In this section, we evaluated the performance of imagination rollouts with learned dynamics. As seen in Figure 2b, we found that fitting time-varying linear models following the imagination rollout algorithm is substantially better than fitting neural network dynamics models for the tasks we considered. There is a fundamental tension between efficient learning and expressive models like neural networks. We cannot hope to learn useful neural network models with a small number of samples for complex tasks, which makes it difficult to acquire a good model with fewer samples than are necessary to acquire a good policy. While the model is trained with supervised learning, which is typically more sample-efficient, it often needs to represent a more complex function (e.g., rigid body physics). However, having such expressive models becomes more important as we seek to improve model accuracy. Figure 2b presents results that compare fitted neural network models with the true dynamics when combined with imagination rollouts. These results indicate that the learned neural network models negate the benefits of imagination rollouts on our domains.

To evaluate imagination rollouts with fitted time-varying linear dynamics, we chose single-target variants of two of the manipulation tasks: the reacher and the gripper task. The results are shown in Figures 2b and 2c. We found that imagination rollouts of length 5 to 10 were sufficient for these tasks to achieve significant improvement over the fully model-free variant of NAF. Adding imagination rollouts in these domains provided a factor of 2 to 5 improvement in data efficiency. In order to retain the benefit of model-free learning and allow the policy to continue improving once it exceeds the quality possible under the learned model, we switch off the imagination rollouts after 130 episodes (20,000 steps) on the gripper domain. This produces a small transient drop in the performance of the policy, but the results quickly improve again. Switching off the imagination rollouts also ensures that Q-learning does not diverge after it reaches good values, as was often observed on the gripper. This suggests that imagination rollouts, in contrast to the off-policy exploration discussed in the previous section, are an effective method for bootstrapping model-free deep RL. It should be noted that, although time-varying linear models combined with imagination rollouts provide a substantial boost in sample efficiency, this improvement comes at some cost in generality, since effective fitting of time-varying linear models requires relatively small initial state distributions.
With more complex initial state distributions, we might cluster the trajectories and fit multiple models to account for different modes. Extending the benefits of time-varying linear models to less restrictive settings is a promising direction that builds on prior work (Levine et al., 2016; Fu et al., 2015). That said, our results show that imagination rollouts are a very promising approach to accelerating model-free learning when combined with the right kind of dynamics model.

7. Discussion

In this paper, we explored several methods for improving the sample efficiency of model-free deep reinforcement learning. We first propose a method for applying standard Q-learning methods to high-dimensional, continuous domains, using the normalized advantage function (NAF) representation. This allows us to simplify the more standard actor-critic style algorithms, while preserving the benefits of nonlinear value function approximation. We show that, in comparison to recently proposed deep actor-critic algorithms, our method tends to learn faster and acquires more accurate policies. We further explore how model-free RL can be accelerated by incorporating learned models, without sacrificing the optimality of the policy in the face of imperfect model learning. We show that, although Q-learning can incorporate off-policy experience, learning primarily from off-policy exploration (via model-based planning) only rarely improves the overall sample efficiency of the algorithm. We postulate that this is caused by the need to observe both successful and unsuccessful actions, in order to obtain an accurate estimate of the Q-function. We demonstrate that an alternative method based on synthetic on-policy rollouts achieves substantially improved sample complexity, but only when the model learning algorithm is chosen carefully. We demonstrate that training neural network models does not provide substantive improvement in our domains, but simple iteratively refitted time-varying linear models do provide substantial improvement on domains where they can be applied.

Acknowledgement

We thank Nicholas Heess for helpful discussion and Tom Erez, Yuval Tassa, Vincent Vanhoucke, and the Google Brain and DeepMind teams for their support.

References

Atkeson, Christopher G, Moore, Andrew W, and Schaal, Stefan. Locally weighted learning for control. In Lazy Learning, pp. 75-113. Springer, 1997.

Baird III, Leemon C. Advantage updating. Technical report, DTIC Document, 1993.

de Bruin, Tim, Kober, Jens, Tuyls, Karl, and Babuška, Robert. The importance of experience replay database composition in deep reinforcement learning. Deep Reinforcement Learning Workshop, NIPS, 2015.

Deisenroth, Marc and Rasmussen, Carl E. PILCO: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), pp. 465-472, 2011.

Deisenroth, Marc Peter, Neumann, Gerhard, Peters, Jan, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.

Fu, Justin, Levine, Sergey, and Abbeel, Pieter. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.

Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137-169, 2011.

Harmon, Mance E and Baird III, Leemon C. Multi-player residual advantage learning with general function approximation. Wright Laboratory, WL/AACF, Wright-Patterson Air Force Base, OH 45433-7308, 1996.
Hausknecht, Matthew and Stone, Peter. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143, 2015.

Heess, Nicolas, Wayne, Gregory, Silver, David, Lillicrap, Tim, Erez, Tom, and Tassa, Yuval. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (NIPS), pp. 2926-2934, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kober, Jens and Peters, Jan. Reinforcement learning in robotics: A survey. In Reinforcement Learning, pp. 579-610. Springer, 2012.

Konda, Vijay R and Tsitsiklis, John N. Actor-critic algorithms. In Advances in Neural Information Processing Systems (NIPS), volume 13, pp. 1008-1014, 1999.

Koutník, Jan, Cuccu, Giuseppe, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving large-scale neural networks for vision-based reinforcement learning. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 1061-1068. ACM, 2013.

Lampe, Thomas and Riedmiller, Martin. Approximate model-assisted neural fitted Q-iteration. In Neural Networks (IJCNN), 2014 International Joint Conference on, pp. 2698-2704. IEEE, 2014.

Levine, Sergey and Abbeel, Pieter. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), pp. 1071-1079, 2014.

Levine, Sergey and Koltun, Vladlen. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1-9, 2013.

Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.

Li, Weiwei and Todorov, Emanuel. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pp. 222-229, 2004.

Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Nguyen, D and Widrow, B. The truck backer-upper: An example of self-learning in neural networks, 1989.

Peters, Jan and Schaal, Stefan. Policy gradient methods for robotics. In International Conference on Intelligent Robots and Systems (IROS), pp. 2219-2225. IEEE, 2006.

Peters, Jan, Mülling, Katharina, and Altun, Yasemin. Relative entropy policy search. In AAAI, Atlanta, 2010.

Rawlik, Konrad, Toussaint, Marc, and Vijayakumar, Sethu. On stochastic optimal control and reinforcement learning by approximate inference. Robotics, pp. 353, 2013.

Schaul, Tom, Quan, John, Antonoglou, Ioannis, and Silver, David. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.

Schmidhuber, Jürgen. Reinforcement learning in Markovian and non-Markovian environments. pp. 500-506, 1991.

Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I., and Moritz, Philipp. Trust region policy optimization. In International Conference on Machine Learning (ICML), pp. 1889-1897, 2015.
Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. International Conference on Learning Representations (ICLR), 2016.

Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In International Conference on Machine Learning (ICML), 2014.

Sutton, Richard S. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In International Conference on Machine Learning (ICML), pp. 216-224, 1990.

Sutton, Richard S, McAllester, David A, Singh, Satinder P, Mansour, Yishay, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (NIPS), volume 99, pp. 1057-1063, 1999.

Tassa, Yuval, Erez, Tom, and Todorov, Emanuel. Synthesis and stabilization of complex behaviors through online trajectory optimization. In International Conference on Intelligent Robots and Systems (IROS), pp. 4906-4913. IEEE, 2012.

Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems (IROS), pp. 5026-5033. IEEE, 2012.

Wahlström, Niklas, Schön, Thomas B, and Deisenroth, Marc Peter. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.

Wang, Ziyu, de Freitas, Nando, and Lanctot, Marc. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.

Watter, Manuel, Springenberg, Jost, Boedecker, Joschka, and Riedmiller, Martin. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), pp. 2728-2736, 2015.