# Adaptive Skills, Adaptive Partitions (ASAP)

Daniel J. Mankowitz, Timothy A. Mann and Shie Mannor
The Technion - Israel Institute of Technology, Haifa, Israel
danielm@tx.technion.ac.il, mann.timothy@acm.org, shie@ee.technion.ac.il
(Timothy Mann now works at Google DeepMind.)

We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as to solve multiple tasks with considerably less experience than solving each task from scratch.

## 1 Introduction

Human decision-making involves decomposing a task into a course of action. The course of action is typically composed of abstract, high-level actions that may execute over different timescales (e.g., walk to the door or make a cup of coffee). The decision-maker chooses actions to execute to solve the task. These actions may need to be reused at different points in the task. In addition, the actions may need to be used across multiple, related tasks.

Consider, for example, the task of building a city. The course of action for building a city may involve building the foundations, laying down sewage pipes, as well as building houses and shopping malls. Each action operates over multiple timescales and certain actions (such as building a house) may need to be reused if additional units are required. In addition, these actions can be reused if a neighboring city needs to be developed (multi-task scenario).

Reinforcement Learning (RL) represents actions that last for multiple timescales as Temporally Extended Actions (TEAs) (Sutton et al., 1999), also referred to as options, skills (Konidaris & Barto, 2009) or macro-actions (Hauskrecht, 1998). It has been shown both experimentally (Precup & Sutton, 1997; Sutton et al., 1999; Silver & Ciosek, 2012; Mankowitz et al., 2014) and theoretically (Mann & Mannor, 2014) that TEAs speed up the convergence rates of RL planning algorithms. TEAs are seen as a potentially viable solution to making RL truly scalable. TEAs in RL have become popular in many domains, including RoboCup soccer (Bai et al., 2012), video games (Mann et al., 2015) and robotics (Fu et al., 2015). Here, decomposing the domains into temporally extended courses of action (for example, strategies in RoboCup, move combinations in video games and skill controllers in robotics) has generated impressive solutions. From here on in, we will refer to TEAs as skills.

A course of action is defined by a policy. A policy is a solution to a Markov Decision Process (MDP) and is defined as a mapping from states to a probability distribution over actions. That is, it tells the RL agent which action to perform given the agent's current state. We will refer to an inter-skill policy as a policy that tells the agent which skill to execute, given the current state.
A truly general skill learning framework must (1) learn skills as well as (2) automatically compose them together (as stated by Bacon & Precup (2015)) and determine where each skill should be executed (the inter-skill policy). This framework should also determine (3) where skills can be reused in different parts of the state space and (4) adapt to changes in the task itself. Finally, it should also be able to (5) correct model misspecification (Mankowitz et al., 2014). Whilst different forms of model misspecification exist in RL, we define it here as having an unsatisfactory set of skills and inter-skill policy that provide a sub-optimal solution to a given task. This skill learning framework should be able to correct this misspecification to obtain a near-optimal solution. A number of works have addressed some of these issues separately, as shown in Table 1. However, no work, to the best of our knowledge, has combined all of these elements into a truly general skill-learning framework.

Table 1: Comparison of approaches to ASAP. The criteria compared are automated skill learning, automatic skill composition, continuous state multitask learning, reusable skills, correcting model misspecification, and learning with policy gradient. The approaches compared are ASAP (this paper), da Silva et al. (2012), Konidaris & Barto (2009), Bacon & Precup (2015) and Eaton & Ruvolo (2013); each prior approach addresses only a subset of the criteria, whereas ASAP addresses all of them.

Our framework, entitled Adaptive Skills, Adaptive Partitions (ASAP), is the first of its kind to incorporate all of the above-mentioned elements into a single framework, as shown in Table 1, and to solve continuous state MDPs. It receives as input a misspecified model (a sub-optimal set of skills and inter-skill policy). The ASAP framework corrects the misspecification by simultaneously learning a near-optimal skill set and inter-skill policy, which are both stored, in a Bayesian-like manner, within the ASAP policy. In addition, ASAP automatically composes skills together, learns where to reuse them and learns skills across multiple tasks.

Main Contributions: (1) The Adaptive Skills, Adaptive Partitions (ASAP) algorithm that automatically corrects a misspecified model: it learns a set of near-optimal skills, automatically composes skills together and learns an inter-skill policy to solve a given task. (2) Learning skills over multiple different tasks by automatically adapting both the inter-skill policy and the skill set. (3) ASAP can determine where skills should be reused in the state space. (4) Theoretical convergence guarantees.

## 2 Background

Reinforcement Learning Problem: A Markov Decision Process (MDP) is defined by the 5-tuple $\langle X, A, R, \gamma, P \rangle$, where $X$ is the state space, $A$ is the action space, $R \in [-b, b]$ is a bounded reward function, $\gamma \in [0, 1]$ is the discount factor and $P : X \times A \to [0, 1]^X$ is the transition probability function of the MDP. The solution to an MDP is a policy $\pi : X \to \Delta_A$, a function mapping states to a probability distribution over actions (we write $\Delta_A$ for the set of such distributions). An optimal policy $\pi^*$ determines the best actions to take so as to maximize the expected reward. The value function
$$V^{\pi}(x) = \mathbb{E}_{a \sim \pi(\cdot \mid x)}\!\left[ R(x, a) + \gamma\, \mathbb{E}_{x' \sim P(\cdot \mid x, a)}\!\left[ V^{\pi}(x') \right] \right]$$
defines the expected reward for following a policy $\pi$ from state $x$. The optimal expected reward $V^{\pi^*}(x)$ is the expected value obtained for following the optimal policy from state $x$.
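For small finite MDPs, the value function above can be computed in closed form by solving the Bellman expectation equation as a linear system. The following sketch (not taken from the paper; the two-state, two-action MDP and all of its numbers are made up for illustration) shows this computation.

```python
import numpy as np

# Minimal illustration (not from the paper): computing V^pi for a tiny
# finite MDP by solving the linear Bellman system
#   V^pi = R^pi + gamma * P^pi V^pi   =>   V^pi = (I - gamma * P^pi)^{-1} R^pi.
gamma = 0.9
# Two states, two actions. P[a, x, x'] is the transition probability.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # action 0
              [[0.5, 0.5], [0.3, 0.7]]])  # action 1
R = np.array([[1.0, 0.0],                 # R[a, x]: reward for action a in state x
              [0.0, 2.0]])
pi = np.array([[0.6, 0.4],                # pi[x, a]: probability of action a in state x
               [0.2, 0.8]])

# State-transition matrix and expected reward under the policy.
P_pi = np.einsum('xa,axy->xy', pi, P)     # P^pi[x, x'] = sum_a pi(a|x) P(x'|x, a)
R_pi = np.einsum('xa,ax->x', pi, R)       # R^pi[x]     = sum_a pi(a|x) R(x, a)

V_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
print(V_pi)  # expected discounted return from each state under pi
```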
Policy Gradient: Policy Gradient (PG) methods have enjoyed success in recent years, especially in the field of robotics (Peters & Schaal, 2006, 2008). The goal in PG is to learn a policy $\pi_\theta$ that maximizes the expected return $J(\pi_\theta) = \int_\tau P(\tau) R(\tau)\, d\tau$, where $\tau$ denotes a trajectory, $P(\tau)$ is the probability of a trajectory and $R(\tau)$ is the reward obtained for a particular trajectory. $P(\tau)$ is defined as $P(\tau) = P(x_0) \prod_{k=0}^{T} P(x_{k+1} \mid x_k, a_k)\, \pi_\theta(a_k \mid x_k)$. Here, $x_k \in X$ is the state at the $k$-th timestep of the trajectory, $a_k \in A$ is the action at the $k$-th timestep and $T$ is the trajectory length. In the general formulation of policy gradient, only the policy is parameterized, with parameters $\theta$. The idea is then to update the policy parameters using stochastic gradient ascent, leading to the update rule $\theta_{t+1} = \theta_t + \eta \nabla_\theta J(\pi_\theta)$, where $\theta_t$ are the policy parameters at timestep $t$, $\nabla_\theta J(\pi_\theta)$ is the gradient of the objective function with respect to the parameters and $\eta$ is the step size.

## 3 Skills, Skill Partitions and Intra-Skill Policy

Skills: A skill is a parameterized Temporally Extended Action (TEA) (Sutton et al., 1999). The power of a skill is that it incorporates both generalization (due to the parameterization) and temporal abstraction. Skills are a special case of options and therefore inherit many of their useful theoretical properties (Sutton et al., 1999; Precup et al., 1998).

Definition 1. A skill $\zeta$ is a TEA that consists of the two-tuple $\zeta = \langle \sigma_\theta, p(x) \rangle$, where $\sigma_\theta : X \to \Delta_A$ is a parameterized intra-skill policy with parameters $\theta \in \mathbb{R}^d$ and $p : X \to [0, 1]$ is the termination probability distribution of the skill.

Skill Partitions: A skill, by definition, performs a specialized task on a sub-region of a state space. We refer to these sub-regions as Skill Partitions (SPs), which are necessary for skills to specialize during the learning process. A given set of SPs covering a state space effectively defines the inter-skill policy, as they determine where each skill should be executed. These partitions are unknown a-priori and are generated using intersections of hyperplane half-spaces (described below). Hyperplanes provide a natural way to automatically compose skills together. In addition, once a skill is being executed, the agent needs to select actions from the skill's intra-skill policy $\sigma_\theta$. We next utilize SPs and the intra-skill policy for each skill to construct the ASAP policy, defined in Section 4. We first define a skill hyperplane.

Definition 2. Skill Hyperplane (SH): Let $\psi_{x,m} \in \mathbb{R}^d$ be a vector of features that depend on a state $x \in X$ and an MDP environment $m$. Let $\beta_i \in \mathbb{R}^d$ be a vector of hyperplane parameters. A skill hyperplane is defined as $\psi_{x,m}^T \beta_i = L$, where $L$ is a constant.

In this work, we interpret hyperplanes to mean that the intersections of skill hyperplane half-spaces form sub-regions in the state space, called Skill Partitions (SPs), defining where each skill is executed. Figure 1a contains two example skill hyperplanes $h_1, h_2$. Skill $\zeta_1$ is executed in the SP defined by the intersection of the positive half-space of $h_1$ and the negative half-space of $h_2$. The same argument applies for $\zeta_0, \zeta_2, \zeta_3$. From here on in, we will refer to skill $\zeta_i$ interchangeably with its index $i$.

Skill hyperplanes have two functions: (1) they automatically compose skills together, creating chainable skills as desired by Bacon & Precup (2015); (2) they define SPs, which enable us to derive the probability of executing a skill given a state $x$ and MDP $m$. First, we need to be able to uniquely identify a skill. We define a binary vector $B = [b_1, b_2, \ldots, b_K] \in \{0, 1\}^K$, where $b_k$ is a Bernoulli random variable and $K$ is the number of skill hyperplanes. We define the skill index $i = \sum_{k=1}^{K} 2^{k-1} b_k$ as a sum of the Bernoulli random variables $b_k$. Note that this is but one approach to generating skills (and SPs). In principle this setup defines $2^K$ skills, but in practice far fewer skills are typically used (see the experiments). Furthermore, the complexity of an SP is governed by the VC-dimension.
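To make the half-space indexing concrete, here is a minimal sketch (not from the paper; the feature map and hyperplane parameters are illustrative) that reproduces the Figure 1a setup: two hyperplanes over a 2D state, each state mapped to a bit vector by the sign of $\psi_{x,m}^T \beta_k$, and the bits combined into the skill index $i = \sum_k 2^{k-1} b_k$.

```python
import numpy as np

# Illustrative sketch (not from the paper): two skill hyperplanes over a
# 2D state space, as in Figure 1a. A state is assigned the skill whose
# partition (intersection of half-spaces) it falls into.

def hyperplane_features(x, m=None):
    # Hypothetical feature vector psi_{x,m}: a bias term plus the raw state.
    return np.concatenate(([1.0], x))

# Hypothetical hyperplane parameters beta_k (one column per hyperplane).
# h1: vertical line x0 = 0.5, h2: horizontal line x1 = 0.5.
beta = np.array([[-0.5, -0.5],
                 [ 1.0,  0.0],
                 [ 0.0,  1.0]])   # shape (d, K) with d = 3, K = 2

def skill_index(x, beta, m=None):
    psi = hyperplane_features(x, m)
    bits = (psi @ beta > 0).astype(int)                    # b_k = 1 iff psi^T beta_k > 0
    return int(np.sum(bits * 2 ** np.arange(len(bits))))   # i = sum_k 2^{k-1} b_k

# Each corner of the unit square lands in a different skill partition.
for x in [np.array([0.2, 0.2]), np.array([0.8, 0.2]),
          np.array([0.2, 0.8]), np.array([0.8, 0.8])]:
    print(x, "-> skill", skill_index(x, beta))
```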
We can now define the probability of executing skill $i$ as a Bernoulli likelihood in Equation 1:
$$P(i \mid x, m) = \prod_{k=1}^{K} p_k(b_k = i_k \mid x, m) . \qquad (1)$$
Here, $i_k \in \{0, 1\}$ is the value of the $k$-th bit of $B$, $x$ is the current state and $m$ is a description of the MDP. The probabilities $p_k(b_k = 1 \mid x, m)$ and $p_k(b_k = 0 \mid x, m)$ are defined in Equation 2:
$$p_k(b_k = 1 \mid x, m) = \frac{1}{1 + \exp(-\alpha \psi_{x,m}^T \beta_k)} , \qquad p_k(b_k = 0 \mid x, m) = 1 - p_k(b_k = 1 \mid x, m) . \qquad (2)$$
We have made use of the logistic sigmoid function to ensure valid probabilities, where $\psi_{x,m}^T \beta_k$ is a skill hyperplane and $\alpha > 0$ is a temperature parameter. The intuition here is that the $k$-th bit of a skill is $b_k = 1$ if the skill hyperplane satisfies $\psi_{x,m}^T \beta_k > 0$, meaning that the skill's partition is in the positive half-space of the hyperplane. Similarly, $b_k = 0$ if $\psi_{x,m}^T \beta_k < 0$, corresponding to the negative half-space. Using skill 3 as an example with $K = 2$ hyperplanes in Figure 1a, we would define the Bernoulli likelihood of executing $\zeta_3$ as $P(i = 3 \mid x, m) = p_1(b_1 = 1 \mid x, m) \cdot p_2(b_2 = 1 \mid x, m)$.

Intra-Skill Policy: Now that we can define the probability of executing a skill based on its SP, we define the intra-skill policy $\sigma_\theta$ for each skill. The Gibbs distribution is a commonly used function for defining policies in RL (Sutton et al., 1999). We therefore define the intra-skill policy for skill $i$, parameterized by $\theta_i \in \mathbb{R}^d$, as
$$\sigma_{\theta_i}(a \mid x) = \frac{\exp(\alpha \phi_{x,a}^T \theta_i)}{\sum_{b \in A} \exp(\alpha \phi_{x,b}^T \theta_i)} . \qquad (3)$$
Here, $\alpha > 0$ is the temperature and $\phi_{x,a} \in \mathbb{R}^d$ is a feature vector that depends on the current state $x \in X$ and action $a \in A$. Now that we have a definition of both the probability of executing a skill and an intra-skill policy, we need to incorporate these distributions into the policy gradient setting using a generalized trajectory.

Generalized Trajectory: A generalized trajectory is necessary to derive policy gradient update rules with respect to the parameters $\Theta, \beta$, as will be shown in Section 4. A typical trajectory is usually defined as $\tau = (x_t, a_t, r_t, x_{t+1})_{t=0}^{T}$, where $T$ is the length of the trajectory. For a generalized trajectory, our algorithm emits a class $i_t$ at each timestep $t$, which denotes the skill that was executed. The generalized trajectory is defined as $g = (x_t, a_t, i_t, r_t, x_{t+1})_{t=0}^{T}$. The probability of a generalized trajectory, as an extension of the PG trajectory in Section 2, is now
$$P_{\Theta,\beta}(g) = P(x_0) \prod_{t=0}^{T} P(x_{t+1} \mid x_t, a_t)\, P_\beta(i_t \mid x_t, m)\, \sigma_{\theta_{i_t}}(a_t \mid x_t) ,$$
where $P_\beta(i_t \mid x_t, m)$ is the probability of a skill being executed given the state $x_t \in X$ and environment $m$ at time $t$, and $\sigma_{\theta_{i_t}}(a_t \mid x_t)$ is the probability of executing action $a_t \in A$ at time $t$ given that we are executing skill $i_t$. The generalized trajectory is now a function of the two parameter vectors $\theta$ and $\beta$.
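As a small illustration of how these two distributions interact (a sketch under assumed feature shapes and random parameters, not the authors' code), the following samples one step of a generalized trajectory: a skill index from the Bernoulli likelihood in Equations 1-2 and then an action from the Gibbs intra-skill policy in Equation 3.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0                        # temperature
K, d, n_actions = 2, 3, 4          # illustrative sizes

# Hypothetical parameters: one hyperplane per column of beta,
# one intra-skill parameter vector per skill (2^K skills).
beta = rng.normal(size=(d, K))
theta = rng.normal(size=(2 ** K, d * n_actions))

def psi(x, m=None):                # hyperplane features psi_{x,m}
    return np.concatenate(([1.0], x))

def phi(x, a):                     # state-action features phi_{x,a} (one block per action)
    f = np.zeros(d * n_actions)
    f[a * d:(a + 1) * d] = np.concatenate(([1.0], x))
    return f

def sample_skill(x, m=None):
    # Equations 1-2: each bit b_k is Bernoulli with a sigmoid of the hyperplane.
    p1 = 1.0 / (1.0 + np.exp(-alpha * psi(x, m) @ beta))   # p_k(b_k = 1 | x, m)
    bits = rng.random(K) < p1
    return int(np.sum(bits * 2 ** np.arange(K)))           # skill index i

def sample_action(x, i):
    # Equation 3: Gibbs distribution over actions for skill i.
    logits = alpha * np.array([phi(x, a) @ theta[i] for a in range(n_actions)])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs)

x = np.array([0.3, 0.7])
i = sample_skill(x)                # which skill to execute (inter-skill policy)
a = sample_action(x, i)            # which action the skill's intra-skill policy takes
print("skill", i, "action", a)
```

In the limit $\alpha \to \infty$ the sigmoid bits become the hard half-space assignment of the earlier sketch, so the soft skill-selection distribution concentrates on the skill whose partition contains the state.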
## 4 Adaptive Skills, Adaptive Partitions (ASAP) Framework

The Adaptive Skills, Adaptive Partitions (ASAP) framework simultaneously learns a near-optimal set of skills and SPs (the inter-skill policy), given an initially misspecified model. ASAP also automatically composes skills together and allows for a multi-task setting, as it incorporates the environment $m$ into its hyperplane feature set. We have previously defined two important distributions, $P_\beta(i_t \mid x_t, m)$ and $\sigma_{\theta_{i_t}}(a_t \mid x_t)$. These distributions are used to collectively define the ASAP policy, which is presented below. Using the notion of a generalized trajectory, the ASAP policy can be learned in a policy gradient setting.

ASAP Policy: Assume that we are given a probability distribution $\mu$ over MDPs with a $d$-dimensional state-action space and a $z$-dimensional vector describing each MDP. We define $\beta$ as a $(d + z) \times K$ matrix where each column $\beta_i$ represents a skill hyperplane, and $\Theta$ as a $d \times 2^K$ matrix where each column $\theta_j$ parameterizes an intra-skill policy. Using the previously defined distributions, we now define the ASAP policy.

Definition 3. (ASAP Policy). Given $K$ skill hyperplanes, a set of $2^K$ skills $\Sigma = \{\zeta_i \mid i = 1, \ldots, 2^K\}$, a state space $x \in X$, a set of actions $a \in A$ and an MDP $m$ from a hypothesis space of MDPs, the ASAP policy is defined as
$$\pi_{\Theta,\beta}(a \mid x, m) = \sum_{i=1}^{2^K} P_\beta(i \mid x, m)\, \sigma_{\theta_i}(a \mid x) , \qquad (4)$$
where $P_\beta(i \mid x, m)$ and $\sigma_{\theta_i}(a \mid x)$ are the distributions defined in Equations 1 and 3, respectively.

This is a powerful description for a policy, which resembles a Bayesian approach, as the policy takes into account the uncertainty of the skills that are executing as well as the actions that each skill's intra-skill policy chooses. We now define the ASAP objective with respect to the ASAP policy.

ASAP Objective: We defined the policy with respect to a hypothesis space of MDPs $m$. We now need to define an objective function which takes this hypothesis space into account. Since we assume that we are provided with a distribution $\mu : M \to [0, 1]$ over possible MDP models $m \in M$ with a $d$-dimensional state-action space, we can incorporate this into the ASAP objective function:
$$\rho(\pi_{\Theta,\beta}) = \int_m \mu(m)\, J^{(m)}(\pi_{\Theta,\beta})\, dm , \qquad (5)$$
where $\pi_{\Theta,\beta}$ is the ASAP policy and $J^{(m)}(\pi_{\Theta,\beta})$ is the expected return for MDP $m$ with respect to the ASAP policy. To simplify the notation, we group all of the parameters into a single parameter vector $\Omega = [\mathrm{vec}(\Theta), \mathrm{vec}(\beta)]$. We define the expected reward for generalized trajectories $g$ as $J(\pi_\Omega) = \int_g P_\Omega(g) R(g)\, dg$, where $R(g)$ is the reward obtained for a particular trajectory $g$. This is a slight variation of the original policy gradient objective defined in Section 2. We then insert $J(\pi_\Omega)$ into Equation 5 and obtain the ASAP objective function
$$\rho(\pi_\Omega) = \int_m \mu(m)\, J^{(m)}(\pi_\Omega)\, dm , \qquad (6)$$
where $J^{(m)}(\pi_\Omega)$ is the expected return for policy $\pi_\Omega$ in MDP $m$. Next, we need to derive gradient update rules to learn the parameters of the optimal policy $\pi_\Omega^*$ that maximizes this objective.
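The mixture in Equation 4 is cheap to evaluate explicitly when $K$ is small. Below is a self-contained sketch (illustrative feature shapes and random parameters, not the paper's implementation) that computes the full ASAP action distribution $\pi_{\Theta,\beta}(\cdot \mid x, m)$ for one state by marginalizing over the $2^K$ skills.

```python
import numpy as np

alpha = 1.0
K, n_actions = 2, 4
rng = np.random.default_rng(1)

# Illustrative features: psi_{x,m} appends an MDP descriptor m to [1, x];
# phi_{x,a} is a per-action block filled with [1, x].
def psi(x, m):
    return np.concatenate(([1.0], x, m))

def phi(x, a, d):
    f = np.zeros(d * n_actions)
    f[a * d:(a + 1) * d] = np.concatenate(([1.0], x))
    return f

x, m = np.array([0.3, 0.7]), np.array([0.0])    # hypothetical state and task descriptor
d = 1 + len(x)                                  # dimension of [1, x]
beta = rng.normal(size=(len(psi(x, m)), K))     # (d + z) x K hyperplane matrix
theta = rng.normal(size=(2 ** K, d * n_actions))

def skill_probs(x, m):
    """Equations 1-2: P_beta(i | x, m) for every skill index i."""
    p1 = 1.0 / (1.0 + np.exp(-alpha * psi(x, m) @ beta))    # p_k(b_k = 1 | x, m)
    probs = np.ones(2 ** K)
    for i in range(2 ** K):
        bits = (i >> np.arange(K)) & 1                      # binary code of skill i
        probs[i] = np.prod(np.where(bits == 1, p1, 1.0 - p1))
    return probs

def intra_skill_probs(x, i):
    """Equation 3: Gibbs distribution sigma_{theta_i}(. | x)."""
    logits = alpha * np.array([phi(x, a, d) @ theta[i] for a in range(n_actions)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Equation 4: the ASAP policy is the skill-probability-weighted mixture.
pi = sum(skill_probs(x, m)[i] * intra_skill_probs(x, i) for i in range(2 ** K))
print(pi, pi.sum())   # a valid distribution over the four actions (sums to 1)
```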
ASAP Gradients: To learn both the intra-skill policy parameter matrix $\Theta$ and the hyperplane parameter matrix $\beta$ (and therefore, implicitly, the SPs), we derive an update rule for the policy gradient framework with generalized trajectories. The full derivation is in the supplementary material. The first step involves calculating the gradient of the ASAP objective function, yielding the ASAP gradient (Theorem 1).

Theorem 1. (ASAP Gradient Theorem). Suppose that the ASAP objective function is $\rho(\pi_\Omega) = \int_m \mu(m) J^{(m)}(\pi_\Omega)\, dm$, where $\mu(m)$ is a distribution over MDPs $m$ and $J^{(m)}(\pi_\Omega)$ is the expected return for MDP $m$ whilst following policy $\pi_\Omega$. Then the gradient of this objective is
$$\nabla_\Omega \rho(\pi_\Omega) = \mathbb{E}_{\mu(m)}\!\left[ \mathbb{E}_{P^{(m)}_\Omega(g)}\!\left[ \sum_{t=0}^{H^{(m)}} \nabla_\Omega Z^{(m)}_\Omega(x_t, i_t, a_t)\, R^{(m)} \right] \right] ,$$
where $Z^{(m)}_\Omega(x_t, i_t, a_t) = \log P_\beta(i_t \mid x_t, m)\, \sigma_{\theta_{i_t}}(a_t \mid x_t)$, $H^{(m)}$ is the length of a trajectory for MDP $m$, and $R^{(m)} = \sum_{t=0}^{H^{(m)}} \gamma^t r_t$ is the discounted cumulative reward for the trajectory. (These expectations can easily be sampled; see the supplementary material.)

If we are able to derive $\nabla_\Omega Z^{(m)}_\Omega(x_t, i_t, a_t)$, then we can estimate the gradient $\nabla_\Omega \rho(\pi_\Omega)$. We will write $Z^{(m)}_\Omega$ for $Z^{(m)}_\Omega(x_t, i_t, a_t)$ where it is clear from context. It turns out that it is possible to derive this term as a result of the generalized trajectory. This yields the gradients $\nabla_\Theta Z^{(m)}_\Omega$ and $\nabla_\beta Z^{(m)}_\Omega$ in Theorems 2 and 3, respectively. The derivations can be found in the supplementary material.

Theorem 2. ($\Theta$ Gradient Theorem). Suppose that $\Theta$ is a $d \times 2^K$ matrix where each column $\theta_j$ parameterizes an intra-skill policy. Then the gradient $\nabla_{\theta_{i_t}} Z^{(m)}_\Omega$ corresponding to the intra-skill parameters of the skill $i_t$ executed at time $t$ is
$$\nabla_{\theta_{i_t}} Z^{(m)}_\Omega = \alpha \phi_{x_t,a_t} - \frac{\alpha \sum_{b \in A} \phi_{x_t,b} \exp(\alpha \phi_{x_t,b}^T \theta_{i_t})}{\sum_{b \in A} \exp(\alpha \phi_{x_t,b}^T \theta_{i_t})} ,$$
where $\alpha > 0$ is the temperature parameter and $\phi_{x_t,a_t} \in \mathbb{R}^d$ is the feature vector of the current state $x_t \in X$ and the current action $a_t \in A$.

Theorem 3. ($\beta$ Gradient Theorem). Suppose that $\beta$ is a $(d + z) \times K$ matrix where each column $\beta_k$ represents a skill hyperplane. Then the gradient $\nabla_{\beta_k} Z^{(m)}_\Omega$ corresponding to the parameters of the $k$-th hyperplane is
$$\nabla_{\beta_{k,1}} Z^{(m)}_\Omega = \frac{\alpha \psi_{x_t,m} \exp(-\alpha \psi_{x_t,m}^T \beta_k)}{1 + \exp(-\alpha \psi_{x_t,m}^T \beta_k)} , \qquad \nabla_{\beta_{k,0}} Z^{(m)}_\Omega = -\alpha \psi_{x_t,m} + \frac{\alpha \psi_{x_t,m} \exp(-\alpha \psi_{x_t,m}^T \beta_k)}{1 + \exp(-\alpha \psi_{x_t,m}^T \beta_k)} ,$$
where $\alpha > 0$ is the hyperplane temperature parameter, $\psi_{x_t,m}^T \beta_k$ is the $k$-th skill hyperplane for MDP $m$, $\beta_{k,1}$ corresponds to locations in the binary vector equal to $1$ ($b_k = 1$) and $\beta_{k,0}$ corresponds to locations in the binary vector equal to $0$ ($b_k = 0$).

Using these gradient updates, we can then order all of the gradients into a vector
$$\nabla_\Omega Z^{(m)}_\Omega = \left\langle \nabla_{\theta_1} Z^{(m)}_\Omega, \ldots, \nabla_{\theta_{2^K}} Z^{(m)}_\Omega, \nabla_{\beta_1} Z^{(m)}_\Omega, \ldots, \nabla_{\beta_K} Z^{(m)}_\Omega \right\rangle$$
and update both the intra-skill policy parameters and the hyperplane parameters for the given task (learning a skill set and SPs). Note that the updates occur on a single time scale. This is formally stated in the ASAP algorithm.

## 5 ASAP Algorithm

We present the ASAP algorithm (Algorithm 1), which dynamically and simultaneously learns skills and the inter-skill policy, and automatically composes skills together by learning SPs. The skills ($\Theta$ matrix) and SPs ($\beta$ matrix) are initially arbitrary and therefore form a misspecified model. Line 2 combines the skill and hyperplane parameters into a single parameter vector $\Omega$. Lines 3-7 learn the skill and hyperplane parameters (and therefore, implicitly, the skill partitions). In line 4 a generalized trajectory is generated using the current ASAP policy. The gradient $\nabla_\Omega \rho(\pi_\Omega)$ is then estimated from this trajectory in line 5 and used to update the parameters in line 6. This is repeated until the skill and hyperplane parameters have converged, thus correcting the misspecified model. Theorem 4 provides a convergence guarantee of ASAP to a local optimum (see the supplementary material for the proof).

Algorithm 1 ASAP
Require: $\phi_{x,a} \in \mathbb{R}^d$ {state-action feature vector}, $\psi_{x,m} \in \mathbb{R}^{d+z}$ {skill hyperplane feature vector}, $K$ {the number of hyperplanes}, $\Theta \in \mathbb{R}^{d \times 2^K}$ {an arbitrary skill matrix}, $\beta \in \mathbb{R}^{(d+z) \times K}$ {an arbitrary skill hyperplane matrix}, $\mu(m)$ {a distribution over MDP tasks}
1: $Z = d \cdot 2^K + (d + z)K$ {the number of parameters}
2: $\Omega = [\mathrm{vec}(\Theta), \mathrm{vec}(\beta)] \in \mathbb{R}^Z$
3: repeat
4: Perform a trial (which may consist of multiple MDP tasks) and obtain $x_{0:H}, i_{0:H}, a_{0:H}, r_{0:H}, m_{0:H}$ {states, skills, actions, rewards, task-specific information}
5: $\nabla_\Omega \rho(\pi_\Omega) = \sum_{t=0}^{T} \nabla_\Omega Z^{(m)}_\Omega(x_t, i_t, a_t)\, R^{(m)}$ {$T$ is the task episode length}
6: $\Omega \leftarrow \Omega + \eta \nabla_\Omega \rho(\pi_\Omega)$
7: until the parameters $\Omega$ have converged
8: return $\Omega$
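To make the update concrete, here is a sketch of one pass of lines 4-6 of Algorithm 1 for a single MDP $m$ (illustrative feature shapes and a hypothetical trajectory, not the authors' implementation): the closed-form gradients from Theorems 2 and 3 are accumulated over a generalized trajectory and applied with a plain REINFORCE-style step, whereas the paper's experiments use an actor-critic variant.

```python
import numpy as np

# Sketch of Algorithm 1, lines 4-6, for a single MDP m (all sizes and the
# trajectory below are illustrative, not from the paper).
rng = np.random.default_rng(2)
alpha, eta, gamma = 1.0, 0.1, 0.95
K, n_actions, x_dim = 2, 4, 2
d = 1 + x_dim                       # phi block size per action: [1, x]
dz = d                              # psi dimension (no task descriptor here, z = 0)

theta = rng.normal(scale=0.1, size=(2 ** K, d * n_actions))   # intra-skill parameters
beta = rng.normal(scale=0.1, size=(dz, K))                    # hyperplane parameters

def psi(x):
    return np.concatenate(([1.0], x))

def phi(x, a):
    f = np.zeros(d * n_actions)
    f[a * d:(a + 1) * d] = np.concatenate(([1.0], x))
    return f

def grad_theta_Z(x, a, i):
    # Theorem 2: alpha*phi(x, a) minus the softmax-weighted average of phi(x, b).
    feats = np.stack([phi(x, b) for b in range(n_actions)])
    w = np.exp(alpha * feats @ theta[i])
    w /= w.sum()
    return alpha * phi(x, a) - alpha * (w @ feats)

def grad_beta_Z(x, i):
    # Theorem 3: one column per hyperplane, the sign depending on bit b_k of skill i.
    bits = (i >> np.arange(K)) & 1
    s = np.exp(-alpha * psi(x) @ beta) / (1.0 + np.exp(-alpha * psi(x) @ beta))
    g = alpha * psi(x)[:, None] * s[None, :]                   # b_k = 1 case
    return np.where(bits == 1, g, g - alpha * psi(x)[:, None]) # b_k = 0 case

# A hypothetical generalized trajectory g = (x_t, i_t, a_t, r_t) (line 4).
trajectory = [(rng.random(x_dim), rng.integers(2 ** K), rng.integers(n_actions), -1.0)
              for _ in range(10)]
R = sum(gamma ** t * r for t, (_, _, _, r) in enumerate(trajectory))   # discounted return

# Accumulate the gradient of the objective (line 5) and update the parameters (line 6).
g_theta, g_beta = np.zeros_like(theta), np.zeros_like(beta)
for x, i, a, _ in trajectory:
    g_theta[i] += grad_theta_Z(x, a, i) * R
    g_beta += grad_beta_Z(x, i) * R
theta += eta * g_theta
beta += eta * g_beta
```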
Theorem 4. (Convergence of ASAP). Given an ASAP policy $\pi_\Omega$, an ASAP objective over MDP models $\rho(\pi_\Omega)$ and the ASAP gradient update rules, if (1) the step sizes $\eta_k$ satisfy $\lim_{k \to \infty} \eta_k = 0$ and $\sum_k \eta_k = \infty$, and (2) the second derivative of the policy is bounded and the rewards are bounded, then the sequence $\{\rho(\pi_{\Omega,k})\}_{k=0}^{\infty}$ converges such that $\lim_{k \to \infty} \nabla_\Omega \rho(\pi_{\Omega,k}) = 0$ almost surely.

## 6 Experiments

The experiments have been performed on four different continuous domains: the Two Rooms (2R) domain (Figure 1b), the Flipped 2R domain (Figure 1c), the Three Rooms (3R) domain (Figure 1d) and RoboCup domains (Figure 1e) that include a one-on-one scenario between a striker and a goalkeeper (R1), a two-on-one scenario of a striker against a goalkeeper and a defender (R2), and a striker against two defenders and a goalkeeper (R3) (see the supplementary material). In each experiment, ASAP is provided with a misspecified model; that is, a set of skills and SPs (the inter-skill policy) that achieve degenerate, sub-optimal performance. ASAP corrects this misspecified model in each case to learn a set of near-optimal skills and SPs. For each experiment we implement ASAP using Actor-Critic Policy Gradient (AC-PG) as the learning algorithm (AC-PG works well in practice and can be trivially incorporated into ASAP with convergence guarantees).

The Two-Room and Flipped-Room Domains (2R): In both domains, the agent (red ball) needs to reach the goal location (blue square) in the shortest amount of time. The agent receives constant negative rewards and, upon reaching the goal, receives a large positive reward. There is a wall dividing the environment, which creates two rooms. The state space is a 4-tuple consisting of the continuous location $\langle x_{agent}, y_{agent} \rangle$ of the agent and the location $\langle x_{goal}, y_{goal} \rangle$ of the center of the goal. The agent can move in each of the four cardinal directions. For each experiment involving the two-room domains, a single hyperplane is learned (resulting in two SPs) with a linear feature vector representation $\psi_{x,m} = [1, x_{agent}, y_{agent}]$. In addition, a skill is learned in each of the two SPs. The intra-skill policies are represented as a probability distribution over actions.

Automated Hyperplane and Skill Learning: Using ASAP, the agent learned intuitive SPs and skills, as seen in Figures 1f and 1g. Each colored region corresponds to an SP. The white arrows have been superimposed onto the figures to indicate the skills learned for each SP. Since each intra-skill policy is a probability distribution over actions, each skill is unable to solve the entire task on its own. ASAP has taken this into account and has positioned the hyperplane accordingly such that the given skill representation can solve the task. Figure 2a shows that ASAP improves upon the initial misspecified partitioning to attain near-optimal performance, compared to executing ASAP on the fixed initial misspecified partitioning and on a fixed approximately optimal partitioning.

Figure 1: (a) The intersection of skill hyperplanes $\{h_1, h_2\}$ forms four partitions, each of which defines a skill's execution region (the inter-skill policy). The (b) 2R, (c) Flipped 2R, (d) 3R and (e) RoboCup domains (with a varying number of defenders for R1, R2, R3). The learned skills and Skill Partitions (SPs) for the (f) 2R, (g) Flipped 2R, (h) 3R domains and (i) across multiple tasks.

Figure 2: Average reward of the learned ASAP policy compared to (1) the approximately optimal SPs and skill set as well as (2) the initial misspecified model, for the (a) 2R, (b) 3R, (c) 2R learning across multiple tasks and (d) 2R without learning, by flipping the hyperplane. (e) The average reward of the learned ASAP policy for a varying number of hyperplanes $K$. (f) The learned SPs and skill set for the R1 domain. (g) The learned SPs using a polynomial hyperplane (1), (2) and linear hyperplane (3) representation. (h) The learned SPs using a polynomial hyperplane representation without the defender's location as a feature (1) and with the defender's x location (2), y location (3), and x, y location as a feature (4). (i) The dribbling behavior of the striker when taking the defender's y location into account. (j) The average reward for the R1 domain.
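As an illustration of the 2R setup described above (a sketch with hypothetical parameter values, not the authors' code), a single hyperplane over the feature vector $\psi_{x,m} = [1, x_{agent}, y_{agent}]$ splits the environment into two SPs, and each SP holds an intra-skill policy that is simply a probability distribution over the four cardinal actions.

```python
import numpy as np

# Illustrative 2R setup: one hyperplane (K = 1) over psi = [1, x_agent, y_agent]
# gives two skill partitions; each skill's intra-skill policy is a distribution
# over the four cardinal actions. All numbers below are made up.
ACTIONS = ["up", "down", "left", "right"]
alpha = 1.0

beta = np.array([-5.0, 10.0, 0.0])        # hypothetical learned hyperplane: x_agent = 0.5
skill_action_probs = np.array([
    [0.1, 0.1, 0.1, 0.7],                 # skill 0: mostly move right (towards the doorway)
    [0.6, 0.1, 0.2, 0.1],                 # skill 1: mostly move up (towards the goal)
])

def act(x_agent, y_agent, rng):
    psi = np.array([1.0, x_agent, y_agent])
    p_skill1 = 1.0 / (1.0 + np.exp(-alpha * psi @ beta))   # Equation 2 with K = 1
    skill = int(rng.random() < p_skill1)
    action = rng.choice(len(ACTIONS), p=skill_action_probs[skill])
    return skill, ACTIONS[action]

rng = np.random.default_rng(3)
print(act(0.2, 0.3, rng))   # left room: most likely skill 0
print(act(0.9, 0.3, rng))   # right room: most likely skill 1
```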
Multiple Hyperplanes: We analyzed the ASAP framework when learning multiple hyperplanes in the two-room domain. As seen in Figure 2e, increasing the number of hyperplanes $K$ does not have an impact on the final solution in terms of average reward. However, it does increase the computational complexity of the algorithm, since $2^K$ skills need to be learned. The approximate points of convergence are marked in the figure as K1, K2 and K3, respectively. In addition, two skills dominate in each case, producing similar partitions to those seen in Figure 1a (see the supplementary material), indicating that ASAP learns that not all skills are necessary to solve the task.

Multitask Learning: We first applied ASAP to the 2R domain (Task 1) and attained a near-optimal average reward (Figure 2c). It took approximately 35000 episodes to reach near-optimal performance and resulted in the SPs and skill set shown in Figure 1i (top). Using the learned SPs and skills, ASAP was then able to adapt and learn a new set of SPs and skills to solve a different task (Flipped 2R - Task 2) in only 5000 episodes (Figure 2c), indicating that the parameters learned from the old task provided a good initialization for the new task. The knowledge transfer is seen in Figure 1i (bottom), as the SPs do not significantly change between tasks, yet the skills are completely relearned. We also wanted to see whether we could flip the SPs; that is, switch the sign of the hyperplane parameters learned in the 2R domain and see whether ASAP can solve the Flipped 2R domain (Task 2) without any additional learning. Due to the symmetry of the domains, ASAP was indeed able to solve the new domain and attained near-optimal performance (Figure 2d). This is an exciting result, as many problems, especially navigation tasks, possess symmetrical characteristics. This insight could dramatically reduce the sample complexity of such problems.

The Three-Room Domain (3R): The 3R domain (Figure 1d) is similar to the 2R domain regarding the goal, state space, available actions and rewards. However, in this case there are two walls, dividing the state space into three rooms. The hyperplane feature vector $\psi_{x,m}$ consists of a single Fourier feature. The intra-skill policy is a probability distribution over actions. The resulting learned hyperplane partitioning and skill set are shown in Figure 1h. Using this partitioning, ASAP achieved near-optimal performance (Figure 2b). This experiment shows an insightful and unexpected result.

Reusable Skills: Using this hyperplane representation, ASAP was able to learn not only the intra-skill policies and SPs, but also that skill A needed to be reused in two different parts of the state space (Figure 1h). ASAP therefore shows the potential to automatically create reusable skills.

RoboCup Domain: The RoboCup 2D soccer simulation domain (Akiyama & Nakashima, 2014) is a 2D soccer field (Figure 1e) with two opposing teams.
We utilized three RoboCup sub-domains (https://github.com/mhauskn/HFO.git), R1, R2 and R3, as mentioned previously. In these sub-domains, a striker (the agent) needs to learn to dribble the ball and try to score goals past the goalkeeper.

State space: R1 domain - the continuous locations of the striker $\langle x_{striker}, y_{striker} \rangle$, the ball $\langle x_{ball}, y_{ball} \rangle$, the goalkeeper $\langle x_{goalkeeper}, y_{goalkeeper} \rangle$ and the constant goal location $\langle x_{goal}, y_{goal} \rangle$. R2 domain - we add the defender's location $\langle x_{defender}, y_{defender} \rangle$ to the state space. R3 domain - we add the locations of two defenders.

Features: For the R1 domain, we tested both a linear and a degree-two polynomial feature representation for the hyperplanes. For the R2 and R3 domains, we also utilized a degree-two polynomial hyperplane feature representation.

Actions: The striker has three actions: (1) move to the ball (M), (2) move to the ball and dribble towards the goal (D), and (3) move to the ball and shoot towards the goal (S).

Rewards: The reward setup is consistent with logical football strategies (Hausknecht & Stone, 2015; Bai et al., 2012): small negative (positive) rewards for shooting from outside (inside) the box and for dribbling when inside (outside) the box; large negative rewards for losing possession and kicking the ball out of bounds; a large positive reward for scoring.

Different SP Optima: Since ASAP attains a locally optimal solution, it may sometimes learn different SPs. For the polynomial hyperplane feature representation, ASAP attained two different solutions, shown in Figures 2g(1) and 2g(2), respectively. Both achieve near-optimal performance compared to the approximately optimal scoring controller (see the supplementary material). For the linear feature representation, the SPs and skill set in Figure 2g(3) were obtained and achieved near-optimal performance (Figure 2j), outperforming the polynomial representation.

SP Sensitivity: In the R2 domain, an additional player (the defender) is added to the game. It is expected that the presence of the defender will affect the shape of the learned SPs. ASAP again learns intuitive SPs. However, the shape of the learned SPs changes based on the pre-defined hyperplane feature vector $\psi_{x,m}$. Figure 2h(1) shows the learned SPs when the location of the defender is not used as a hyperplane feature. When the x location of the defender is utilized, the flatter SPs in Figure 2h(2) are learned. Using the y location of the defender as a hyperplane feature causes the hyperplane offset shown in Figure 2h(3). This is due to the striker learning to dribble around the defender in order to score a goal, as seen in Figure 2i. Finally, taking the x, y location of the defender into account results in the squashed SPs shown in Figure 2h(4), clearly showing the sensitivity and adaptability of ASAP to dynamic factors in the environment.

## 7 Discussion

We have presented the Adaptive Skills, Adaptive Partitions (ASAP) framework, which is able to automatically compose skills together and learns a near-optimal skill set and skill partitions (the inter-skill policy) simultaneously to correct an initially misspecified model. We derived the gradient update rules for both the skill and skill hyperplane parameters and incorporated them into a policy gradient framework. This is possible due to our definition of a generalized trajectory. In addition, ASAP has shown the potential to learn across multiple tasks as well as to automatically reuse skills.
These are the necessary requirements for a truly general skill learning framework and can be applied to lifelong learning problems (Ammar et al., 2015; Thrun & Mitchell, 1995). An exciting extension of this work is to incorporate it into a deep reinforcement learning framework, where both the skills and the ASAP policy can be represented as deep networks.

## Acknowledgements

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP/2007-2013) / ERC Grant Agreement n. 306638.

## References

Akiyama, Hidehisa and Nakashima, Tomoharu. HELIOS base: An open source package for the RoboCup soccer 2D simulation. In RoboCup 2013: Robot World Cup XVII, pp. 528-535. Springer, 2014.

Ammar, Haitham Bou, Tutunov, Rasul, and Eaton, Eric. Safe policy search for lifelong reinforcement learning with sublinear regret. arXiv preprint arXiv:1505.05798, 2015.

Bacon, Pierre-Luc and Precup, Doina. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.

Bai, Aijun, Wu, Feng, and Chen, Xiaoping. Online planning for large MDPs with MAXQ decomposition. In AAMAS, 2012.

da Silva, B. C., Konidaris, G. D., and Barto, A. G. Learning parameterized skills. In ICML, 2012.

Eaton, Eric and Ruvolo, Paul L. ELLA: An efficient lifelong learning algorithm. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 507-515, 2013.

Fu, Justin, Levine, Sergey, and Abbeel, Pieter. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.

Hausknecht, Matthew and Stone, Peter. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143, 2015.

Hauskrecht, Milos, Meuleau, Nicolas, et al. Hierarchical solution of Markov decision processes using macro-actions. In UAI, pp. 220-229, 1998.

Konidaris, George and Barto, Andrew G. Skill discovery in continuous reinforcement learning domains using skill chaining. In NIPS, 2009.

Mankowitz, Daniel J, Mann, Timothy A, and Mannor, Shie. Time regularized interrupting options. In International Conference on Machine Learning, 2014.

Mann, Timothy A and Mannor, Shie. Scaling up approximate value iteration with options: Better policies with fewer iterations. In Proceedings of the 31st International Conference on Machine Learning, 2014.

Mann, Timothy Arthur, Mankowitz, Daniel J, and Mannor, Shie. Learning when to switch between skills in a high dimensional domain. In AAAI Workshop, 2015.

Masson, Warwick and Konidaris, George. Reinforcement learning with parameterized actions. arXiv preprint arXiv:1509.01644, 2015.

Peters, Jan and Schaal, Stefan. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 2219-2225. IEEE, 2006.

Peters, Jan and Schaal, Stefan. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21:682-691, 2008.

Precup, Doina and Sutton, Richard S. Multi-time models for temporally abstract planning. In Advances in Neural Information Processing Systems 10 (Proceedings of NIPS'97), 1997.

Precup, Doina, Sutton, Richard S, and Singh, Satinder. Theoretical results on reinforcement learning with temporally abstract options. In Machine Learning: ECML-98, pp. 382-393. Springer, 1998.
Silver, David and Ciosek, Kamil. Compositional planning using optimal option models. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, 2012.

Sutton, Richard S, Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 1999.

Sutton, Richard S, McAllester, David, Singh, Satinder, and Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057-1063, 2000.

Thrun, Sebastian and Mitchell, Tom M. Lifelong robot learning. Springer, 1995.