# What Can Learned Intrinsic Rewards Capture?

Zeyu Zheng*1, Junhyuk Oh*2, Matteo Hessel2, Zhongwen Xu2, Manuel Kroiss2, Hado van Hasselt2, David Silver2, Satinder Singh2

*Equal contribution. Work done during an internship at DeepMind. 1University of Michigan. 2DeepMind. Correspondence to: Zeyu Zheng, Junhyuk Oh.

Abstract

The objective of a reinforcement learning agent is to behave so as to maximise the sum of a suitable scalar function of state: the reward. These rewards are typically given and immutable. In this paper, we instead consider the proposition that the reward function itself can be a good locus of learned knowledge. To investigate this, we propose a scalable meta-gradient framework for learning useful intrinsic reward functions across multiple lifetimes of experience. Through several proof-of-concept experiments, we show that it is feasible to learn and capture knowledge about long-term exploration and exploitation into a reward function. Furthermore, we show that, unlike policy transfer methods that capture how the agent should behave, the learned reward functions can generalise to other kinds of agents and to changes in the dynamics of the environment by capturing what the agent should strive to do.

1. Introduction

Reinforcement learning (RL) agents can store knowledge in their policies, value functions, state representations, and models of the environment dynamics. These components can be the loci of knowledge in the sense that they are structures in which knowledge, either learned from experience by the agent's algorithm or given by the agent-designer, can be deposited and reused. The objective of the agent is defined by a reward function, and the goal is to learn to act so as to maximise cumulative rewards. In this paper we consider the proposition that the reward function itself is a good locus of knowledge. This is unusual (but not novel) in that most prior work treats the reward as given and immutable, at least as far as the learning algorithm is concerned. In fact, agent designers often do find it convenient to modify the reward function given to the agent to facilitate learning. It is therefore useful to distinguish between two kinds of reward functions (Singh et al., 2010): extrinsic rewards define the task and capture the designer's preferences over agent behaviour, whereas intrinsic rewards serve as helpful signals to improve the learning dynamics of the agent.

Most existing work on intrinsic rewards falls into two broad categories: task-dependent and task-independent. Both are typically designed by hand. Hand-designing task-dependent rewards can be fraught with difficulty, as even minor misalignment between the actual reward and the intended bias/goals can lead to unintended and sometimes catastrophic consequences (Clark & Amodei, 2016). Task-independent intrinsic rewards are also typically hand-designed, often based on an intuitive understanding of animal/human behaviour or on heuristics about desired exploratory behaviour. It can, however, be hard to match such task-independent intrinsic rewards to the specific learning dynamics induced by the interaction between agent and environment. In this paper, we are interested in the comparatively under-explored possibility of learned (not hand-designed) task-dependent intrinsic rewards.
Although there have been a few attempts to learn useful intrinsic rewards from experience (Singh et al., 2009; Zheng et al., 2018), how to capture complex knowledge, such as exploration across episodes, in a reward function remains an open question. We emphasise that it is not our objective to show that rewards are a better locus of learned knowledge than others; the best locus likely depends on the kind of knowledge that is most useful in a given task. In particular, knowledge captured in rewards provides guidance on what the agent should strive to do, while knowledge captured in policies provides guidance on how an agent should behave. Knowledge about "what", captured in rewards, is indirect and thus slower to make an impact on behaviour, because it takes effect through learning, whereas knowledge about "how" can directly have an immediate impact on behaviour. At the same time, because of its indirectness, the former can generalise better to changes in dynamics and to different learning agents, as we empirically show in this paper.

How should we measure the usefulness of a learned reward function? Ideally, we would like to measure the effect the learned reward function has on the learning dynamics. Of course, learning happens over multiple episodes; indeed, it happens over an entire lifetime. Therefore, we choose the lifetime return, the cumulative extrinsic reward obtained by the agent over its entire lifetime, as the main objective. To this end, we adopt the multi-lifetime setting of the Optimal Rewards Framework (Singh et al., 2009), in which an agent is initialised randomly at the start of each lifetime and then faces a stationary or non-stationary task drawn from some distribution. In this setting, the only knowledge that is transferred across lifetimes is the reward, not the policy. Specifically, the goal is to learn a single intrinsic reward function that, when used to adapt the agent's policy with a standard episodic RL algorithm, ends up optimising the cumulative extrinsic reward over the agent's lifetime. In previous work, good reward functions were found via exhaustive search, limiting the range of applicability. We develop a more scalable gradient-based method for learning intrinsic rewards by exploiting the fact that the interaction between the policy update and the reward function is differentiable (Zheng et al., 2018). Moreover, unlike the prior work, we parameterise the reward function by a recurrent neural network unrolled over the entire lifetime and train it to maximise the lifetime return, which is crucial for the reward function to capture long-term temporal dependencies (e.g., novelty of states across episodes). To handle long-term credit assignment that spans the lifetime, we use a lifetime value function that estimates the remaining lifetime return.

Our main contributions and findings are as follows: (1) Through carefully designed environments, we show that learned intrinsic reward functions can capture a rich form of knowledge, such as long-term exploration (e.g., exploring uncertain states) and exploitation (e.g., anticipating environment changes) across multiple episodes. To our knowledge, this is the first work that shows the feasibility of capturing such complex knowledge in learned reward functions.
(2) We show that the "what to do" knowledge captured by the reward functions can generalise to changing dynamics of the environment and to new learning agents, whereas policy transfer methods do not generalise well, which provides insight into the usefulness of rewards as a locus of knowledge.

2. Related Work

Hand-designed Rewards. There is a long history of work on designing rewards to accelerate learning in reinforcement learning. Reward shaping aims to design task-specific rewards that steer the agent towards known optimal behaviours, typically requiring domain knowledge. Both the benefits (Randløv & Alstrøm, 1998; Ng et al., 1999; Harutyunyan et al., 2015) and the difficulty (Clark & Amodei, 2016) of task-specific reward shaping have been studied. On the other hand, many intrinsic rewards have been proposed to encourage exploration, inspired by animal behaviours. Examples include prediction error (Schmidhuber, 1991a;b; Oudeyer et al., 2007; Gordon & Ahissar, 2011; Mirolli & Baldassarre, 2013; Pathak et al., 2017), surprise (Itti & Baldi, 2006), deviation from a default policy (Goyal et al., 2018), weight change (Linke et al., 2019), and state-visitation counts (Sutton, 1990; Poupart et al., 2006; Strehl & Littman, 2008; Bellemare et al., 2016; Ostrovski et al., 2017). Although these kinds of intrinsic rewards are not domain-specific, they are often not well aligned with the task that the agent is solving, and they ignore the effect on the agent's learning dynamics. In contrast, our work aims to learn intrinsic rewards from data that take into account the agent's learning dynamics, without requiring prior knowledge from a human.

Rewards Learned from Experience. There have been a few attempts to learn useful intrinsic rewards from data. Singh et al. (2009) introduced the Optimal Reward Framework, which aims to find a good reward function that allows agents to solve a distribution of tasks, using exhaustive search over reward functions. Because of the inefficiency of exhaustive search, their empirical study only showed simple intrinsic reward functions, such as a preference over certain objects. Although there has been follow-up work (Sorg et al., 2010; Guo et al., 2016) using gradient-based methods, it considers a non-parametric policy based on Monte-Carlo Tree Search. Our work is closely related to LIRPG (Zheng et al., 2018), which proposed a meta-gradient method to learn intrinsic rewards. However, LIRPG considers a single task in a single lifetime with a myopic episodic return objective, which is limited in that it does not allow exploration across episodes or generalisation to different agents. In contrast, our approach takes into account both the long-term effect of intrinsic rewards on the learning dynamics and the lifetime history of the agent. We show that this is crucial for capturing long-term knowledge, such as seeking novel states across episodes, which was not achieved in previous work. Finally, unlike AGILE (Bahdanau et al., 2019), which showed that a learned reward function can generalise to unseen instructions in instruction-following RL problems, our work shows a new and interesting kind of generalisation: to new agent-environment interfaces and algorithms.

Meta-learning for Exploration and Task Adaptation. Meta-learning (Schmidhuber et al., 1996; Thrun & Pratt, 1998) has recently received considerable attention in RL. Recent advances include few-shot adaptation (Finn et al., 2017a), few-shot imitation (Finn et al., 2017b; Duan et al., 2017), model adaptation (Clavera et al., 2019), and inverse RL (Xu et al., 2019).
In particular, our work is related to prior work on meta-learning good exploration strategies (Wang et al., 2016; Duan et al., 2016; Stadie et al., 2018; Xu et al., 2018a) in that both perform temporal credit assignment across episode boundaries by maximising rewards accumulated beyond an episode. Unlike the prior work, which aims to directly transfer an exploratory policy, our framework indirectly drives exploration via a reward function which can be reused by different learning agents.

Figure 1. Illustration of the proposed intrinsic reward learning framework. The intrinsic reward $r_\eta$ is used to update the agent's parameters $\theta_i$ throughout its lifetime, which consists of many episodes. The goal is to find, across many lifetimes, the optimal intrinsic reward parameters $\eta$ that maximise the lifetime return ($G^{\text{life}}$) given any randomly initialised agent and possibly non-stationary tasks drawn from some distribution $p(\mathcal{T})$.

Meta-learning Update Rules. There have been a few studies that directly meta-learn how to update the agent's parameters via meta-parameters, including discount factors and returns (Xu et al., 2018b), auxiliary tasks (Schlegel et al., 2018; Veeriah et al., 2019), unsupervised learning rules (Metz et al., 2019), and RL objectives (Bechtle et al., 2019; Kirsch et al., 2019). Our work also belongs to this category in that our meta-parameters define the reward function used in the agent's update. In particular, our multi-lifetime formulation is similar to ML3 (Bechtle et al., 2019) and MetaGenRL (Kirsch et al., 2019). However, ML3 cannot generalise to different agent-environment interfaces, whereas intrinsic rewards can, as shown in Section 6. In addition, we propose to use the lifetime return as opposed to the myopic episodic objective of ML3 and MetaGenRL, which is crucial for cross-episode exploration.

Cognitive Studies on Exploration-Exploitation. Several cognitive science studies on the exploration-exploitation dilemma (Cohen et al., 2007; Wilson et al., 2014) have shown that humans use both a random exploration strategy (Thompson, 1933; Watkins, 1989) and an information-seeking strategy (Gittins, 1974; 1979) when facing uncertainty. Computationally, the former can be easily implemented, whereas the latter usually requires carefully handcrafted methods to guide the agent's behaviour. In this work, we hypothesise and empirically verify that an information-seeking intrinsic reward function can naturally emerge if it is useful for solving the tasks. The condition of being useful resembles a recent study (Dubey & Griffiths, 2019), which posited that a rational agent should explore in a way that maximises the usefulness of its knowledge.

3. The Optimal Reward Problem

We first introduce some terminology.

Agent: A learning system interacting with an environment. On each step $t$ the agent selects an action $a_t$ and receives from the environment an observation $s_{t+1}$ and an extrinsic reward $r_{t+1}$ defined by a task $\mathcal{T}$. The agent chooses actions according to a policy $\pi_\theta(a_t \mid s_t)$ parameterised by $\theta$.

Episode: A finite sequence of agent-environment interactions until the end of the episode, as defined by the task. The episode return is $G^{\text{ep}} = \sum_{t=0}^{T^{\text{ep}}-1} \gamma^t r_{t+1}$, where $\gamma$ is a discount factor and the random variable $T^{\text{ep}}$ gives the number of steps until the end of the episode.
Lifetime: A finite sequence of agent-environment interactions until the end of training, as defined by the agent-designer, which can consist of multiple episodes. The lifetime return is $G^{\text{life}} = \sum_{t=0}^{T-1} \gamma^t r_{t+1}$, where $\gamma$ is a discount factor and $T$ is the number of steps in the lifetime.

Intrinsic reward: A reward function $r_\eta(\tau_{t+1})$ parameterised by $\eta$, where $\tau_t = (s_0, a_0, r_1, d_1, s_1, \ldots, r_t, d_t, s_t)$ is a lifetime history with (binary) episode terminations $d_i$.

The Optimal Reward Problem (Singh et al., 2010), illustrated in Figure 1, aims to learn the parameters of the intrinsic reward such that the resulting rewards produce a learning dynamic for an RL agent that maximises the lifetime (extrinsic) return on tasks drawn from some distribution. Formally, the objective function is defined as:

$$J(\eta) = \mathbb{E}_{\theta_0 \sim \Theta,\, \mathcal{T} \sim p(\mathcal{T})}\left[\mathbb{E}_{\tau \sim p_\eta(\tau \mid \theta_0)}\left[G^{\text{life}}\right]\right], \quad (1)$$

where $\Theta$ and $p(\mathcal{T})$ are an initial policy distribution and a distribution over possibly non-stationary tasks, respectively. The likelihood of a lifetime history $\tau$ is $p_\eta(\tau \mid \theta_0) = p(s_0) \prod_{t=0}^{T-1} \pi_{\theta_t}(a_t \mid s_t)\, p(d_{t+1}, r_{t+1}, s_{t+1} \mid s_t, a_t)$, where $\theta_t = f(\theta_{t-1}, \eta)$ is the policy parameter as updated by an update function $f$, which is policy gradient in this paper. (Footnote: for brevity, we assume throughout the paper that the policy parameter is updated after each time-step; in practice the parameter can be updated less frequently.) Note that the optimisation of $\eta$ spans multiple lifetimes, each of which can span multiple episodes. Using the lifetime return $G^{\text{life}}$ as the objective instead of the conventional episodic return $G^{\text{ep}}$ allows exploration across multiple episodes, as long as the lifetime return is maximised in the long run. In particular, when the lifetime is defined as a fixed number of episodes, we find that the lifetime return objective is sometimes more beneficial than the episodic return objective, even for the episodic return performance measure. However, different objectives (e.g., final episode return) can be considered, depending on the definition of what a good reward function is.

4. Meta-Learning Intrinsic Reward

We propose a meta-gradient approach (Xu et al., 2018b; Zheng et al., 2018) to solve the optimal reward problem. At a high level, we sample a new task $\mathcal{T}$ and a new random policy parameter $\theta$ at each lifetime iteration. We then simulate an agent's lifetime by updating the parameter $\theta$ using an intrinsic reward function $r_\eta$ (Section 4.1) with policy gradient (Section 4.2). Concurrently, we compute the meta-gradient by taking into account the effect of the intrinsic rewards on the policy parameters, in order to update the intrinsic reward function with a lifetime value function (Section 4.3). Algorithm 1 gives an overview of our algorithm; the following sections describe the details.

Algorithm 1 Learning intrinsic rewards
  Input: p(T): task distribution
  Input: Θ: randomly-initialised policy distribution
  Initialise intrinsic reward η and lifetime value φ
  repeat
    Initialise task T ∼ p(T) and policy θ ∼ Θ
    while lifetime not ended do
      θ_0 ← θ
      for k = 1, 2, ..., N do
        Generate a trajectory using π_{θ_{k-1}}
        Update policy θ_k ← θ_{k-1} + α ∇_{θ_{k-1}} J_η(θ_{k-1}) using intrinsic rewards r_η (Eq. 3)
      end for
      Update intrinsic reward function η using Eq. 4
      Update lifetime value function φ using Eq. 6
      θ ← θ_N
    end while
  until η converges
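To make the structure of Algorithm 1 concrete, below is a minimal Python sketch of the outer training loop. It is a sketch only, under the assumption that the environment- and agent-specific pieces (sample_task, init_policy, collect_trajectory, and the three update functions) are supplied by the caller; these names are illustrative and are not part of the paper.

```python
# Minimal sketch of Algorithm 1's outer loop (not the authors' implementation).
# Every callable passed in is an assumed, illustrative stand-in for
# environment/agent-specific code.
def train_intrinsic_reward(eta, phi, *, sample_task, init_policy,
                           collect_trajectory, inner_policy_update,
                           intrinsic_reward_update, lifetime_value_update,
                           num_lifetimes, meta_steps_per_lifetime,
                           num_inner_updates):
    """Learn intrinsic-reward parameters eta and lifetime-value parameters phi
    across many lifetimes, each starting from a freshly initialised policy."""
    for _ in range(num_lifetimes):
        task = sample_task()            # T ~ p(T)
        theta = init_policy()           # theta ~ Theta (random initialisation)
        lifetime_history = []           # tau: the agent's lifetime history so far
        for _ in range(meta_steps_per_lifetime):   # "while lifetime not ended"
            # Inner loop: N policy-gradient steps driven by intrinsic rewards (Eq. 3).
            recent = []
            for _ in range(num_inner_updates):
                traj = collect_trajectory(theta, task)
                theta = inner_policy_update(theta, eta, traj)
                recent.append(traj)
            lifetime_history.extend(recent)
            # Outer updates: truncated meta-gradient step on eta (Eq. 4) and an
            # n-step TD update of the lifetime value function phi (Eq. 6).
            eta = intrinsic_reward_update(eta, phi, theta, recent, lifetime_history)
            phi = lifetime_value_update(phi, lifetime_history)
    return eta, phi
```

As in Algorithm 1, the meta-gradient is truncated after the N inner updates, while the lifetime value function supplies the long-horizon signal that stands in for the full lifetime return.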
4.1. Architectures

The intrinsic reward function is a recurrent neural network (RNN) parameterised by $\eta$, which produces a scalar reward on arriving in state $s_t$ by taking into account the history of the agent's lifetime $\tau_t = (s_0, a_0, r_1, d_1, s_1, \ldots, r_t, d_t, s_t)$. We claim that giving the lifetime history across episodes as input is crucial for balancing exploration and exploitation, for instance by capturing how frequently a certain state has been visited in order to determine an exploration bonus reward. The lifetime value function is a separate recurrent neural network parameterised by $\phi$, which takes the same inputs as the intrinsic reward function and produces a scalar estimate of the expected future return within the lifetime.

4.2. Policy Update

Each agent interacts with an environment and a task sampled from a distribution $\mathcal{T} \sim p(\mathcal{T})$. However, instead of directly maximising the extrinsic rewards defined by the task, the agent maximises the intrinsic rewards ($r_\eta$) using policy gradient (Williams, 1992; Sutton et al., 2000):

$$J_\eta(\theta) = \mathbb{E}_\theta\left[\sum_{t=0}^{T^{\text{ep}}-1} \gamma^t r_\eta(\tau_{t+1})\right], \quad (2)$$
$$\nabla_\theta J_\eta(\theta) = \mathbb{E}_\theta\left[G^{\text{ep}}_{\eta,t}\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right], \quad (3)$$

where $r_\eta(\tau_{t+1})$ is the intrinsic reward at time $t$, and $G^{\text{ep}}_{\eta,t} = \sum_{k=t}^{T^{\text{ep}}-1} \gamma^{k-t} r_\eta(\tau_{k+1})$ is the return of the intrinsic rewards accumulated over an episode with discount factor $\gamma$.

4.3. Intrinsic Reward and Lifetime Value Update

To update the intrinsic reward parameters $\eta$, we directly take a meta-gradient ascent step using the overall objective (Equation 1). Specifically, the gradient is (see the supplementary material for the derivation)

$$\nabla_\eta J(\eta) = \mathbb{E}_{\theta_t, \mathcal{T}}\left[G^{\text{life}}_t\, \nabla_{\theta_t} \log \pi_{\theta_t}(a_t \mid s_t)\, \nabla_\eta \theta_t\right]. \quad (4)$$

The chain rule is used to obtain the meta-gradient term $\nabla_\eta \theta_t$, as in previous work (Zheng et al., 2018). The computation graph of this procedure is illustrated in Figure 1. Computing the true meta-gradient in Equation 4 requires backpropagation through the entire lifetime, which is infeasible as each lifetime can involve thousands of policy updates. To partially address this issue, we truncate the meta-gradient after $N$ policy updates but approximate the lifetime return, $G^{\text{life},\phi}_t \approx G^{\text{life}}_t$, using a lifetime value function $V_\phi(\tau)$ parameterised by $\phi$, which is learned by temporal-difference learning from $n$-step trajectories:

$$G^{\text{life},\phi}_t = \sum_{k=0}^{n-1} \gamma^k r_{t+k+1} + \gamma^n V_\phi(\tau_{t+n}), \quad (5)$$
$$\phi \leftarrow \phi + \alpha\, \big(G^{\text{life},\phi}_t - V_\phi(\tau_t)\big)\, \nabla_\phi V_\phi(\tau_t), \quad (6)$$

where $\alpha$ is a learning rate. In our empirical work, we found that the lifetime value estimates were crucial to allow the intrinsic reward to perform long-term credit assignment across episodes (Section 5.6).

5. Empirical Investigations

We present the results from our empirical investigations in two sections. In this section, the experiments and domains are designed to answer the following research questions:

- What kind of knowledge is learned by the intrinsic reward?
- How does the distribution of tasks influence the intrinsic reward?
- What is the benefit of the lifetime return objective over the episode return?
- When is it important to provide the lifetime history as input to the intrinsic reward?

Figure 2. Illustration of domains. (a) Empty Rooms: the agent needs to find the goal location, which gives a positive reward, but the goal is not visible to the agent. (b) Each object (A, B, and C) gives rewards. (c) Key-Box: the agent is required to first collect the key and then visit one of the boxes (A, B, and C) to receive the corresponding reward. All objects are placed in random locations before each episode.
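Before turning to the experiments, the following minimal sketch (in Python, using JAX autodiff) illustrates the updates of Sections 4.2 and 4.3 (Eqs. 2-6) on a toy tabular agent: one policy-gradient step on intrinsic returns, a meta-gradient step on η obtained by differentiating through that update, and an n-step bootstrapped target for the lifetime value. The tabular softmax policy, per-state reward table, and linear value function are illustrative stand-ins for the RNNs of Section 4.1, and all names here are assumptions rather than the authors' code.

```python
import jax
import jax.numpy as jnp


def discounted_returns(rewards, gamma):
    """G_t = sum_k gamma^k * rewards[t+k], computed by a backward scan."""
    def step(carry, r):
        g = r + gamma * carry
        return g, g
    _, returns = jax.lax.scan(step, jnp.zeros(()), rewards[::-1])
    return returns[::-1]


def episode_logprobs(theta, states, actions):
    """log pi_theta(a_t | s_t) for a tabular softmax policy; theta is [S, A]."""
    logp = jax.nn.log_softmax(theta[states], axis=-1)           # [T, A]
    return logp[jnp.arange(actions.shape[0]), actions]          # [T]


def inner_update(theta, eta, states, actions, gamma=0.9, alpha=0.1):
    """One policy-gradient step on intrinsic returns (Eqs. 2-3). eta is a
    per-state intrinsic-reward table, so the updated theta is a differentiable
    function of eta -- the fact the meta-gradient exploits."""
    g_eta = discounted_returns(eta[states], gamma)              # intrinsic returns

    def surrogate(th):                                          # REINFORCE surrogate
        return jnp.sum(g_eta * episode_logprobs(th, states, actions))

    return theta + alpha * jax.grad(surrogate)(theta)


def meta_loss(eta, theta, inner_traj, outer_traj, g_life):
    """Negative surrogate whose gradient w.r.t. eta follows Eq. 4: the
    (bootstrapped) lifetime return weights the log-probs of the *updated*
    policy, and gradients reach eta through the inner update (chain rule)."""
    theta_new = inner_update(theta, eta, *inner_traj)
    logp = episode_logprobs(theta_new, *outer_traj)
    return -jnp.sum(g_life * logp)


def lifetime_value_target(phi, feats, rewards, gamma=0.99, n=5):
    """n-step bootstrapped lifetime return for t = 0 (Eq. 5), with a linear
    lifetime value V_phi(tau_t) = feats[t] . phi standing in for the value RNN."""
    boot = gamma ** n * jnp.dot(feats[n], phi)
    return sum(gamma ** k * rewards[k] for k in range(n)) + boot


# Toy usage: 4 states, 2 actions, one trajectory before and one after the update.
theta = jnp.zeros((4, 2))
eta = jnp.zeros(4)
inner_traj = (jnp.array([0, 1, 2]), jnp.array([1, 0, 1]))       # (states, actions)
outer_traj = (jnp.array([2, 3, 0]), jnp.array([0, 1, 1]))
g_life = jnp.ones(3)           # placeholder lifetime-return estimates (Eq. 5)
eta = eta - 0.01 * jax.grad(meta_loss)(eta, theta, inner_traj, outer_traj, g_life)

phi = jnp.zeros(8)
feats = jnp.ones((6, 8))       # illustrative features of the lifetime history
rewards = jnp.array([0., 0., 1., 0., 0., 0.])
target = lifetime_value_target(phi, feats, rewards)  # would drive the TD step of Eq. 6
```

In the paper's setting the outer trajectory is generated by the updated policy within the same lifetime, and g_life is replaced by the bootstrapped estimate of Eq. 5 so that credit assignment can span episode boundaries without backpropagating through the whole lifetime.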
5.1. Experimental Setup

We investigate these research questions in the grid-world domains illustrated in Figure 2. For each domain, we trained an intrinsic reward function across many lifetimes and evaluated it by training a new agent using the learned reward. We implemented the following baselines:

- Extrinsic-EP: a policy is trained with extrinsic rewards to maximise the episode return.
- Extrinsic-LIFE: a policy is trained with extrinsic rewards to maximise the lifetime return.
- Count-based (Strehl & Littman, 2008): a policy is trained with extrinsic rewards plus count-based exploration bonus rewards.
- ICM (Pathak et al., 2017): a policy is trained with extrinsic rewards plus curiosity rewards based on an inverse dynamics model.

Note that these baselines, unlike the learned intrinsic rewards, do not transfer any knowledge across lifetimes. Throughout Sections 5.2-5.5, we focus on analysing what kind of knowledge is learned by the intrinsic reward depending on the nature of the environment. We discuss the benefit of using the lifetime return and of considering the lifetime history when learning the intrinsic reward in Section 5.6. The details of the implementation and hyperparameters are described in the supplementary material.

5.2. Exploring Uncertain States

We designed Empty Rooms (Figure 2a) to see whether the intrinsic reward can learn to encourage exploration of uncertain states, like novelty-based exploration methods. The goal is to visit an invisible goal location, which is fixed within each lifetime but varies across lifetimes. An episode terminates when the goal is reached. Each lifetime consists of 200 episodes. From the agent's perspective, its policy should visit the locations suggested by the intrinsic reward. From the intrinsic reward's perspective, it should encourage the agent to go to unvisited locations to locate the goal, and then to exploit that knowledge for the rest of the lifetime.

Figure 3 shows that the learned intrinsic reward was more efficient than extrinsic rewards and count-based exploration when training a new agent. We observed that the intrinsic reward learned two interesting strategies, visualised in Figure 4. While the goal has not been found, it encourages exploration of unvisited locations, because it has learned the knowledge that there exists a rewarding goal location somewhere. Once the goal is found, the intrinsic reward encourages the agent to exploit it without further exploration, because it has learned that there is only one goal. This result shows that curiosity about uncertain states can naturally emerge when various states can be rewarding in a domain, even when the rewarding states are fixed within an agent's lifetime.

5.3. Exploring Uncertain Objects

In the previous domain, we considered uncertainty about where the reward (or goal location) is. We now consider uncertainty about the value of different objects. In the Random ABC environment (see Figure 2b), for each lifetime the rewards for objects A, B, and C are uniformly sampled from [-1, 1], [-0.5, 0], and [0, 0.5] respectively, but are held fixed within the lifetime. A good intrinsic reward should learn that: 1) B should be avoided; 2) A and C have uncertain rewards and hence require systematic exploration (first go to one and then the other); and 3) once it is determined which of A or C is better, that knowledge should be exploited by encouraging the agent to repeatedly go to that object for the rest of the lifetime.
Figure 3 shows that the agent learned a near-optimal explore-then-exploit strategy with the learned intrinsic reward. Note that the agent cannot pass information about the object rewards across episodes, as usual in reinforcement learning. The intrinsic reward, however, can propagate such information across episodes and help the agent explore or exploit appropriately. We visualise the learned intrinsic reward for different action sequences in Figure 5. The intrinsic rewards encourage the agent to explore towards A and C in the first few episodes. Once A and C have been explored, the agent exploits the most rewarding object. Throughout training, the agent is discouraged from visiting B through negative intrinsic rewards. These results show that avoidance and curiosity about uncertain objects can potentially emerge when the environment has objects with varying or fixed rewards.

Figure 3. Evaluation of different reward functions averaged over 30 seeds. The learning curves show agents trained with our intrinsic reward (blue), with the extrinsic reward using the episodic return objective (orange) or the lifetime return objective (brown), and with a count-based exploration reward (green). The dashed line corresponds to a hand-designed near-optimal exploration strategy.

Figure 4. Visualisation of the first 3000 steps of an agent trained with different reward functions in Empty Rooms. (a) The blue and yellow squares represent the agent and the hidden goal, respectively. (b) The learned reward encourages the agent to visit many locations if the goal has not been found (top); however, when the goal is found early, the intrinsic reward makes the agent exploit it without further exploration (bottom). (c-d) Both the count-based and ICM rewards tend to encourage exploration (top) but hinder exploitation once the goal is found (bottom).

5.4. Exploiting Invariant Causal Relationship

To see how the intrinsic reward deals with causal relationships between objects, we designed Key-Box, which is similar to Random ABC except that there is a key in the room (see Figure 2c). The agent needs to collect the key first in order to open one of the boxes (A, B, and C) and receive the corresponding reward. The rewards for the objects are sampled from the same distributions as in Random ABC. The key itself gives a neutral reward of 0. Moreover, the locations of the agent, the key, and the boxes are randomly sampled for each episode. As a result, the state space contains more than 3 billion distinct states and is thus infeasible to enumerate fully. Figure 3 shows that the learned intrinsic reward leads to near-optimal exploration. The agent trained with extrinsic rewards did not learn to open any box.

Figure 5. Visualisation of the learned intrinsic reward in Random ABC, where the extrinsic rewards for A, B, and C are 0.2, -0.5, and 0.1 respectively. Each panel shows the sum of intrinsic rewards for a trajectory towards each object (A, B, and C). In the first episode, the intrinsic reward encourages the agent to explore A. In the second episode, the intrinsic reward encourages exploring C if A has been visited (top), or vice versa (bottom).
In episode 3, after both A and C have been explored, the intrinsic reward encourages revisiting A (both top and bottom).

The intrinsic reward captures that the key is necessary to open any box, which is true across many lifetimes of training. This demonstrates that the intrinsic reward can capture causal relationships between objects when the domain has this kind of invariant dynamics.

5.5. Dealing with Non-stationarity

We investigated how the intrinsic reward handles non-stationary tasks within a lifetime in our Non-stationary ABC environment. The rewards are as follows: the reward for A is either 1 or -1, the reward for B is -0.5, and the reward for C is the negative of the reward for A. The rewards of A and C are swapped every 250 episodes. Each lifetime lasts 1000 episodes. Figure 3 shows that the agent with the learned intrinsic reward quickly recovered its performance when the task changed, whereas the baselines took more time to recover. Figure 6 shows how the learned intrinsic reward encourages the learning agent to react to the changing rewards. Interestingly, the intrinsic reward has learned to prepare for the change by giving negative rewards to the agent's exploitation policy a few episodes before the task changes. In other words, the intrinsic reward reduces the agent's commitment to the currently best rewarding object, thereby increasing the entropy of the current policy in anticipation of the change, eventually making it easier to adapt quickly. This shows that the intrinsic reward can capture the (regularly) repeated non-stationarity across many lifetimes and make the agent intrinsically motivated not to commit too firmly to a policy, in anticipation of changes in the environment.

Figure 6. Visualisation of the agent's intrinsic and extrinsic rewards (left) and the entropy of its policy (right) on Non-stationary ABC. The task changes at the 500th episode (dashed vertical line). The intrinsic reward gives a negative reward even before the task changes (green rectangle) and makes the policy less deterministic (entropy increases). As a result, the agent quickly adapts to the change.

Figure 7. Evaluation of different intrinsic reward architectures and objectives. For LSTM, the reward network has an LSTM taking the lifetime history as input. For FF, a feed-forward reward network takes only the current time-step. Lifetime and Episode denote the lifetime and episodic return objectives, respectively.

5.6. Ablation Study

To study the relative benefits of the proposed technical ideas, we conducted an ablation study 1) by replacing the long-term lifetime return objective ($G^{\text{life}}$) with the episodic return ($G^{\text{ep}}$) and 2) by restricting the input of the reward network to the current time-step instead of the entire lifetime history. Figure 7 shows that the lifetime history was crucial for achieving good performance. This is reasonable because all domains require some past information (e.g., object rewards in Random ABC, visited locations in Empty Rooms) to provide useful exploration strategies. It is also shown that the lifetime return objective was beneficial on Random ABC, Non-stationary ABC, and Key-Box. These domains require exploration across multiple episodes in order to find the optimal policy.
For example, collecting an uncertain object (e.g., object A in Random ABC) is necessary even if the episode terminates with a negative reward. The episodic value function would directly penalise such an under-performing exploratory episode when computing the meta-gradient, which prevents the intrinsic reward from learning to encourage exploration across episodes. On the other hand, such behaviour can be encouraged by the lifetime value function, as long as it provides useful information for maximising the lifetime return in the long term.

6. Generalisation via Rewards

As noted above, rewards capture knowledge about what an agent's goals should be rather than how it should behave. At the same time, transferring the latter in the form of policies is also feasible in the domains presented above. Here we confirm this by implementing and presenting results for the following two meta-learning methods:

- MAML (Finn et al., 2017a): a policy meta-learned from a distribution of tasks such that it can adapt quickly to a given task after a few parameter updates.
- RL2 (Duan et al., 2016; Wang et al., 2016): an RNN policy unrolled over the entire lifetime to maximise the lifetime return, pre-trained on a distribution of tasks.

Although all the methods we implemented, including ours, are designed to learn useful knowledge from a distribution of tasks, they have different objectives. Specifically, the objective of our method is to learn knowledge that is useful for training randomly-initialised policies by capturing "what to do", whereas the goal of policy transfer methods is to directly transfer a useful policy for fast task adaptation by transferring "how to do" knowledge. In fact, it can be more efficient to transfer and reuse pre-trained policies rather than restarting from a random policy and learning with the learned rewards on a new task. Figure 8 indeed shows that RL2 performs better than our intrinsic reward approach. It is also shown that MAML and RL2 achieve good performance from the beginning, as they have already learned how to navigate the grid worlds and how to achieve the goals of the tasks. In our method, on the other hand, the agent starts from a random policy and relies on the learned intrinsic reward, which indirectly tells it what to do. Nevertheless, our method outperforms MAML and achieves asymptotic performance comparable to RL2.

Figure 8. Comparison to policy transfer methods.

Figure 9. Generalisation to new agent-environment interfaces in Random ABC. (a) Action space: permuted agents have different action semantics; extended agents have additional actions. (b) Algorithm: AC-Intrinsic is the original actor-critic agent trained with the intrinsic reward; Q-Intrinsic is a Q-learning agent trained with the intrinsic reward learned from actor-critic agents; Q-Extrinsic is a Q-learning agent trained with the extrinsic reward. (c) Comparison to baselines: the performance of the policy transfer baselines with permuted actions during evaluation.
6.1. Generalise to New Agent-Environment Interfaces

In fact, our method can be interpreted as an instance of RL2 with a particular decomposition of parameters ($\theta$ and $\eta$), which uses policy gradient as a recurrent update (see Figure 1). While this modular structure may not be more beneficial than RL2 when evaluated with the same agent-environment interface, such a decomposition provides clear semantics for each module: the policy ($\theta$) captures "how to do" while the intrinsic reward ($\eta$) captures "what to do", and this enables interesting kinds of generalisation, as we show below. Specifically, we show that the "what" knowledge captured by the intrinsic reward can be reused by many different learning agents, as follows.

Generalise to Unseen Action Spaces. We first evaluated the learned intrinsic reward on new action spaces. Specifically, the intrinsic reward was used to train new agents with either 1) permuted actions, where the semantics of left/right and up/down are reversed, or 2) extended actions, with 4 additional actions that move diagonally. Figure 9a shows that the intrinsic reward provided useful rewards to new agents with different actions, even though it was not trained with those actions. This is feasible because the intrinsic reward assigns rewards to the agent's state changes rather than to its actions. The intrinsic reward captures "what to do", which makes it feasible to generalise to new actions, as long as the goal remains the same. On the other hand, it is unclear how to generalise RL2 and MAML in this way.

Generalise to Unseen Learning Algorithms. We further investigated how general the learned intrinsic reward is by evaluating it on agents with different learning algorithms. Specifically, after training the intrinsic reward with actor-critic agents, we evaluated it by training new agents through Q-learning while using the learned intrinsic reward, denoted Q-Intrinsic in Figure 9b. Interestingly, it turns out that the learned intrinsic reward is general enough to be useful for Q-learning agents, even though it was trained for actor-critic agents. Again, it is unclear how to generalise RL2 and MAML in this way.

Comparison to Policy Transfer. Although it is impossible to apply the learned policy from RL2 and MAML when we extend the action space or change the learning algorithm, we can do so when we only permute the actions. As shown in Figure 9c, both RL2 and MAML generalise poorly when the action space is permuted in Random ABC, because the transferred policies are highly biased towards the original action space. Again, this result highlights the difference between the "what to do" knowledge captured by our approach and the "how to do" knowledge captured by policies.

7. Conclusion

We revisited the optimal reward problem (Singh et al., 2009) and proposed a more scalable gradient-based method for learning intrinsic rewards across lifetimes. Through several proof-of-concept experiments, we showed that the learned non-stationary intrinsic rewards can capture regularities within a distribution of environments or, over time, within a non-stationary environment. As a result, they were capable of encouraging both exploratory and exploitative behaviour across multiple episodes. In addition, some task-independent notions of intrinsic motivation, such as curiosity, emerged when they were effective for the distribution of tasks across lifetimes that the agent was trained on.
We also showed that the learned intrinsic rewards can generalise to different agent-environment interfaces, such as different action spaces and different learning algorithms, whereas policy transfer methods fail to generalise to such changes. This highlights the difference between the "what" kind of knowledge captured by rewards and the "how" kind of knowledge captured by policies. The flexibility and range of knowledge captured by intrinsic rewards in our proof-of-concept experiments encourage further work towards combining different loci of knowledge to achieve greater practical benefits.

Acknowledgement

We thank Joseph Modayil for his helpful feedback on the manuscript.

References

Bahdanau, D., Hill, F., Leike, J., Hughes, E., Hosseini, A., Kohli, P., and Grefenstette, E. Learning to understand goal specifications by modelling reward. In International Conference on Learning Representations, 2019.

Bechtle, S., Molchanov, A., Chebotar, Y., Grefenstette, E., Righetti, L., Sukhatme, G., and Meier, F. Meta-learning via learned loss. arXiv preprint arXiv:1906.05374, 2019.

Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471-1479, 2016.

Clark, J. and Amodei, D. Faulty reward functions in the wild. CoRR, 2016. URL https://blog.openai.com/.

Clavera, I., Nagabandi, A., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., and Finn, C. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyztsoC5Y7.

Cohen, J. D., McClure, S. M., and Yu, A. J. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481):933-942, 2007.

Duan, Y., Schulman, J., Chen, X., Bartlett, P. L., Sutskever, I., and Abbeel, P. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.

Duan, Y., Andrychowicz, M., Stadie, B., Ho, J., Schneider, J., Sutskever, I., Abbeel, P., and Zaremba, W. One-shot imitation learning. In Advances in Neural Information Processing Systems, pp. 1087-1098, 2017.

Dubey, R. and Griffiths, T. L. Reconciling novelty and complexity through a rational analysis of curiosity. Psychological Review, 2019.

Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126-1135, 2017a.

Finn, C., Yu, T., Zhang, T., Abbeel, P., and Levine, S. One-shot visual imitation learning via meta-learning. In Conference on Robot Learning, pp. 357-368, 2017b.

Gittins, J. A dynamic allocation index for the sequential design of experiments. Progress in Statistics, pp. 241-266, 1974.

Gittins, J. C. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society: Series B (Methodological), 41(2):148-164, 1979.

Gordon, G. and Ahissar, E. Reinforcement active learning hierarchical loops. In The 2011 International Joint Conference on Neural Networks, pp. 3008-3015. IEEE, 2011.

Goyal, A., Islam, R., Strouse, D., Ahmed, Z., Larochelle, H., Botvinick, M., Bengio, Y., and Levine, S. InfoBot: Transfer and exploration via the information bottleneck. In International Conference on Learning Representations, 2018.
Guo, X., Singh, S., Lewis, R., and Lee, H. Deep learning for reward design to improve Monte Carlo tree search in Atari games. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 1519-1525. AAAI Press, 2016.

Harutyunyan, A., Devlin, S., Vrancx, P., and Nowé, A. Expressing arbitrary reward functions as potential-based advice. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 2652-2658. AAAI Press, 2015.

Itti, L. and Baldi, P. F. Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems, pp. 547-554, 2006.

Kirsch, L., van Steenkiste, S., and Schmidhuber, J. Improving generalization in meta reinforcement learning using learned objectives. In International Conference on Learning Representations, 2019.

Linke, C., Ady, N. M., White, M., Degris, T., and White, A. Adapting behaviour via intrinsic reward: A survey and empirical study. arXiv preprint arXiv:1906.07865, 2019.

Metz, L., Maheswaranathan, N., Cheung, B., and Sohl-Dickstein, J. Meta-learning update rules for unsupervised representation learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HkNDsiC9KQ.

Mirolli, M. and Baldassarre, G. Functions and mechanisms of intrinsic motivations. In Intrinsically Motivated Learning in Natural and Artificial Systems, pp. 49-72. Springer, 2013.

Ng, A. Y., Harada, D., and Russell, S. J. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 278-287. Morgan Kaufmann Publishers Inc., 1999.

Ostrovski, G., Bellemare, M. G., van den Oord, A., and Munos, R. Count-based exploration with neural density models. In Proceedings of the 34th International Conference on Machine Learning, pp. 2721-2730, 2017.

Oudeyer, P.-Y., Kaplan, F., and Hafner, V. V. Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11(2):265-286, 2007.

Pathak, D., Agrawal, P., Efros, A. A., and Darrell, T. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the 34th International Conference on Machine Learning, pp. 2778-2787, 2017.

Poupart, P., Vlassis, N., Hoey, J., and Regan, K. An analytic solution to discrete Bayesian reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pp. 697-704. ACM, 2006.

Randløv, J. and Alstrøm, P. Learning to drive a bicycle using reinforcement learning and shaping. In Proceedings of the Fifteenth International Conference on Machine Learning, pp. 463-471. Morgan Kaufmann Publishers Inc., 1998.

Schlegel, M., Patterson, A., White, A., and White, M. Discovery of predictive representations with a network of general value functions, 2018. URL https://openreview.net/forum?id=ryZElGZ0Z.

Schmidhuber, J. Curious model-building control systems. In Proceedings of the International Joint Conference on Neural Networks, pp. 1458-1463, 1991a.

Schmidhuber, J. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proceedings of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pp. 222-227, 1991b.

Schmidhuber, J., Zhao, J., and Wiering, M. Simple principles of metalearning. Technical report, IDSIA, 69:1-23, 1996.

Singh, S., Lewis, R. L., and Barto, A. G. Where do rewards come from? In Proceedings of the Annual Conference of the Cognitive Science Society, pp. 2601-2606. Cognitive Science Society, 2009.
Singh, S., Lewis, R. L., Barto, A. G., and Sorg, J. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70-82, 2010.

Sorg, J., Lewis, R. L., and Singh, S. Reward design via online gradient ascent. In Advances in Neural Information Processing Systems, pp. 2190-2198, 2010.

Stadie, B., Yang, G., Houthooft, R., Chen, P., Duan, Y., Wu, Y., Abbeel, P., and Sutskever, I. The importance of sampling in meta-reinforcement learning. In Advances in Neural Information Processing Systems, pp. 9280-9290, 2018.

Strehl, A. L. and Littman, M. L. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.

Sutton, R. S. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pp. 216-224. Morgan Kaufmann, 1990.

Sutton, R. S., McAllester, D. A., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057-1063, 2000.

Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.

Thrun, S. and Pratt, L. Learning to learn: Introduction and overview. In Learning to Learn, pp. 3-17. Springer, 1998.

Veeriah, V., Hessel, M., Xu, Z., Rajendran, J., Lewis, R. L., Oh, J., van Hasselt, H. P., Silver, D., and Singh, S. Discovery of useful questions as auxiliary tasks. In Advances in Neural Information Processing Systems, pp. 9306-9317, 2019.

Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. M. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.

Watkins, C. J. C. H. Learning from delayed rewards. PhD thesis, University of Cambridge, 1989.

Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Wilson, R. C., Geana, A., White, J. M., Ludvig, E. A., and Cohen, J. D. Humans use directed and random exploration to solve the explore-exploit dilemma. Journal of Experimental Psychology: General, 143(6):2074, 2014.

Xu, K., Ratner, E., Dragan, A., Levine, S., and Finn, C. Learning a prior over intent via meta-inverse reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, pp. 6952-6962, 2019.

Xu, T., Liu, Q., Zhao, L., and Peng, J. Learning to explore via meta-policy gradient. In International Conference on Machine Learning, pp. 5459-5468, 2018a.

Xu, Z., van Hasselt, H. P., and Silver, D. Meta-gradient reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2396-2407, 2018b.

Zheng, Z., Oh, J., and Singh, S. On learning intrinsic rewards for policy gradient methods. In Advances in Neural Information Processing Systems, pp. 4644-4654, 2018.