# Value Iteration Networks

Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel
Dept. of Electrical Engineering and Computer Sciences, UC Berkeley
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Abstract

We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN-based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.

1 Introduction

Over the last decade, deep convolutional neural networks (CNNs) have revolutionized supervised learning for tasks such as object recognition, action recognition, and semantic segmentation [3, 15, 6, 19]. Recently, CNNs have been applied to reinforcement learning (RL) tasks with visual observations such as Atari games [21], robotic manipulation [18], and imitation learning (IL) [9]. In these tasks, a neural network (NN) is trained to represent a policy: a mapping from an observation of the system's state to an action, with the goal of representing a control strategy that has good long-term behavior, typically quantified as the minimization of a sequence of time-dependent costs.

The sequential nature of decision making in RL is inherently different from the one-step decisions in supervised learning, and in general requires some form of planning [2]. However, most recent deep RL works [21, 18, 9] employed NN architectures that are very similar to the standard networks used in supervised learning tasks, which typically consist of CNNs for feature extraction, and fully connected layers that map the features to a probability distribution over actions. Such networks are inherently reactive, and in particular, lack explicit planning computation. The success of reactive policies in sequential problems is due to the learning algorithm, which essentially trains a reactive policy to select actions that have good long-term consequences in its training domain.

To understand why planning can nevertheless be an important ingredient in a policy, consider the grid-world navigation task depicted in Figure 1 (left), in which the agent can observe a map of its domain, and is required to navigate between some obstacles to a target position. One hopes that after training a policy to solve several instances of this problem with different obstacle configurations, the policy would generalize to solve a different, unseen domain, as in Figure 1 (right). However, as we show in our experiments, while standard CNN-based networks can be easily trained to solve a set of such maps, they do not generalize well to new tasks outside this set, because they do not understand the goal-directed nature of the behavior. This observation suggests that the computation learned by reactive policies is different from planning, which is required to solve a new task¹.

¹ In principle, with enough training data that covers all possible task configurations, and a rich enough policy representation, a reactive policy can learn to map each task to its optimal policy.
In practice, this is often too expensive, and we offer a more data-efficient approach by exploiting a flexible prior about the planning computation underlying the behavior.

Figure 1: Two instances of a grid-world domain. The task is to move to the goal between the obstacles.

In this work, we propose a NN-based policy that can effectively learn to plan. Our model, termed a value-iteration network (VIN), has a differentiable planning program embedded within the NN structure. The key to our approach is an observation that the classic value-iteration (VI) planning algorithm [1, 2] may be represented by a specific type of CNN. By embedding such a VI network module inside a standard feed-forward classification network, we obtain a NN model that can learn the parameters of a planning computation that yields useful predictions. The VI block is differentiable, and the whole network can be trained using standard backpropagation. This makes our policy simple to train using standard RL and IL algorithms, and straightforward to integrate with NNs for perception and control.

Connections between planning algorithms and recurrent NNs were previously explored by Ilin et al. [12]. Our work builds on related ideas, but results in a more broadly applicable policy representation. Our approach is different from model-based RL [25, 4], which requires system identification to map the observations to a dynamics model, which is then solved for a policy. In many applications, including robotic manipulation and locomotion, accurate system identification is difficult, and modelling errors can severely degrade the policy performance. In such domains, a model-free approach is often preferred [18]. Since a VIN is just a NN policy, it can be trained model-free, without requiring explicit system identification. In addition, the effects of modelling errors in VINs can be mitigated by training the network end-to-end, similarly to the methods in [13, 11].

We demonstrate the effectiveness of VINs within standard RL and IL algorithms on various problems that require visual perception, continuous control, and natural-language-based decision making in the WebNav challenge [23]. After training, the policy learns to map an observation to a planning computation relevant for the task, and to generate action predictions based on the resulting plan. As we demonstrate, this leads to policies that generalize better to new, unseen, task instances.

2 Background

In this section we provide background on planning, value iteration, CNNs, and policy representations for RL and IL. In the sequel, we shall show that CNNs can implement a particular form of planning computation similar to the value iteration algorithm, which can then be used as a policy for RL or IL.

Value Iteration: A standard model for sequential decision making and planning is the Markov decision process (MDP) [1, 2]. An MDP $M$ consists of states $s \in \mathcal{S}$, actions $a \in \mathcal{A}$, a reward function $R(s, a)$, and a transition kernel $P(s'|s, a)$ that encodes the probability of the next state given the current state and action. A policy $\pi(a|s)$ prescribes an action distribution for each state. The goal in an MDP is to find a policy that obtains high rewards in the long term.
Formally, the value $V^\pi(s)$ of a state under policy $\pi$ is the expected discounted sum of rewards when starting from that state and executing policy $\pi$, $V^\pi(s) \doteq \mathbb{E}^\pi\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 = s\right]$, where $\gamma \in (0, 1)$ is a discount factor, and $\mathbb{E}^\pi$ denotes an expectation over trajectories of states and actions $(s_0, a_0, s_1, a_1, \dots)$, in which actions are selected according to $\pi$, and states evolve according to the transition kernel $P(s'|s, a)$. The optimal value function $V^*(s) \doteq \max_\pi V^\pi(s)$ is the maximal long-term return possible from a state. A policy $\pi^*$ is said to be optimal if $V^{\pi^*}(s) = V^*(s)\ \forall s$. A popular algorithm for calculating $V^*$ and $\pi^*$ is value iteration (VI):

$$V_{n+1}(s) = \max_a Q_n(s, a)\ \ \forall s, \quad \text{where} \quad Q_n(s, a) = R(s, a) + \gamma \sum_{s'} P(s'|s, a) V_n(s'). \tag{1}$$

It is well known that the value function $V_n$ in VI converges as $n \to \infty$ to $V^*$, from which an optimal policy may be derived as $\pi^*(s) = \arg\max_a Q_\infty(s, a)$.

Convolutional Neural Networks (CNNs) are NNs with a particular architecture that has proved useful for computer vision, among other domains [8, 16, 3, 15]. A CNN is comprised of stacked convolution and max-pooling layers. The input to each convolution layer is a 3-dimensional signal $X$, typically an image with $l$ channels, $m$ horizontal pixels, and $n$ vertical pixels, and its output $h$ is an $l'$-channel convolution of the image with kernels $W^1, \dots, W^{l'}$, $h_{l',i',j'} = \sigma\left(\sum_{l,i,j} W^{l'}_{l,i,j} X_{l,\,i'-i,\,j'-j}\right)$, where $\sigma$ is some scalar activation function. A max-pooling layer selects, for each channel $l$ and pixel $i, j$ in $h$, the maximum value among its neighbors $N(i, j)$, $h^{\text{maxpool}}_{l,i,j} = \max_{i',j' \in N(i,j)} h_{l,i',j'}$. Typically, the neighbors $N(i, j)$ are chosen as a $k \times k$ image patch around pixel $i, j$. After max-pooling, the image is down-sampled by a constant factor $d$, commonly 2 or 4, resulting in an output signal with $l'$ channels, $m/d$ horizontal pixels, and $n/d$ vertical pixels. CNNs are typically trained using stochastic gradient descent (SGD), with backpropagation for computing gradients.

Reinforcement Learning and Imitation Learning: In MDPs where the state space is very large or continuous, or when the MDP transitions or rewards are not known in advance, planning algorithms cannot be applied. In these cases, a policy can be learned from either expert supervision (IL) or by trial and error (RL). While the learning algorithms in both cases are different, the policy representations, which are the focus of this work, are similar. Additionally, most state-of-the-art algorithms such as [24, 21, 26, 18] are agnostic to the policy representation, and only require it to be differentiable, for performing gradient descent on some algorithm-specific loss function. Therefore, in this paper we do not commit to a specific learning algorithm, and only consider the policy.

Let $\phi(s)$ denote an observation for state $s$. The policy is specified as a parametrized function $\pi_\theta(a|\phi(s))$ mapping observations to a probability over actions, where $\theta$ are the policy parameters. For example, the policy could be represented as a neural network, with $\theta$ denoting the network weights. The goal is to tune the parameters such that the policy behaves well, in the sense that $\pi_\theta(a|\phi(s)) \approx \pi^*(a|\phi(s))$, where $\pi^*$ is the optimal policy for the MDP, as defined in Section 2. In IL, a dataset of $N$ state observations and corresponding optimal actions $\{\phi(s_i), a_i \sim \pi^*(\phi(s_i))\}_{i=1,\dots,N}$ is generated by an expert. Learning a policy then becomes an instance of supervised learning [24, 9].
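For later reference, the exact VI computation of Eq. (1), which the VI module of Section 3 will approximate, can be written in a few lines. The following tabular sketch is our own illustration (the array layout, function name, and fixed iteration count are assumptions, not material from the paper):

```python
import numpy as np

def value_iteration(R, P, gamma=0.99, K=100):
    """Tabular VI following Eq. (1); runs a fixed number of iterations K (K >= 1).

    R: rewards, shape (S, A)
    P: transition kernel, shape (S, A, S), with P[s, a, s2] = P(s2 | s, a)
    Returns the value function V (shape (S,)) and a greedy policy (shape (S,)).
    """
    S, _ = R.shape
    V = np.zeros(S)
    for _ in range(K):
        # Q_n(s, a) = R(s, a) + gamma * sum_{s'} P(s'|s, a) V_n(s')
        Q = R + gamma * (P @ V)
        # V_{n+1}(s) = max_a Q_n(s, a)
        V = Q.max(axis=1)
    # greedy policy from the final Q-function
    return V, Q.argmax(axis=1)
```

In the grid-world experiments of Section 4.1, exact VI of this kind is what produces the expert demonstrations used for IL.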
In RL, the optimal action is not available, but instead, the agent can act in the world and observe the rewards and state transitions that its actions effect. RL algorithms such as those in [27, 21, 26, 18] use these observations to improve the value of the policy.

3 The Value Iteration Network Model

In this section we introduce a general policy representation that embeds an explicit planning module. As stated earlier, the motivation for such a representation is that a natural solution to many tasks, such as the path planning described above, involves planning on some model of the domain.

Let $M$ denote the MDP of the domain for which we design our policy $\pi$. We assume that there is some unknown MDP $\bar{M}$ such that the optimal plan in $\bar{M}$ contains useful information about the optimal policy in the original task $M$. However, we emphasize that we do not assume to know $\bar{M}$ in advance. Our idea is to equip the policy with the ability to learn and solve $\bar{M}$, and to add the solution of $\bar{M}$ as an element in the policy $\pi$. We hypothesize that this will lead to a policy that automatically learns a useful $\bar{M}$ to plan on. We denote by $\bar{s} \in \bar{\mathcal{S}}$, $\bar{a} \in \bar{\mathcal{A}}$, $\bar{R}(\bar{s}, \bar{a})$, and $\bar{P}(\bar{s}'|\bar{s}, \bar{a})$ the states, actions, rewards, and transitions in $\bar{M}$. To facilitate a connection between $M$ and $\bar{M}$, we let $\bar{R}$ and $\bar{P}$ depend on the observation in $M$, namely, $\bar{R} = f_R(\phi(s))$ and $\bar{P} = f_P(\phi(s))$, and we will later learn the functions $f_R$ and $f_P$ as a part of the policy learning process. For example, in the grid-world domain described above, we can let $\bar{M}$ have the same state and action spaces as the true grid-world $M$. The reward function $f_R$ can map an image of the domain to a high reward at the goal, and negative reward near an obstacle, while $f_P$ can encode deterministic movements in the grid-world that do not depend on the observation. While these rewards and transitions are not necessarily the true rewards and transitions in the task, an optimal plan in $\bar{M}$ will still follow a trajectory that avoids obstacles and reaches the goal, similarly to the optimal plan in $M$.

Once an MDP $\bar{M}$ has been specified, any standard planning algorithm can be used to obtain the value function $\bar{V}^*$. In the next section, we shall show that using a particular implementation of VI for planning has the advantage of being differentiable, and simple to implement within a NN framework. In this section, however, we focus on how to use the planning result $\bar{V}^*$ within the NN policy $\pi$. Our approach is based on two important observations. The first is that the vector of values $\bar{V}^*(\bar{s})\ \forall \bar{s}$ encodes all the information about the optimal plan in $\bar{M}$. Thus, adding the vector $\bar{V}^*$ as additional features to the policy $\pi$ is sufficient for extracting information about the optimal plan in $\bar{M}$.

However, an additional property of $\bar{V}^*$ is that the optimal decision $\bar{\pi}^*(\bar{s})$ at a state $\bar{s}$ can depend only on a subset of the values of $\bar{V}^*$, since $\bar{\pi}^*(\bar{s}) = \arg\max_{\bar{a}} \bar{R}(\bar{s}, \bar{a}) + \gamma \sum_{\bar{s}'} \bar{P}(\bar{s}'|\bar{s}, \bar{a}) \bar{V}^*(\bar{s}')$. Therefore, if the MDP has a local connectivity structure, such as in the grid-world example above, the set of states $\bar{s}'$ for which $\bar{P}(\bar{s}'|\bar{s}, \bar{a}) > 0$ is a small subset of $\bar{\mathcal{S}}$. In NN terminology, this is a form of attention [31], in the sense that for a given label prediction (action), only a subset of the input features (value function) is relevant. Attention is known to improve learning performance by reducing the effective number of network parameters during learning. Therefore, the second element in our network is an attention module that outputs a vector of (attention-modulated) values $\psi(s)$.
Finally, the vector $\psi(s)$ is added as additional features to a reactive policy $\pi_{re}(a|\phi(s), \psi(s))$. The full network architecture is depicted in Figure 2 (left). Returning to our grid-world example, at a particular state $s$, the reactive policy only needs to query the values of the states neighboring $s$ in order to select the correct action. Thus, the attention module in this case could return a $\psi(s)$ vector with a subset of $\bar{V}^*$ for these neighboring states.

Figure 2: Planning-based NN models. Left: a general policy representation that adds value function features from a planner to a reactive policy. Right: the VI module, a CNN representation of the VI algorithm.

Let $\theta$ denote all the parameters of the policy, namely, the parameters of $f_R$, $f_P$, and $\pi_{re}$, and note that $\psi(s)$ is in fact a function of $\phi(s)$. Therefore, the policy can be written in the form $\pi_\theta(a|\phi(s))$, similarly to the standard policy form (cf. Section 2). If we could back-propagate through this function, then potentially we could train the policy using standard RL and IL algorithms, just like any other standard policy representation. While it is easy to design functions $f_R$ and $f_P$ that are differentiable (and we provide several examples in our experiments), back-propagating the gradient through the planning algorithm is not trivial. In the following, we propose a novel interpretation of an approximate VI algorithm as a particular form of a CNN. This allows us to conveniently treat the planning module as just another NN, and by back-propagating through it, we can train the whole policy end-to-end.

3.1 The VI Module

We now introduce the VI module: a NN that encodes a differentiable planning computation. Our starting point is the VI algorithm (1). Our main observation is that each iteration of VI may be seen as passing the previous value function $V_n$ and reward function $R$ through a convolution layer and max-pooling layer. In this analogy, each channel in the convolution layer corresponds to the Q-function for a specific action, and the convolution kernel weights correspond to the discounted transition probabilities. Thus, by recurrently applying a convolution layer $K$ times, $K$ iterations of VI are effectively performed.

Following this idea, we propose the VI network module, as depicted in Figure 2 (right). The input to the VI module is a reward image $\bar{R}$ of dimensions $l, m, n$, where here, for the purpose of clarity, we follow the CNN formulation and explicitly assume that the state space $\bar{\mathcal{S}}$ maps to a 2-dimensional grid. However, our approach can be extended to general discrete state spaces, for example, a graph, as we report in the WebNav experiment in Section 4.4. The reward is fed into a convolutional layer $\bar{Q}$ with $|\bar{\mathcal{A}}|$ channels and a linear activation function, $\bar{Q}_{\bar{a},i',j'} = \sum_{l,i,j} W^{\bar{a}}_{l,i,j} \bar{R}_{l,\,i'-i,\,j'-j}$. Each channel in this layer corresponds to $\bar{Q}(\bar{s}, \bar{a})$ for a particular action $\bar{a}$. This layer is then max-pooled along the actions channel to produce the next-iteration value function layer $\bar{V}$, $\bar{V}_{i,j} = \max_{\bar{a}} \bar{Q}(\bar{a}, i, j)$. The next-iteration value function layer $\bar{V}$ is then stacked with the reward $\bar{R}$, and fed back into the convolutional layer and max-pooling layer $K$ times, to perform $K$ iterations of value iteration.

The VI module is simply a NN architecture that has the capability of performing an approximate VI computation. Nevertheless, representing VI in this form makes learning the MDP parameters and reward function natural by backpropagating through the network, similarly to a standard CNN.
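To make this computation concrete, below is a minimal NumPy sketch of one possible VI-module implementation. This is our own illustration, not the authors' released Theano code; splitting the convolution weights into a reward part `W_R` and a value part `W_V`, the kernel size `F`, and initializing the value image to zeros are our assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def vi_module(R_img, W_R, W_V, K):
    """One possible VI-module computation (illustrative shapes and names).

    R_img: reward image, shape (L, M, N)  -- L reward channels on an M x N grid
    W_R:   kernels applied to the reward channels, shape (A, L, F, F)
    W_V:   kernels applied to the value image,     shape (A, F, F)
    K:     number of VI iterations (the recurrence depth)
    Returns Q_bar (A, M, N) and V_bar (M, N) after K iterations.
    """
    A = W_R.shape[0]
    L, M, N = R_img.shape
    V = np.zeros((M, N))                     # initial value image (assumption)
    for _ in range(K):
        Q = np.zeros((A, M, N))
        for a in range(A):
            # linear convolution layer: one channel per abstract action a
            for l in range(L):
                Q[a] += convolve2d(R_img[l], W_R[a, l], mode="same")
            Q[a] += convolve2d(V, W_V[a], mode="same")
        # max-pool over the action channel to get the next value image
        V = Q.max(axis=0)
    return Q, V
```

In a differentiable framework, this loop is simply unrolled for a fixed $K$, so that gradients with respect to the kernels, which play the role of the discounted transition probabilities, flow through all $K$ iterations.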
VI modules can also be composed hierarchically, by treating the value of one VI module as additional input to another VI module. We further report on this idea in the supplementary material.

3.2 Value Iteration Networks

We now have all the ingredients for a differentiable planning-based policy, which we term a value iteration network (VIN). The VIN is based on the general planning-based policy defined above, with the VI module as the planning algorithm. In order to implement a VIN, one has to specify the state and action spaces for the planning module $\bar{\mathcal{S}}$ and $\bar{\mathcal{A}}$, the reward and transition functions $f_R$ and $f_P$, and the attention function; we refer to this as the VIN design. For some tasks, as we show in our experiments, it is relatively straightforward to select a suitable design, while other tasks may require more thought. However, we emphasize an important point: the reward, transitions, and attention can be defined by parametric functions, and trained with the whole policy². Thus, a rough design can be specified, and then fine-tuned by end-to-end training.

Once a VIN design is chosen, implementing the VIN is straightforward, as it is simply a form of a CNN. The networks in our experiments all required only several lines of Theano [28] code. In the next section, we evaluate VIN policies on various domains, showing that by learning to plan, they achieve a better generalization capability.

4 Experiments

In this section we evaluate VINs as policy representations on various domains. Additional experiments investigating RL and hierarchical VINs, as well as technical implementation details, are discussed in the supplementary material. Source code is available at https://github.com/avivt/VIN. Our goal in these experiments is to investigate the following questions:

1. Can VINs effectively learn a planning computation using standard RL and IL algorithms?
2. Does the planning computation learned by VINs make them better than reactive policies at generalizing to new domains?

An additional goal is to point out several ideas for designing VINs for various tasks. While this is not an exhaustive list that fits all domains, we hope that it will motivate creative designs in future work.

4.1 Grid-World Domain

Our first experiment domain is a synthetic grid-world with randomly placed obstacles, in which the observation includes the position of the agent, and also an image of the map of obstacles and goal position. Figure 3 shows two random instances of such a grid-world of size 16 × 16. We conjecture that by learning the optimal policy for several instances of this domain, a VIN policy would learn the planning computation required to solve a new, unseen, task. In such a simple domain, an optimal policy can easily be calculated using exact VI. Note, however, that here we are interested in evaluating whether a NN policy, trained using RL or IL, can learn to plan. In the following results, policies were trained using IL, by standard supervised learning from demonstrations of the optimal policy. In the supplementary material, we report additional RL experiments that show similar findings.

We design a VIN for this task following the guidelines described above, where the planning MDP $\bar{M}$ is a grid-world, similar to the true MDP. The reward mapping $f_R$ is a CNN mapping the image input to a reward map in the grid-world. Thus, $f_R$ should potentially learn to discriminate between obstacles, non-obstacles, and the goal, and assign a suitable reward to each.
The transitions $\bar{P}$ were defined as $3 \times 3$ convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local³. The recurrence $K$ was chosen in proportion to the grid-world size, to ensure that information can flow from the goal state to any other state. For the attention module, we chose a trivial approach that selects the $\bar{Q}$ values in the VI block for the current state, i.e., $\psi(s) = \bar{Q}(s, \cdot)$. The final reactive policy is a fully connected network that maps $\psi(s)$ to a probability over actions.

We compare VINs to the following NN reactive policies:

CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers, and a fully connected output. While the network in [21] was trained to predict Q values, our network outputs a probability over actions. These terms are related, since $\pi^*(s) = \arg\max_a Q(s, a)$.

Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state.

In Table 1 we present the average 0-1 prediction loss of each model, evaluated on a held-out test set of maps with random obstacles, goals, and initial states, for different problem sizes. In addition, for each map, a full trajectory from the initial state was predicted, by iteratively rolling out the next states predicted by the network. A trajectory was said to succeed if it reached the goal without hitting obstacles. For each trajectory that succeeded, we also measured its difference in length from the optimal trajectory. The average difference and the average success rate are reported in Table 1.

² VINs are fundamentally different from inverse RL methods [22], where transitions are required to be known.
³ Note that the transitions defined this way do not depend on the state $\bar{s}$. Interestingly, we shall see that the network learned to plan successful trajectories nevertheless, by appropriately shaping the reward.

Figure 3: Grid-world domains (best viewed in color). A, B: Two random instances of the 28 × 28 synthetic grid-world, with the VIN-predicted trajectories and ground-truth shortest paths between random start and goal positions. C: An image of the Mars domain, with points of elevation sharper than 10 degrees colored in red. These points were calculated from a matching image of elevation data (not shown), and were not available to the learning algorithm. Note the difficulty of distinguishing between obstacles and non-obstacles. D: The VIN-predicted (purple line with cross markers) and the shortest-path ground-truth (blue line) trajectories between random start and goal positions.

| Domain | VIN pred. loss | VIN succ. rate | VIN traj. diff. | CNN pred. loss | CNN succ. rate | CNN traj. diff. | FCN pred. loss | FCN succ. rate | FCN traj. diff. |
|---|---|---|---|---|---|---|---|---|---|
| 8 × 8 | 0.004 | 99.6% | 0.001 | 0.02 | 97.9% | 0.006 | 0.01 | 97.3% | 0.004 |
| 16 × 16 | 0.05 | 99.3% | 0.089 | 0.10 | 87.6% | 0.06 | 0.07 | 88.3% | 0.05 |
| 28 × 28 | 0.11 | 97% | 0.086 | 0.13 | 74.2% | 0.078 | 0.09 | 76.6% | 0.08 |

Table 1: Performance on the grid-world domain: comparison with reactive policies. For all domain sizes, VIN networks significantly outperform standard reactive networks, and the performance gap increases dramatically with problem size.
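Putting the pieces together, the grid-world VIN forward pass described above can be sketched as follows, reusing the `vi_module` sketch from Section 3.1. The single-convolution reward mapping, the layer sizes, and all variable names are illustrative assumptions rather than the released implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def vin_gridworld_forward(obs_img, agent_pos, params, K):
    """Forward pass of a grid-world VIN (illustrative design).

    obs_img:   observation image, shape (2, M, N) -- obstacle map and goal map
    agent_pos: (i, j) grid cell of the agent
    params:    dict with
               'W_fR'  reward-mapping kernels,   shape (2, 3, 3)
               'W_R'   VI-block reward kernels,  shape (A, 1, 3, 3)
               'W_V'   VI-block value kernels,   shape (A, 3, 3)
               'W_pi'  reactive-policy weights,  shape (A_out, A)
    """
    C, M, N = obs_img.shape
    # f_R: map the observation image to a reward image (here a single linear conv layer)
    R_img = np.zeros((1, M, N))
    for c in range(C):
        R_img[0] += convolve2d(obs_img[c], params["W_fR"][c], mode="same")
    # VI block: K iterations of convolution + max over the action channel
    Q, _ = vi_module(R_img, params["W_R"], params["W_V"], K)
    # attention: select the Q values at the agent's current cell, psi(s) = Q(s, .)
    i, j = agent_pos
    psi = Q[:, i, j]
    # reactive policy: a fully connected layer with a softmax over actions
    return softmax(params["W_pi"] @ psi)
```

In IL training, a cross-entropy loss on this output distribution can then be back-propagated through the attention selection, the VI block, and $f_R$ alike.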
Clearly, VIN policies generalize to domains outside the training set. A visualization of the reward mapping $f_R$ (see supplementary material) shows that it is negative at obstacles, positive at the goal, and a small negative constant otherwise. The resulting value function has a gradient pointing towards the goal along paths that avoid the obstacles; thus, a useful planning computation was learned. VINs also significantly outperform the reactive networks, and the performance gap increases dramatically with the problem size. Importantly, note that the prediction loss for the reactive policies is comparable to that of the VINs, although their success rate is significantly worse. This shows that this is not a standard case of overfitting or underfitting of the reactive policies. Rather, VIN policies, by their VI structure, focus prediction errors on the less important parts of the trajectory, while reactive policies do not make this distinction, and learn the easily predictable parts of the trajectory yet fail on the complete task.

The VINs have an effective depth of $K$, which is larger than the depth of the reactive policies. One may wonder whether any deep-enough network would learn to plan. In principle, a CNN or FCN of depth $K$ has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the $K$ recurrent layers of the VIN. Our results, reported in the supplementary material, show that untying the weights degrades performance, with a stronger effect for smaller training-set sizes.

4.2 Mars Rover Navigation

In this experiment we show that VINs can learn to plan from natural image input. We demonstrate this on path planning from overhead terrain images of a Mars landscape. Each domain is represented by a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if the terrain in its corresponding 8 × 8 image patch contained an elevation angle of 10 degrees or more, evaluated using an external elevation database. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping $f_R$ for processing the image. The policy was trained to predict the shortest path directly from the terrain image. We emphasize that the elevation data is not part of the input, and must be inferred (if needed) from the terrain image.

After training, the VIN achieved a success rate of 84.8%. To put this rate in context, we compare with the best performance achievable without access to the elevation data, which is 90.3%. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN did not have access to these ground-truth obstacle labels during training or testing). The success rate of a planner that uses the obstacle map generated by this classifier from the raw image is 90.3%, showing that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN, which was trained without any obstacle labels and had to figure out the planning process on its own, is quite remarkable.
4.3 Continuous Control

| Network | Train error | Test error |
|---|---|---|
| VIN | 0.30 | 0.35 |
| CNN | 0.39 | 0.59 |

Figure 4: Continuous control domain. Top: average distance to goal on training and test domains for VIN and CNN policies. Bottom: trajectories predicted by VIN and CNN on test domains.

We now consider a 2D path-planning domain with continuous states and continuous actions, which cannot be solved using VI, and therefore a VIN cannot be naively applied. Instead, we will construct the VIN to perform high-level planning on a discrete, coarse, grid-world representation of the continuous domain. We shall show that a VIN can learn such a high-level plan, and also exploit that plan within its low-level continuous control policy. Moreover, the VIN policy results in better generalization than a reactive policy.

Consider the domain in Figure 4. A red-colored particle needs to be navigated to a green goal using horizontal and vertical forces. Gray-colored obstacles are randomly positioned in the domain, and apply an elastic force and friction when contacted. This domain presents a non-trivial control problem, as the agent needs to plan a feasible trajectory between the obstacles (or use them to bounce off), and also to control the particle (which has mass and inertia) to follow it. The state observation consists of the particle's continuous position and velocity, and a static 16 × 16 down-scaled image of the obstacles and goal position in the domain. In principle, such an observation is sufficient to devise a rough plan for the particle to follow.

As in our previous experiments, we investigate whether a policy trained on several instances of this domain with different start-state, goal, and obstacle positions would generalize to an unseen domain. For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts, and we used the publicly available GPS code [7] and MuJoCo [29] for physical simulation. We generated 200 random training instances, and evaluated performance on 40 different test instances from the same distribution.

Our VIN design is similar to the grid-world cases, with some important modifications: the attention module selects a $5 \times 5$ patch of the value $\bar{V}$, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions. This is a reasonable prior for the weights in a 2D task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map $f_R$ is not yet learned. We compare with a CNN-based reactive policy inspired by the state-of-the-art results in [21, 20], with 2 CNN layers for image processing, followed by a 3-layer fully connected network similar to the VIN reactive policy.

Figure 4 shows the performance of the trained policies, measured as the final distance to the target. The VIN clearly outperforms the CNN on test domains. We also plot several trajectories of both policies on test domains, showing that the VIN learned a more sensible generalization of the task.

4.4 WebNav Challenge

In the previous experiments, the planning aspect of the task corresponded to 2D navigation. We now consider a more general domain: WebNav [23], a language-based search task on a graph.
In WebNav [23], the agent needs to navigate the links of a website towards a goal web-page, specified by a short 4-sentence query. At each state $s$ (web-page), the agent can observe average word-embedding features of the state $\phi(s)$ and of the possible next states $\phi(s')$ (linked pages), as well as the features of the query $\phi(q)$, and based on these has to select which link to follow. In [23], the search was performed on the Wikipedia website. Here, we report experiments on the Wikipedia for Schools website, a simplified Wikipedia designed for children, with over 6000 pages and at most 292 links per page.

In [23], a NN-based policy was proposed, which first learns a NN mapping from $(\phi(s), \phi(q))$ to a hidden state vector $h$. The action is then selected according to $\pi(s'|\phi(s), \phi(q)) \propto \exp\left(h^\top \phi(s')\right)$. In essence, this policy is reactive, and relies on the word-embedding features at each state to contain meaningful information about the path to the goal. Indeed, this property naturally holds for an encyclopedic website that is structured as a tree of categories, sub-categories, sub-sub-categories, etc.

We sought to explore whether planning, based on a VIN, can lead to better performance in this task, with the intuition that a plan on a simplified model of the website can help guide the reactive policy in difficult queries. Therefore, we designed a VIN that plans on a small subset of the graph that contains only the 1st- and 2nd-level categories (< 3% of the graph), and their word-embedding features. Designing this VIN requires a different approach from the grid-world VINs described earlier, where the most challenging aspect is to define a meaningful mapping between nodes in the true graph and nodes in the smaller VIN graph.

For the reward mapping $f_R$, we chose a weighted similarity measure between the query features $\phi(q)$ and the features of nodes in the small graph $\phi(\bar{s})$. Thus, intuitively, nodes that are similar to the query should have high reward. The transitions were fixed based on the graph connectivity of the smaller VIN graph, which is known, though different from the true graph. The attention module was also based on a weighted similarity measure, between the features of the possible next states $\phi(s')$ and the features of each node in the simplified graph $\phi(\bar{s})$. The reactive policy part of the VIN was similar to the policy of [23] described above. Note that by training such a VIN end-to-end, we are effectively learning how to exploit the small graph for doing better planning on the true, large graph.

Both the VIN policy and the baseline reactive policy were trained by supervised learning, on random trajectories that start from the root node of the graph. Similarly to [23], a policy is said to succeed on a query if all the correct predictions along the path are within its top-4 predictions. After training, the VIN policy performed mildly better than the baseline on 2000 held-out test queries when starting from the root node, achieving 1030 successful runs vs. 1025 for the baseline. However, when we tested the policies on the harder task of starting from a random position in the graph, VINs significantly outperformed the baseline, achieving 346 successful runs vs. 304 for the baseline, out of 4000 test queries. These results confirm that, indeed, when navigating a tree of categories starting from the root, the features at each state contain meaningful information about the path to the goal, making a reactive policy sufficient.
However, when starting the navigation from a different state, a reactive policy may fail to understand that it needs to first go back to the root and switch to a different branch of the tree. Our results indicate that such a strategy can be better represented by a VIN. We remark that there is still room for further improvement of the WebNav results, e.g., by better models for the reward and attention functions, and better word-embedding representations of text.

5 Conclusion and Outlook

The introduction of powerful and scalable RL methods has opened up a range of new problems for deep learning. However, few recent works investigate policy architectures that are specifically tailored for planning under uncertainty, and current RL theory and benchmarks rarely investigate the generalization properties of a trained policy [27, 21, 5]. This work takes a step in this direction, by exploring better-generalizing policy representations. Our VIN policies learn an approximate planning computation relevant for solving the task, and we have shown that such a computation leads to better generalization in a diverse set of tasks, ranging from simple gridworlds that are amenable to value iteration, to continuous control, and even to navigation of Wikipedia links. In future work we intend to learn different planning computations, based on simulation [10], or optimal linear control [30], and combine them with reactive policies, to potentially develop RL solutions for task and motion planning [14].

Acknowledgments

This research was funded in part by Siemens, by ONR through a PECASE award, by the Army Research Office through the MAST program, and by an NSF CAREER grant. A. T. was partially funded by the Viterbi Scholarship, Technion. Y. W. was partially funded by a DARPA PPAML program, contract FA8750-14-C-0011.

References

[1] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 4th edition, 2012.
[3] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[4] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, 2011.
[5] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[7] C. Finn, M. Zhang, J. Fu, X. Tan, Z. McCarthy, E. Scharff, and S. Levine. Guided policy search code implementation, 2016. Software available from rll.berkeley.edu/gps.
[8] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position: Neocognitron. Transactions of the IECE, J62-A(10):658–665, 1979.
[9] A. Giusti et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
[10] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[11] X. Guo, S. Singh, R. Lewis, and H. Lee. Deep learning for reward design to improve Monte Carlo tree search in Atari games. arXiv preprint arXiv:1604.07095, 2016.
[12] R. Ilin, R. Kozma, and P. J. Werbos.
Efficient learning in cellular simultaneous recurrent neural networks: the case of maze navigation problem. In ADPRL, 2007.
[13] J. Joseph, A. Geramifard, J. W. Roberts, J. P. How, and N. Roy. Reinforcement learning with misspecified model classes. In ICRA, 2013.
[14] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In IEEE International Conference on Robotics and Automation (ICRA), pages 1470–1477, 2011.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[17] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In UAI, 2007.
[23] R. Nogueira and K. Cho. WebNav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261, 2016.
[24] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In International Joint Conference on Neural Networks. IEEE, 1990.
[26] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[27] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[28] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
[29] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[30] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[31] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.