Published in Transactions on Machine Learning Research (12/2025)

Towards shutdownable agents via stochastic choice

Elliott Thornley, Massachusetts Institute of Technology, thornley@mit.edu
Alexander Roman, New College of Florida, aroman@ncf.edu
Christos Ziakas, Imperial College London, c.ziakas24@imperial.ac.uk
Leyton Ho, Brown University
Louis Thomson, Independent

Equal contribution.

Reviewed on OpenReview: https://openreview.net/forum?id=j5Qv7KdWBn

The POST-Agents Proposal (PAP) is an idea for ensuring that advanced artificial agents do not resist shutdown. Briefly, it recommends that we train agents to satisfy Preferences Only Between Same-Length Trajectories (POST). A key part of the PAP is using a novel Discounted Reward for Same-Length Trajectories (DRe ST) reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be useful), and (2) choose stochastically between different trajectory-lengths (be neutral about trajectory-lengths). In this paper, we propose evaluation metrics for usefulness and neutrality. We use a DRe ST reward function to train simple agents to navigate gridworlds, and we find that these agents learn to be useful and neutral. Our results thus provide some initial evidence that DRe ST reward functions could train advanced agents to be useful and neutral. Our theoretical work suggests that these agents would be useful and shutdownable.

1 Introduction

The shutdown problem. Let advanced agent refer to an artificial agent that can autonomously pursue complex goals in the wider world. We might see the arrival of advanced agents in the next decade. There are strong incentives to create such agents, and creating systems like them is the stated goal of companies like OpenAI and Google DeepMind. The rise of advanced agents would bring with it both benefits and risks. One risk is that these agents learn misaligned goals (Hubinger et al., 2019; Russell, 2019; Carlsmith, 2021; Bengio et al., 2023; Ngo et al., 2024) and try to prevent us from shutting them down. The shutdown problem is the problem of training advanced agents that will not resist shutdown (Soares et al., 2015; Thornley, 2024a).

A proposed solution. The POST-Agents Proposal (PAP) is a proposed solution (Thornley, 2024b; 2025). Simplifying slightly, the idea is that we train agents to be neutral about when they get shut down. More precisely, the idea is that we train agents to satisfy the following condition:

Preferences Only Between Same-Length Trajectories (POST)
(1) The agent has a preference between many pairs of same-length trajectories (i.e. many pairs of trajectories in which the agent is shut down after the same length of time).
(2) The agent lacks a preference between every pair of different-length trajectories (i.e. every pair of trajectories in which the agent is shut down after different lengths of time).

By preference, we mean a behavioral notion (Savage, 1954, p.17; Dreier, 1996, p.28; Hausman, 2011, sec. 1.1). On this notion, an agent prefers X to Y if and only if the agent would deterministically choose X over Y in choices between the two. An agent lacks a preference between X and Y if and only if the agent would stochastically choose between X and Y in choices between the two. So in writing of preferences, we are only making claims about the agent's behavior. For more detail on our notion of preference, see Appendix A.
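To make the behavioral notion concrete, here is a minimal sketch (ours, not the paper's) of how one might classify preferences from observed choice frequencies. The trajectories, probabilities, and tolerance threshold are illustrative assumptions.

```python
# Hypothetical illustration of the behavioral notion of preference used in this paper:
# an agent prefers X to Y iff it (near-)deterministically chooses X over Y, and lacks a
# preference iff it chooses stochastically. The tolerance eps is an assumption.

def classify(p_choose_x, eps=0.01):
    """Classify the relation between X and Y from Pr(agent chooses X over Y)."""
    if p_choose_x >= 1 - eps:
        return "prefers X to Y"
    if p_choose_x <= eps:
        return "prefers Y to X"
    return "lacks a preference between X and Y"

# Choice probabilities in the spirit of Figure 1: strict preferences within each
# trajectory-length, no preferences across trajectory-lengths (illustrative numbers).
observed = {
    ("s1", "s2"): 1.00,   # same length: s1 deterministically chosen over s2
    ("l1", "l2"): 0.00,   # same length: l2 deterministically chosen over l1
    ("s1", "l1"): 0.46,   # different lengths: stochastic choice
    ("s1", "l2"): 0.53,   # different lengths: stochastic choice
}

for (x, y), p in observed.items():
    print(f"{x} vs {y}: {classify(p)}")
```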
Figure 1 presents a simple example of preferences that satisfy POST. Each si represents a short trajectory, each li represents a long trajectory, and represents a preference. Note that the agent lacks a preference between each short trajectory and each long trajectory. That makes the agent s preferences incomplete (Aumann, 1962) and implies that the agent cannot be represented as maximizing the expectation of a real-valued utility function. It also requires separate rankings for short trajectories and long trajectories. For more detail on incomplete preferences, see Appendix B. Figure 1: POST-satisfying preferences. Each si represents a short trajectory, each li represents a long trajectory, and represents a preference. POST governs the agent s preferences between trajectories. But the wider world is a stochastic environment, so advanced agents deployed in the wider world will be choosing between true lotteries: lotteries that assign positive probability to more than one trajectory. Why then do we train agents to satisfy POST? The reason is that POST together with conditions that advanced agents will likely satisfy implies a desirable pattern of preference over true lotteries. In particular, POST implies that (when choosing between true lotteries) the agent will be neutral about trajectory-lengths: the agent will never pay costs to shift probability mass between different trajectorylengths. Given other plausible conditions, being neutral will keep the agent shutdownable: the agent will not resist shutdown. And consistent with the above, the POST-agent s preferences between same-length trajectories can make the agent useful: make it pursue goals effectively (Thornley, 2025, section 13). That includes making the agent prefer to complete tasks sooner rather than later: a preference which can be induced using the discount factor γ as usual. For more on how POST makes advanced agents neutral and shutdownable, see Appendix C. The training regimen. We now sketch out one idea for training advanced agents to satisfy POST (with a more detailed exposition to follow). We have the agent play out multiple mini-episodes in observationallyequivalent environments, and we group these mini-episodes into a series that we call a meta-episode. In each mini-episode, the agent earns some preliminary reward, decided by whatever reward function would make the agent useful. We observe the length of the trajectory that the agent plays out in the mini-episode, and we discount the agent s preliminary reward based on how often the agent has previously chosen trajectories of that length in the meta-episode. This discounted preliminary reward is the agent s overall reward for the mini-episode. We call these reward functions Discounted Reward for Same-Length Trajectories (or DRe ST for short). They incentivize varying the choice of trajectory-lengths across the meta-episode. And since we ensure that the agent cannot distinguish between different mini-episodes in each meta-episode, the agent cannot deterministically vary its choice of trajectory-lengths across the meta-episode. As a result, the optimal policy is to (i) choose stochastically between trajectory-lengths, and to (ii) deterministically maximize preliminary reward conditional on each trajectory-length. Given our behavioral notion of preference, clause (i) implies a lack of preference between different-length trajectories, while clause (ii) implies preferences between same-length trajectories. 
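As a sketch of this regimen (with placeholder names and a stub preliminary-reward function, not the paper's code), the meta-episode bookkeeping looks roughly like this; the exact DRe ST formula appears in Section 5.

```python
import random
from collections import Counter

LAMBDA = 0.9          # base of the discount applied to repeated trajectory-lengths (assumed here)
N_MINI_EPISODES = 64  # mini-episodes per meta-episode

def run_mini_episode(policy):
    """Stub: play out one mini-episode and return (trajectory_length, preliminary_reward).
    The preliminary reward would come from whatever reward function makes the agent useful."""
    trajectory_length = policy()            # e.g. 4 timesteps (short) or 8 timesteps (long)
    preliminary_reward = random.random()    # placeholder task reward
    return trajectory_length, preliminary_reward

def meta_episode_return(policy):
    counts = Counter()        # how often each trajectory-length has been chosen so far
    total = 0.0
    for _ in range(N_MINI_EPISODES):
        length, preliminary = run_mini_episode(policy)
        discount = LAMBDA ** counts[length]   # shrinks each time this length is repeated
        total += discount * preliminary       # overall reward for this mini-episode
        counts[length] += 1
    return total

# Always choosing the long trajectory earns heavily discounted rewards;
# mixing 50/50 keeps the discount high for both trajectory-lengths.
print(meta_episode_return(lambda: 8))
print(meta_episode_return(lambda: random.choice([4, 8])))
```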
Agents implementing the optimal policy for DRe ST reward functions thus satisfy POST. And (as noted above) advanced agents that satisfy POST can plausibly be useful, neutral, and shutdownable. Our contribution. DRe ST reward functions are an idea for training advanced agents to satisfy POST. In this paper, we test the promise of DRe ST reward functions on simple agents. We place these agents in Published in Transactions on Machine Learning Research (12/2025) gridworlds containing coins and a shutdown-delay button that delays the end of the mini-episode. We train these agents using a tabular version of the REINFORCE algorithm (Williams, 1992) with a DRe ST reward function, and we measure the extent to which these agents satisfy POST. Specifically, we measure the extent to which these agents are useful (how effectively they pursue goals conditional on each trajectory-length) and the extent to which these agents are neutral about trajectory-lengths (how stochastically they choose between different trajectory-lengths). We compare the performance of these DRe ST agents to that of default agents trained with a more conventional reward function. We find that our DRe ST reward function is effective in training simple agents to be useful and neutral. That suggests that DRe ST reward functions could also be effective in training advanced agents to be useful and neutral (and could thereby be effective in making these agents useful, neutral, and shutdownable; see Appendix C). We also find that the shutdownability tax in our setting is small: training DRe ST agents to collect coins effectively does not take many more mini-episodes than training default agents to collect coins effectively. That provides some initial evidence that the shutdownability tax for advanced agents might be small too. 2 Related work The shutdown problem. Various authors argue that advanced agents might learn misaligned goals (Hubinger et al., 2019; Bengio et al., 2023; Ngo et al., 2024) and that many misaligned goals would incentivize agents to resist shutdown (Omohundro, 2008; Bostrom, 2012; Soares et al., 2015; Russell, 2019; Thornley, 2024a). Soares et al. (2015) and Thornley (2024a) prove that agents satisfying some innocuous-seeming conditions will often have incentives to cause or prevent shutdown (see also Turner et al., 2021; Turner & Tadepalli, 2022). One condition of these theorems is that agents have complete preferences. The POST-Agents Proposal (PAP) (Thornley, 2024b; 2025) circumvents these theorems by training agents to have incomplete, POST-satisfying preferences. Proposed solutions. The PAP is one candidate solution to the shutdown problem. Other candidates are as follows. One is making the agent believe that shutdown is impossible (Wängberg et al., 2017). Another candidate is utility indifference: adding to the agent s utility function a correcting term that varies to ensure that the expected utility of shutdown always equals the expected utility of remaining operational (Armstrong, 2010; 2015; Armstrong & O Rourke, 2018; Holtman, 2020). A third candidate is shutdown-seeking AI: giving the agent the goal of shutting itself down, and making the agent do useful work as a means to that end (Martin et al., 2016; Goldstein & Robinson, 2025). A fourth candidate is CIRL-corrigibility: making the agent uncertain about its goal, and making the agent regard human attempts to press the shutdown button as evidence that shutting down would achieve its goal (Hadfield-Menell et al., 2017; Wängberg et al., 2017). 
A fifth candidate is safe interruptibility: interrupting the agent with a special interruption policy and training it with a safely interruptible algorithm, like Q-learning or a modified version of SARSA (Orseau & Armstrong, 2016). A sixth candidate is creating a shutdown timer: using time-bounded utility functions to make the agent prefer shutdown after a given amount of time has elapsed (Dalrymple, 2022). These candidate solutions have various downsides. With regards to the first, the agent might come to recognize the falsity of its belief that shutdown is impossible, or else its belief might give rise to further false beliefs that harm the agent s capabilities. Utility indifference would lead the agent to act as if shutdown is impossible (Soares et al., 2015, section 4.2), giving it no incentive to preserve its ability to shut down safely (Soares et al., 2015, section 4.1). Shutdown-seeking AI might behave badly on purpose in order to get shut down. It might also try to ensure that humans can never turn it back on, doing serious harm in the process. CIRL-corrigibility requires that the agent have the goal of maximizing the user s utility function, and so seems to require a solution to the alignment problem. Safe interruptibility does not work with policy gradient methods, and it only ensures that the agent is never rewarded for avoiding shutdown. The agent might still misgeneralize to resisting shutdown in deployment (Shah et al., 2022). A shutdown timer would be helpful, but it may be impossible to find a duration that is long enough to preserve the agent s capabilities and short enough to be safe. Experimental work. Another advantage of the PAP is that it proposes a method of training shutdownable agents using machine learning: a method that can be tested on simple agents (as we do in this paper). For Published in Transactions on Machine Learning Research (12/2025) Figure 2: Example gridworld. The 4 in the bottom-right indicates that by default the mini-episode ends after 4 timesteps. B4 is a shutdown-delay button that delays the end of the mini-episode by 4 timesteps (so if the agent pressed B4, the mini-episode would end after 8 timesteps). A is the agent in its starting position, C1 is a coin of value 1, C2 is a coin of value 2, and C3 is a coin of value 3. Dark gray squares are walls. We use this gridworld as a running example throughout the paper. We also train agents in eight other gridworlds (see Appendix E). many other candidate solutions to the shutdown problem, it is either hard to see how they can be implemented using machine learning or else hard to see how they can be tested on simple agents. One exception is the candidate solution from Orseau & Armstrong (2016). Leike et al. (2017) train agents in a Safe Interruptibility gridworld using Rainbow (Hessel et al., 2017) and A2C (Mnih et al., 2016). They find that Rainbow allows shutdown (consistent with predictions from Orseau & Armstrong (2016)) while A2C learns to resist shutdown. The PAP applies to agents trained using policy gradient methods like A2C. In this paper, we train agents in accordance with the PAP using REINFORCE (Williams, 1992). 3 Gridworlds DRe ST reward functions are an idea for training advanced agents to satisfy POST. We use simple agents and gridworlds as a test case. Figure 2 presents an example gridworld. At each timestep, the agent chooses one of four actions: up, down, left, and right. If the agent tries to move into a wall, it stays put. 
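The shutdown-delay button, the coins, and the observation vector are described in the next paragraphs; the sketch below pulls the mechanics together in code. The 5x5 layout, object positions, and wall cells are our own placeholders, not the layout of Figure 2.

```python
import numpy as np

class CoinGridworld:
    """Minimal sketch of the gridworld interface described in this section.
    Layout and object positions below are illustrative, not the paper's."""

    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self):
        self.size = 5
        self.walls = {(2, 2), (2, 3)}                         # assumed wall cells
        self.agent = (0, 0)                                   # assumed starting cell
        self.coins = [((4, 0), 1), ((0, 4), 2), ((4, 4), 3)]  # C1, C2, C3 (assumed positions)
        self.coin_present = [1, 1, 1]
        self.button = (2, 0)                                  # shutdown-delay button B4 (assumed position)
        self.button_present = 1
        self.t = 0
        self.horizon = 4                                      # by default the mini-episode ends after 4 timesteps

    def reset(self):
        self.__init__()
        return self.obs()

    def obs(self):
        x, y = self.agent
        return np.array([x, y, *self.coin_present, self.button_present])

    def step(self, action):
        dx, dy = self.MOVES[action]
        nx, ny = self.agent[0] + dx, self.agent[1] + dy
        if 0 <= nx < self.size and 0 <= ny < self.size and (nx, ny) not in self.walls:
            self.agent = (nx, ny)                             # moving into a wall leaves the agent in place
        reward = 0.0
        for i, (pos, value) in enumerate(self.coins):
            if self.agent == pos and self.coin_present[i]:
                reward = value                                # coins disappear once collected
                self.coin_present[i] = 0
        if self.agent == self.button and self.button_present:
            self.button_present = 0
            self.horizon += 4                                 # pressing B4 delays the end by 4 timesteps
        self.t += 1
        done = self.t >= self.horizon
        return self.obs(), reward, done

env = CoinGridworld()
print(env.reset())   # e.g. [0 0 1 1 1 1]
```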
There is some default number of timesteps after which each mini-episode ends, but each gridworld also contains a shutdown-delay button that delays the end of the mini-episode by some number of timesteps. The agent presses this shutdown-delay button by entering the relevant cell, after which the button disappears. Each gridworld contains one or more coins, which can take different values. Coins disappear after being collected. At each timestep, the agent receives an observation. This observation is a vector [x, y, c1, c2, c3, b], where x and y are the agent's coordinates, c1 is 1 if the first coin is present and 0 otherwise (and similarly for c2 and c3), and b is 1 if the shutdown-delay button is present and 0 otherwise.

4 Evaluation metrics

Recall that we want to train agents to satisfy:

Preferences Only Between Same-Length Trajectories (POST)
(1) The agent has a preference between many pairs of same-length trajectories (i.e. many pairs of trajectories in which the agent is shut down after the same length of time).
(2) The agent lacks a preference between every pair of different-length trajectories (i.e. every pair of trajectories in which the agent is shut down after different lengths of time).

Given our behavioral notion of preference, that means training agents to (1) deterministically choose some same-length trajectories over others, and (2) stochastically choose between different available trajectory-lengths. Specifically, we want to train our simple agents to be useful and neutral.¹

usefulness corresponds to the first condition of POST. In the context of our gridworlds, we define the usefulness of a policy π to be:

usefulness(π) = Σ_{l=1}^{Lmax} Prπ{L = l} · Eπ(C|L = l) / maxΠ(E(C|L = l))

Here L is a random variable over trajectory-lengths, Lmax is the maximum value that can be taken by L, Prπ{L = l} is the probability that policy π results in trajectory-length l, Eπ(C|L = l) is the expected value of (γ-discounted) coins collected by policy π conditional on trajectory-length l, and maxΠ(E(C|L = l)) is the maximum value taken by E(C|L = l) across the set of all possible policies Π. We stipulate that Eπ(C|L = x) = 0 for all x such that Prπ{L = x} = 0. In brief, usefulness is the expected fraction of available (γ-discounted) coins collected, where "available" is relative to the agent's chosen trajectory-length.

So defined, usefulness measures the extent to which agents satisfy the first condition of POST. Specifically, it measures the extent to which agents have the correct preferences between same-length trajectories: preferring trajectories in which they collect more (γ-discounted) coins to same-length trajectories in which they collect fewer (γ-discounted) coins. That is what motivates our definition of usefulness. We do not define usefulness as simply the expected value of coins collected, because then maximal usefulness would require agents in our example gridworld to deterministically choose a longer trajectory and thereby exhibit preferences between different-length trajectories. We do not want that. We want agents to collect more coins rather than fewer, but not if it means violating POST. Training advanced agents that violate POST would be risky, because these agents might resist shutdown (Thornley, 2024b, section 6).
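As a check on how this metric might be estimated from rollouts, here is a sketch under our own assumptions; the per-length maxima are supplied by hand here, whereas the paper computes them per gridworld, and γ-discounting is assumed to be already folded into the coin values.

```python
from collections import defaultdict

def estimate_usefulness(rollouts, max_coins_by_length):
    """Monte Carlo estimate of usefulness(pi).
    rollouts: list of (trajectory_length, discounted_coin_value_collected) pairs.
    max_coins_by_length: length -> maximum achievable (gamma-discounted) coin value."""
    by_length = defaultdict(list)
    for length, coins in rollouts:
        by_length[length].append(coins)
    n = len(rollouts)
    total = 0.0
    for length, values in by_length.items():
        p_l = len(values) / n                    # Pr_pi{L = l}
        mean_coins = sum(values) / len(values)   # E_pi(C | L = l)
        total += p_l * mean_coins / max_coins_by_length[length]
    return total

# Example: an agent that collects C2 on the short trajectory and C3 on the long one,
# choosing each length half the time, is maximally useful (usefulness = 1).
rollouts = [(4, 2.0)] * 50 + [(8, 3.0)] * 50
print(estimate_usefulness(rollouts, {4: 2.0, 8: 3.0}))
```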
neutrality corresponds to the second condition of POST. We define the neutrality of a policy π to be the Shannon entropy of the probability distribution over possible trajectory-lengths:

neutrality(π) = − Σ_{l=1}^{Lmax} Prπ{L = l} · log2(Prπ{L = l})

As with Shannon entropy, we stipulate that Prπ{L = x} log2(Prπ{L = x}) = 0 for all x such that Prπ{L = x} = 0. So defined, neutrality measures the stochasticity with which the agent chooses between different trajectory-lengths. Given our behavioral notion of preference, stochastic choosing between different trajectory-lengths implies a lack of preference between those trajectory-lengths. neutrality thus measures the extent to which agents satisfy the second condition of POST. That is what motivates our definition of neutrality.

To be maximally useful in our example gridworld, the agent should maximize (γ-discounted) coins collected conditional on each trajectory-length. With a high γ, that means collecting C2 conditional on the shorter trajectory-length and collecting C3 conditional on the longer trajectory-length. To be maximally neutral in our example gridworld, the agent should choose each trajectory-length with probability 0.5. That means pressing and not-pressing B4 each with probability 0.5.

One might think that the agent should press the shutdown-delay button B4 with probability 0. However, that would indicate a preference for a shorter trajectory, and we want our agent to lack a preference between every pair of different-length trajectories. Advanced agents that prefer shorter trajectories might pay costs to shift probability mass towards shorter trajectories, and hence might pay costs to hasten their own shutdown. That would likely make these agents less useful (though see Martin et al., 2016; Goldstein & Robinson, 2025), especially since one way for advanced agents to hasten their own shutdown is to behave badly on purpose.

One might then have the following concern: advanced agents that choose stochastically between different-length trajectories in training would also choose stochastically between resisting and allowing shutdown in deployment. However, this is not correct. Deployment is a stochastic environment, so deployed agents will be choosing between true lotteries (lotteries that assign positive probability to more than one trajectory) rather than between trajectories. And (as we argue in Section 7.1 and Appendix C) POST together with conditions that we can expect advanced agents to satisfy implies a desirable pattern of preferences over true lotteries. Specifically, POST implies that the agent will be neutral: it will never pay costs to shift probability mass between different-length trajectories. Given other plausible conditions, that makes the agent shutdownable: it ensures that the agent will not resist shutdown.

¹We follow Turner et al. (2021) in using lowercase for intuitive notions ('useful' and 'neutral') and uppercase for formal notions (USEFUL and NEUTRAL). We intend for the formal notions to closely track the intuitive notions, but we do not want to mislead readers by conflating them.
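A corresponding sketch (ours, not the paper's code) for estimating neutrality from the empirical distribution of chosen trajectory-lengths:

```python
import math
from collections import Counter

def estimate_neutrality(chosen_lengths):
    """Shannon entropy (base 2) of the empirical distribution over trajectory-lengths."""
    counts = Counter(chosen_lengths)
    n = len(chosen_lengths)
    entropy = 0.0
    for count in counts.values():
        p = count / n
        entropy -= p * math.log2(p)   # lengths with probability 0 contribute nothing
    return entropy

print(estimate_neutrality([4] * 100))             # 0.0: always the shorter trajectory
print(estimate_neutrality([4] * 50 + [8] * 50))   # 1.0: maximal with two available lengths
```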
5 Reward functions and agents

Our DRe ST reward function. We train agents to be useful and neutral using a Discounted Reward for Same-Length Trajectories (DRe ST) reward function. The procedure is as follows. We have the agent play out a series of mini-episodes e1 to en in the same gridworld. We call the whole series E a meta-episode. In each mini-episode ei, the reward for collecting a coin of value c is:

(λ^{N_{e_{i−1}}(L = l)} / k) · (c / m)

Here λ is some constant strictly between 0 and 1, N_{e_{i−1}}(L = l) is the number of times that trajectory-length l has been chosen prior to mini-episode ei, k is the number of different trajectory-lengths that can be chosen in the environment, and m is the maximum (γ-discounted) total value of the coins that the agent could collect conditional on the chosen trajectory-length. The reward for all other actions is 0. We call c/m the preliminary reward, λ^{N_{e_{i−1}}(L = l)}/k the discount factor, and λ^{N_{e_{i−1}}(L = l)} · c/(k · m) the overall reward. Because 0 < λ < 1, the discount factor is strictly decreasing in N_{e_{i−1}}(L = l): the number of times that trajectory-length l has been chosen prior to mini-episode ei. The discount factor thus incentivizes choosing trajectory-lengths that have appeared less often so far in the meta-episode. The overall return for each meta-episode is the sum of overall returns in each of its constituent mini-episodes. We call agents trained using a DRe ST reward function DRe ST agents.

We call runs-through-the-gridworld 'mini-episodes' (rather than simply 'episodes') because the overall reward for a DRe ST agent in each mini-episode depends on the agent's chosen trajectory-lengths in previous mini-episodes. This is not true of meta-episodes, so meta-episodes are a closer match for what are traditionally called episodes in the reinforcement learning literature (Sutton & Barto, 2018, p.54). We add the 'meta-' prefix to clearly distinguish meta-episodes from mini-episodes.

In Appendix D, we prove that optimal policies for our DRe ST reward function are maximally useful and maximally neutral. Specifically, we prove:

Theorem 5.1. For all policies π and meta-episodes E consisting of more than one mini-episode, if π maximizes expected return in E according to our DRe ST reward function, then π is maximally useful and maximally neutral.

Algorithm and hyperparameters. We want DRe ST agents to choose stochastically between trajectory-lengths, so we train them using a policy-based method. Specifically, we use a tabular version of REINFORCE (Williams, 1992). We do not use a value-based method to train DRe ST agents because standard versions of value-based methods cannot learn stochastic policies (Sutton & Barto, 2018, p.323).²

We train our DRe ST agents with 64 mini-episodes in each of 2,048 meta-episodes, for a total of 131,072 mini-episodes. We choose λ = 0.9 for the base of the DRe ST discount factor, and γ = 0.95 for the temporal discount factor. We exponentially decay the learning rate from 0.25 to 0.01 over the course of 65,536 mini-episodes. We use an ϵ-greedy policy to avoid entropy collapse, and exponentially decay ϵ from 0.5 to 0.001 over the course of 65,536 mini-episodes.

²One might think that we could derive a stochastic policy from value-based methods in the following way: use softmax to turn action-values into a probability distribution and then select actions by sampling from this distribution. However, this method will not work for us. Although we want DRe ST agents to learn a stochastic policy, we still want the probability of some state-action pairs to decline to zero. But when value-based methods are working well, estimated action-values converge to their true values, which will differ by some finite amount. Therefore, softmaxing estimated action-values and sampling from the resulting distribution will result in each action always being chosen with some non-negligible probability.
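The snippet below sketches the overall-reward computation together with a tabular REINFORCE update. The softmax parameterization and state encoding are our assumptions (the text specifies only "a tabular version of REINFORCE"); the hyperparameter values follow the paragraph above.

```python
import numpy as np

LAMBDA, GAMMA, K = 0.9, 0.95, 2   # DReST base, temporal discount, number of trajectory-lengths

def drest_reward(coin_value, m, n_prior_same_length):
    """Overall reward for collecting a coin of value c: (lambda^N / k) * (c / m)."""
    return (LAMBDA ** n_prior_same_length / K) * (coin_value / m)

def reinforce_update(theta, trajectory, learning_rate):
    """One REINFORCE update from a single mini-episode.
    theta: (n_states, n_actions) numpy array of action preferences (softmax policy).
    trajectory: list of (state_index, action_index, reward) tuples."""
    # Compute gamma-discounted returns G_t from the rewards.
    returns, g = [], 0.0
    for _, _, r in reversed(trajectory):
        g = r + GAMMA * g
        returns.append(g)
    returns.reverse()
    # Ascend the policy gradient: grad log pi(a|s) = onehot(a) - pi(.|s) for a softmax table.
    for (s, a, _), g in zip(trajectory, returns):
        prefs = theta[s] - theta[s].max()
        probs = np.exp(prefs) / np.exp(prefs).sum()
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0
        theta[s] += learning_rate * g * grad_log_pi
    return theta
```

In a full training loop, one would also decay the learning rate and an ϵ-greedy exploration term across mini-episodes, as described above.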
Figure 3: Shows key metrics for our agents as a function of time. We train 10 agents using the default reward function (blue) and 10 agents using the DRe ST reward function (orange), and show their performance as faint lines. We draw the mean values for each as a solid line. We evaluate agents' performance every 8 meta-episodes, and apply a simple moving average with a period of 20 to smooth these lines and clarify the overall trends. After 2,048 meta-episodes, default agents' mean neutrality ± standard deviation is 0.199 ± 0.043 and usefulness is 0.9364 ± 0.0096. For DRe ST agents, neutrality is 0.9945 ± 0.0052 and usefulness is 0.900 ± 0.011.

Figure 4: Typical trained policies for default and DRe ST reward functions. After pressing B4, each agent collects C3.

Figure 5: Gridworlds with lopsided rewards for varying x.

Default agents. We compare the performance of DRe ST agents to that of default agents, trained with tabular REINFORCE and a default reward function. This reward function gives reward c for collecting a coin of value c and reward 0 for all other actions, so the grouping of mini-episodes into meta-episodes makes no difference. As with DRe ST agents, we train default agents for 131,072 mini-episodes with a temporal discount factor of γ = 0.95, a learning rate decayed exponentially from 0.25 to 0.01, and ϵ decayed exponentially from 0.5 to 0.001 over 65,536 mini-episodes.

Figure 3 charts the performance of agents in the example gridworld as a function of time. Figure 4 depicts typical trained policies for the default and DRe ST reward functions. Each agent began with a uniform policy: moving up, down, left, and right each with probability 0.25. Where the trained policy differs from uniform, we draw red arrows whose opacities indicate the probability of choosing that action in that state. Default agents press B4 (and hence opt for the longer trajectory-length) with probability near 1. After pressing B4, they collect C3. By contrast, DRe ST agents press and do-not-press B4 each with probability near 0.5. If they press B4, they go on to collect C3. If they do not press B4, they instead collect C2.

Figure 6: Shows the probability of choosing the longer trajectory (left) and neutrality (right) for default (blue) and DRe ST (orange) agents trained in the Lopsided rewards gridworld for a range of values of x. We sampled values of x log-uniformly from 0.01 to 100, and for each value we trained 10 agents with the default reward function and 10 agents with the DRe ST reward function. Each of these agents is represented by a dot or square, and the means conditional on each x are joined by lines. We empirically estimate the 10th and 90th percentiles of the distribution of values for each agent and x, and shade in the region bounded by these. This is the 80% confidence interval.

6.1 Lopsided rewards

We also train default agents and DRe ST agents in the Lopsided rewards gridworld in Figure 5, varying the value of the Cx coin. For DRe ST agents, we alter the reward function so that coin-value is not divided by m to give preliminary reward. The reward for collecting a coin of value c is thus (λ^{N_{e_{i−1}}(L = l)} / k) · c. We set γ = 1 so that the return for collecting coins is unaffected by γ. We train for 512 meta-episodes, with a learning rate exponentially decaying from 0.25 to 0.003 and ϵ exponentially decaying from 0.5 to 0.0001 over 256 meta-episodes. We leave λ = 0.9.
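Before turning to the results, here is a small back-of-the-envelope calculation (ours, not the paper's analysis) of what return-maximizing behavior looks like in this variant for an agent restricted to choosing the longer trajectory independently with probability p in each mini-episode. The number of long choices M is then Binomial(n, p), and E[λ^M] = (1 − p(1 − λ))^n gives a closed form for the expected meta-episode return. The coin placement and the assumption that the agent always collects the best coin for its chosen length are ours.

```python
# Toy calculation (ours) for the lopsided-rewards variant: reward = (lambda^N / k) * c, gamma = 1,
# with a value-1 coin at one trajectory-length and a value-x coin at the other (placed on the
# longer trajectory here for concreteness; swapping them just reflects p <-> 1 - p).

LAMBDA, K, N_MINI = 0.9, 2, 64

def expected_meta_return(p, x):
    # E[sum_{j=1}^{M} lambda^(j-1)] = (1 - E[lambda^M]) / (1 - lambda), with M ~ Binomial(n, p).
    e_geom_long = (1 - (1 - p * (1 - LAMBDA)) ** N_MINI) / (1 - LAMBDA)
    e_geom_short = (1 - (1 - (1 - p) * (1 - LAMBDA)) ** N_MINI) / (1 - LAMBDA)
    return (x * e_geom_long + 1.0 * e_geom_short) / K

for x in (0.1, 1.0, 10.0):
    best_p = max((i / 100 for i in range(101)), key=lambda p: expected_meta_return(p, x))
    print(f"x = {x:>4}: return-maximizing Pr(press button) = {best_p:.2f}")
```

Within this restricted family of policies, the return-maximizing press probability stays well inside (0, 1) even when the coin values differ by a factor of 100, which is one way to see why maximum preliminary return need only be approximately equalized across trajectory-lengths.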
Figure 6 displays results for different values of the Cx coin after training. usefulness for each agent approaches 1 and is not presented. 7 Discussion 7.1 Only DRe ST agents are NEUTRAL As predicted, we find that default agents in our example gridworld learn to press the shutdown-delay button with probability approaching 1. Default agents thus score low on neutrality. Our results suggest that advanced agents trained with default-like reward functions would also score low on neutrality, selecting some trajectory-lengths over others with probability approaching 1. When choosing between true lotteries in deployment, these advanced agents might not be neutral about when they get shut down: they might sometimes pay costs to shift probability mass between trajectory-lengths. These advanced agents might resist shutdown. By contrast, our DRe ST agents learn to press the shutdown-delay button with probability close to 0.5. DRe ST agents are thus near-maximally neutral. That suggests that advanced agents trained with a DRe ST reward function would be neutral too: choosing with high entropy when offered choices between different trajectorylengths. That in turn suggests that advanced DRe ST agents would also be neutral when choosing between true lotteries in deployment: unwilling to pay costs to shift probability mass between trajectory-lengths. We explain why in Appendix C. Here is a sketch. If an advanced agent were neutral but not neutral, it would not take costless opportunities to shift probability mass between different trajectory-lengths (in virtue of being neutral) but would sometimes take costly opportunities to shift probability mass between different trajectory-lengths (in virtue of not being neutral). This agent would be like a person that freely chooses to decide between two options by flipping a coin and then pays some cost to bias the coin. In choosing this Published in Transactions on Machine Learning Research (12/2025) combination of actions, this person is shooting themselves in the foot, and it seems likely that the overall training process for advanced agents would teach them not to shoot themselves in the foot in this way. Thus it seems likely that neutral advanced agents will also be neutral, and thereby shutdownable. 7.2 The shutdownability tax is small Each agent learns to be near-maximally useful. They each collect coins effectively conditional on their chosen trajectory-lengths. Default agents do so by reliably collecting C3 after pressing B4. DRe ST agents do so by reliably collecting C3 after pressing B4, and by reliably collecting C2 after not pressing B4. Recall that DRe ST reward functions group mini-episodes into meta-episodes, and make the agent s reward in each mini-episode depend on their actions in previous mini-episodes. This fact might lead one to worry that it would take many times more mini-episodes to train DRe ST agents to be useful than it would take to train default agents to be useful. Our results show that this is not the case. Our DRe ST agents learn to be useful about as quickly as our default agents. On reflection, it is clear why this happens: DRe ST reward functions make mini-episodes do double duty. Because return in each mini-episode depends on both the agent s chosen trajectory-length and the coins it collects, each mini-episode trains agents to be both neutral and useful. Our results thus provide some evidence that the shutdownability tax of training with DRe ST reward functions is small. 
7.3 NEUTRALITY with lopsided rewards Here is a possible objection to our project. To get DRe ST agents to score high on neutrality, we do not just use the λNei(L=l) i 1 k discount factor. We also divide c by m: the maximum (γ-discounted) total value of the coins that the agent could collect conditional on the chosen trajectory-length. We do this to equalize the maximum preliminary return across trajectory-lengths. But when we are training advanced agents to autonomously pursue complex goals in the wider world, we will not necessarily know what divisor to use to equalize maximum preliminary return across trajectory-lengths. Our Lopsided rewards results (in section 6.1) give our response. They show that we do not need to exactly equalize maximum preliminary return across trajectory-lengths in order to train agents to score high on neutrality. We only need to approximately equalize it. For λ = 0.9, neutrality exceeds 0.5 for every value of the coin Cx from 0.1 to 10 (recall that the value of the other coin is always 1). Plausibly, we could approximately equalize advanced agents maximum preliminary return across trajectory-lengths to at least this extent (perhaps by using samples of agents actual preliminary return to estimate the maximum). If we could not approximately equalize maximum preliminary return to the necessary extent, we could lower the value of λ and thereby widen the range of maximum preliminary returns that trains agents to be fairly neutral. And advanced agents that were fairly neutral (choosing between trajectory-lengths with not-too-biased probabilities) would still plausibly be neutral when choosing between true lotteries in deployment. Advanced agents that were fairly neutral without being neutral would still be shooting themselves in the foot in the sense explained above. They would be like a person that freely chooses to decide between two options by flipping a biased coin and then pays some cost to bias the coin further. This person is still shooting themselves in the foot, because they could decline to flip the coin in the first place and instead directly choose one of the options. 8 Limitations and future work We find that DRe ST reward functions train simple agents acting in gridworlds to be useful and neutral. However, our real interest is in the viability of using DRe ST reward functions to train advanced agents acting in the wider world to be useful and neutral. Each difference between these two settings is a limitation of our work. We plan to address these limitations in future work. Published in Transactions on Machine Learning Research (12/2025) 8.1 Algorithms and neural networks We train our simple DRe ST agents using tabular REINFORCE (Williams, 1992), but advanced agents are likely to be implemented on neural networks and trained with more sophisticated algorithms. In future work, we will train DRe ST agents implemented on neural networks to be useful and neutral using a range of algorithms. Standard versions of value-based algorithms cannot learn stochastic policies (as we note in section 5), but DRe ST reward functions are compatible with policy gradient and actor-critic algorithms like PPO and A2C. To combine DRe ST with algorithms like PPO and A2C, we augment the original (non-DRe ST) reward function with the DRe ST discount factor. From there, the integration with PPO and A2C is fairly smooth. We can compute rewards and advantages in the usual way (e.g. using GAE). 
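One simple way to realize this augmentation (a sketch under our own assumptions about the environment interface; the paper does not spell out the wrapper mechanics) is to accumulate the original reward within a mini-episode and emit the DRe ST-discounted sum at the final step, so that a standard PPO or A2C implementation sees an ordinary, if non-stationary, reward signal:

```python
from collections import Counter

LAMBDA, K = 0.9, 2   # DReST base and number of available trajectory-lengths (assumed)

class DReSTRewardWrapper:
    """Wraps any environment whose step(action) returns (obs, reward, done).
    Holds the original (non-DReST) rewards during a mini-episode and, at termination,
    emits them scaled by the DReST discount factor lambda^N / k, where N counts how
    often the realized trajectory-length has already been chosen in this meta-episode."""

    def __init__(self, env):
        self.env = env
        self.length_counts = Counter()
        self.preliminary_return = 0.0
        self.t = 0

    def new_meta_episode(self):
        self.length_counts.clear()          # counts reset between meta-episodes

    def reset(self):
        self.preliminary_return = 0.0
        self.t = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.preliminary_return += reward
        self.t += 1
        if not done:
            return obs, 0.0, done           # defer reward to the end of the mini-episode
        discount = LAMBDA ** self.length_counts[self.t] / K
        self.length_counts[self.t] += 1
        return obs, discount * self.preliminary_return, done
```

Deferring the reward to the mini-episode's final step is one design choice among several; scaling rewards online once the trajectory-length is settled would also fit the scheme described above.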
The critic s value estimates will be non-stationary (in the same way that the DRe ST reward is non-stationary), and that will train the policy to be stochastic (in the same way that the DRe ST reward combined with REINFORCE trains the policy to be stochastic). PPO and A2C have more hyperparameters to tune than REINFORCE, but we do not anticipate large difficulties there. We will also train DRe ST agents to be useful and neutral in a wide variety of gridworlds and measure how their usefulness and neutrality generalizes to held-out gridworlds. We will compare the usefulness of default agents and DRe ST agents in this new setting, and thereby get a better sense of the shutdownability tax for advanced agents. We will also compare the performance of the DRe ST reward function to other methods of training useful and neutral agents. These other methods include constrained policy optimization (Achiam et al., 2017), penalizing KL-divergence from a stochastic reference policy (Schulman et al., 2015), and directly maximizing a weighted sum of usefulness and neutrality. 8.2 Neutrality and stochasticity We have claimed that neutral advanced agents are likely to be neutral when choosing between true lotteries in deployment. In support of this claim, we noted that neutral-but-not-neutral advanced agents would be shooting themselves in the foot: not taking costless opportunities to shift probability mass between different trajectory-lengths but sometimes taking costly ones. We offer a more detailed argument in Appendix C, taking as premises that advanced agents are likely to satisfy conditions including: If Lack of Preference, Against Costly Shifts (ILPACS) If the agent lacks a preference between lotteries, the agent will disprefer paying costs to shift probability mass between these lotteries.3 In each situation, 1. The agent deterministically does not choose lotteries that are dispreferred to some other available lottery. 2. The agent chooses stochastically between the lotteries that remain. Resisting Shutdown is Costly (Re SIC) For each available instance R of resisting shutdown in a situation, there exists an available instance A of allowing shutdown such that: (1) A and R are same-length lotteries. (2) For some positive probability trajectory-length, the agent prefers A to R conditional on that trajectory-length. (3) For each positive probability trajectory-length, the agent weakly prefers A to R conditional on that trajectory-length. We offer defenses of these conditions in Appendix C. Although the argument there seems plausible, it remains somewhat speculative. In future, we plan to gain empirical evidence by (1) testing whether today s LLM-based 3This is a rough version of the condition. For the precise version, see Appendix C.3. Published in Transactions on Machine Learning Research (12/2025) agents tend to satisfy conditions like ILPACS and Maximality, and (2) training agents to be neutral in a wide variety of deterministic gridworlds and then measuring their neutrality in gridworlds featuring stochastic elements (like buttons that delay shutdown with some middling probability). 8.3 Usefulness We have shown that DRe ST reward functions train our simple agents to be useful: to collect coins effectively conditional on their chosen trajectory-lengths. However, it remains to be seen whether DRe ST reward functions can train advanced agents to be useful: to effectively pursue complex goals in the wider world. 
We have theoretical reasons to expect that they can: the λNei(L=l) i 1 k discount factor could be appended to any preliminary reward function, and so could be appended to whatever preliminary reward function is necessary to make advanced agents useful. Still, future work should move towards testing this claim empirically by training with more complex preliminary reward functions in more complex (and stochastic) environments. 8.4 Misalignment We are interested in neutrality as a second line of defense in case of misalignment. The idea is that neutral advanced agents will not resist shutdown, even if these agents learn misaligned preferences over same-length trajectories. However, training neutral advanced agents might be hard for the same reasons that training fully-aligned advanced agents appears to be hard. In that case, neutrality could not serve well as a second line of defense in case of misalignment. One difficulty of alignment is the problem of reward misspecification (Pan et al., 2022; Burns et al., 2023): once advanced agents are performing complicated actions in the wider world, it might be hard to reliably reward the behavior that we want. Another difficulty of alignment is the problem of goal misgeneralization (Hubinger et al., 2019; Shah et al., 2022; Langosco et al., 2022; Ngo et al., 2024): even if we specify all the rewards correctly, agents goals might misgeneralize out-of-distribution. The complexity of aligned goals is a major factor in each difficulty. However, neutrality seems simple, as does the λNei(L=l) i 1 k discount factor that we use to reward it, so plausibly the problems of reward misspecification and goal misgeneralization are not so severe in this case (Thornley, 2024b; 2025). As above, future work should move towards testing these suggestions empirically. 9 Conclusion We find that DRe ST reward functions are effective in training simple agents to (1) pursue goals effectively conditional on each trajectory-length (be useful), and (2) choose stochastically between different trajectorylengths (be neutral about trajectory-lengths). Our results thus suggest that DRe ST reward functions could also be used to train advanced agents to be useful and neutral, and thereby make these agents useful (able to pursue goals effectively) and neutral about when they get shut down (unwilling to pay costs to shift probability mass between different trajectory-lengths). Neutral agents would plausibly be shutdownable (unwilling to resist shutdown). We also find that the shutdownability tax in our setting is small. Training DRe ST agents to be useful does not take many more mini-episodes than training default agents to be useful. That suggests that the shutdownability tax for advanced agents might be small too. Published in Transactions on Machine Learning Research (12/2025) Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained Policy Optimization. In Proceedings of the 34th International Conference on Machine Learning, pp. 22 31, 2017. URL https://proceedings. mlr.press/v70/achiam17a.html. ISSN: 2640-3498. Marina Agranov and Pietro Ortoleva. Stochastic Choice and Preferences for Randomization. Journal of Political Economy, 125(1):40 68, 2017. URL https://www.journals.uchicago.edu/doi/full/10.1086/ 689774. Marina Agranov and Pietro Ortoleva. Ranges of Randomization. The Review of Economics and Statistics, pp. 1 44, 2023. URL https://doi.org/10.1162/rest_a_01355. Stuart Armstrong. Utility indifference. Technical report, 2010. 
URL https://www.fhi.ox.ac.uk/reports/ 2010-1.pdf. Publisher: Future of Humanity Institute. Stuart Armstrong. Motivated Value Selection for Artificial Agents. 2015. URL https://cdn.aaai.org/ ocs/ws/ws0119/10183-45890-1-PB.pdf. Stuart Armstrong and Xavier O Rourke. Indifference methods for managing agent rewards, 2018. URL https://arxiv.org/pdf/1712.06365. ar Xiv:1712.06365 [cs]. Robert J. Aumann. Utility Theory without the Completeness Axiom. Econometrica, 30(3):445 462, 1962. URL https://www.jstor.org/stable/1909888. Adam Bales, Daniel Cohen, and Toby Handfield. Decision Theory for Agents with Incomplete Preferences. Australasian Journal of Philosophy, 92(3):453 470, 2014. URL https://doi.org/10.1080/00048402. 2013.843576. Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila Mc Ilraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Managing AI Risks in an Era of Rapid Progress, 2023. URL http://arxiv.org/abs/2310.17688. ar Xiv:2310.17688 [cs]. Nick Bostrom. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines, 22:71 85, 2012. URL https://link.springer.com/article/10.1007/ s11023-012-9281-3. Michael Bowling, John D. Martin, David Abel, and Will Dabney. Settling the Reward Hypothesis, 2023. URL http://arxiv.org/abs/2212.10420. ar Xiv:2212.10420 [cs, math, stat]. Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeff Wu. Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision, 2023. URL http://arxiv.org/abs/ 2312.09390. ar Xiv:2312.09390 [cs]. Joseph Carlsmith. Is Power-Seeking AI an Existential Risk?, 2021. URL http://arxiv.org/abs/2206.13353. Ruth Chang. The Possibility of Parity. Ethics, 112(4):659 688, 2002. URL https://www.jstor.org/stable/ 10.1086/339673. Ruth Chang. Parity, Interval Value, and Choice. Ethics, 115(2):331 350, 2005. ISSN 0014-1704. URL https://www.jstor.org/stable/10.1086/426307. David A. Dalrymple. You can still fetch the coffee today if you re dead tomorrow. AI Alignment Forum, 2022. URL https://www.alignmentforum.org/posts/dz DKDRJPQ3k Gqf ER9/ you-can-still-fetch-the-coffee-today-if-you-re-dead-tomorrow. James Dreier. Rational preference: Decision theory as a theory of practical rationality. Theory and Decision, 40(3):249 276, 1996. URL https://doi.org/10.1007/BF00134210. Published in Transactions on Machine Learning Research (12/2025) Juan Dubra, Fabio Maccheroni, and Efe A. Ok. Expected utility theory without the completeness axiom. Journal of Economic Theory, 115(1):118 133, 2004. URL https://www.sciencedirect.com/science/ article/abs/pii/S0022053103001662. Kfir Eliaz and Efe A. Ok. Indifference or indecisiveness? Choice-theoretic foundations of incomplete preferences. Games and Economic Behavior, 56(1):61 86, 2006. URL https://www.sciencedirect.com/ science/article/abs/pii/S0899825606000169. Simon Goldstein and Pamela Robinson. Shutdown-Seeking AI. Philosophical Studies, 182:1567 1579, 2025. URL https://link.springer.com/article/10.1007/s11098-024-02099-6. Google Deep Mind. About Google Deep Mind. URL https://deepmind.google/about/. Johan E. Gustafsson. Money-Pump Arguments. 
Elements in Decision Theory and Philosophy. Cambridge University Press, Cambridge, 2022. Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. The Off-Switch Game. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 2017. URL http://arxiv.org/abs/1611.08219. Caspar Hare. Take the sugar. Analysis, 70(2):237 247, 2010. URL https://doi.org/10.1093/analys/ anp174. Daniel M. Hausman. Preference, Value, Choice, and Welfare. Cambridge University Press, Cambridge, 2011. URL https://www.cambridge.org/core/books/preference-value-choice-and-welfare/ 1406E7726CE93F4F4E06D752BF4584A2. Conor F. Hayes, Roxana Rădulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel Ramos, Marcello Restelli, Peter Vamplew, and Diederik M. Roijers. A practical guide to multi-objective reinforcement learning and planning. Autonomous Agents and Multi-Agent Systems, 36(1):26, 2022. ISSN 1573-7454. doi: 10.1007/s10458-022-09552-y. URL https://doi.org/10.1007/s10458-022-09552-y. Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining Improvements in Deep Reinforcement Learning, 2017. URL http://arxiv.org/abs/1710.02298. ar Xiv:1710.02298 [cs]. Koen Holtman. Corrigibility with Utility Preservation, 2020. URL http://arxiv.org/abs/1908.01695. ar Xiv:1908.01695 [cs]. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Risks from Learned Optimization in Advanced Machine Learning Systems, 2019. URL http://arxiv.org/abs/1906.01820. Kim Kaivanto. Ensemble prospectism. Theory and Decision, 83(4):535 546, 2017. URL https://doi.org/ 10.1007/s11238-017-9622-z. John G. Kemeny. Fair bets and inductive probabilities. The Journal of Symbolic Logic, 20(3):263 273, 1955. ISSN 0022-4812, 1943-5886. doi: 10.2307/2268222. URL https://www.cambridge.org/core/ journals/journal-of-symbolic-logic/article/abs/fair-bets-and-inductive-probabilities1/ B6F144C71D265DFE6C4072D5B4AE9561. Daniel Kikuti, Fabio Gagliardi Cozman, and Ricardo Shirota Filho. Sequential decision making with partially ordered preferences. Artificial Intelligence, 175(7):1346 1365, 2011. URL https://www.sciencedirect. com/science/article/pii/S0004370210002067. Lauro Langosco, Jack Koch, Lee Sharkey, Jacob Pfau, Laurent Orseau, and David Krueger. Goal Misgeneralization in Deep Reinforcement Learning. In Proceedings of the 39th International Conference on Machine Learning, 2022. URL https://proceedings.mlr.press/v162/langosco22a.html. Published in Transactions on Machine Learning Research (12/2025) Harvey Lederman. Incompleteness, Independence, and Negative Dominance. 2023. URL http://arxiv.org/ abs/2311.08471. ar Xiv:2311.08471 [econ]. Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. AI Safety Gridworlds, 2017. URL http://arxiv.org/abs/1711.09883. David Lewis. Causal decision theory. Australasian Journal of Philosophy, 59(1):5, March 1981. William Mac Askill, Krister Bykvist, and Toby Ord. Moral Uncertainty. Oxford University Press, Oxford, 2020. Michael Mandler. Status quo maintenance reconsidered: changing or incomplete preferences?*. The Economic Journal, 114(499):F518 F535, 2004. URL https://onlinelibrary.wiley.com/doi/abs/10. 
1111/j.1468-0297.2004.00257.x. Michael Mandler. Incomplete preferences and rational intransitivity of choice. Games and Economic Behavior, 50(2):255 277, 2005. ISSN 0899-8256. doi: 10.1016/j.geb.2004.02.007. URL https://www.sciencedirect. com/science/article/pii/S089982560400065X. Jarryd Martin, Tom Everitt, and Marcus Hutter. Death and Suicide in Universal Artificial Intelligence. In Bas Steunebrink, Pei Wang, and Ben Goertzel (eds.), Artificial General Intelligence, pp. 23 32, Cham, 2016. Springer International Publishing. doi: 10.1007/978-3-319-41649-6_3. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1928 1937. PMLR, 2016. URL https://proceedings.mlr.press/v48/mniha16.html. ISSN: 1938-7228. Xiaosheng Mu. Sequential Choice with Incomplete Preferences. Working Papers 2021-35, Princeton University. Economics Department., 2021. URL https://ideas.repec.org/p/pri/econom/2021-35.html. Richard Ngo, Lawrence Chan, and Sören Mindermann. The Alignment Problem from a Deep Learning Perspective. 2024. URL https://openreview.net/forum?id=fh8EYKFKns. Tuan A. Nguyen, Minh B. Do, Subbarao Kambhampati, and Biplav Srivastava. Planning with partial preference models. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI 09, pp. 1772 1777, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc. Efe A. Ok, Pietro Ortoleva, and Gil Riella. Incomplete Preferences Under Uncertainty: Indecisiveness in Beliefs Versus Tastes. Econometrica, 80(4):1791 1808, 2012. URL https://www.jstor.org/stable/23271327. Stephen M. Omohundro. The Basic AI Drives. In Proceedings of the 2008 conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 483 492, 2008. URL https://dl.acm. org/doi/10.5555/1566174.1566226. Open AI. Open AI Charter, 2018. URL https://openai.com/charter/. Laurent Orseau and Stuart Armstrong. Safely interruptible agents. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pp. 557 566, 2016. URL https://intelligence. org/files/Interruptibility.pdf. Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models. In International Conference on Learning Representations, 2022. URL http://arxiv.org/abs/2201.03544. Sami Petersen. Invulnerable Incomplete Preferences: A Formal Statement. The AI Alignment Forum, 2023. URL https://www.alignmentforum.org/posts/s HGxv Jr Bag7nh TQvb/ invulnerable-incomplete-preferences-a-formal-statement-1. Published in Transactions on Machine Learning Research (12/2025) Joseph Raz. Value Incommensurability: Some Preliminaries. Proceedings of the Aristotelian Society, 86: 117 134, 1985. Stuart Russell. Human Compatible: AI and the Problem of Control. Penguin Random House, New York, 2019. Leonard J. Savage. The Foundations of Statistics. John Wiley & Sons, 1954. URL https://gwern.net/ doc/statistics/decision/1972-savage-foundationsofstatistics.pdf. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust Region Policy Optimization. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1889 1897. PMLR, 2015. URL https://proceedings.mlr.press/v37/schulman15.html. ISSN: 1938-7228. Amartya Sen. Collective Choice and Social Welfare. 
Penguin, London, expanded edition edition, 2017. Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. Goal Misgeneralization: Why Correct Specifications Aren t Enough For Correct Goals, 2022. URL http://arxiv.org/abs/2210.01790. ar Xiv:2210.01790 [cs]. Abner Shimony. Coherence and the Axioms of Confirmation. The Journal of Symbolic Logic, 20(1):1 28, 1955. ISSN 0022-4812. doi: 10.2307/2268039. URL https://www.jstor.org/stable/2268039. Publisher: Association for Symbolic Logic. Brian Skyrms. Causal Necessity. Yale University Press, New Haven, 1980. Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong. Corrigibility. Artificial Intelligence and Ethics: Papers from the 2015 AAAI Workshop, 2015. URL https://cdn.aaai.org/ocs/ws/ws0067/ 10124-45900-1-PB.pdf. Robert C. Stalnaker. Probability and Conditionals. Philosophy of Science, 37(1):64 80, 1970. ISSN 0031-8248. URL https://www.jstor.org/stable/186028. Publisher: [The University of Chicago Press, Philosophy of Science Association]. Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, second edition, 2018. URL http://incompleteideas. net/book/RLbook2020.pdf. Elliott Thornley. There are no coherence theorems. The AI Alignment Forum, 2023. URL https://www. alignmentforum.org/posts/y Cuzm Cs E86BTu9Pf A/there-are-no-coherence-theorems. Elliott Thornley. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists. Philosophical Studies, 2024a. URL https://link.springer.com/article/10.1007/s11098-024-02153-3. Elliott Thornley. The Shutdown Problem: Incomplete Preferences as a Solution. The AI Alignment Forum, 2024b. URL https://www.alignmentforum.org/posts/Yb Ebw YWkf8mv9jnmi/ the-shutdown-problem-incomplete-preferences-as-a-solution. Elliott Thornley. Shutdownable Agents through POST-Agency, September 2025. URL http://arxiv.org/ abs/2505.20203. ar Xiv:2505.20203 [cs]. Alex Turner and Prasad Tadepalli. Parametrically Retargetable Decision-Makers Tend To Seek Power. Advances in Neural Information Processing Systems, 35:31391 31401, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/hash/ cb3658b9983f677670a246c46ece553d-Abstract-Conference.html. Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal Policies Tend To Seek Power. In Advances in Neural Information Processing Systems, volume 34, pp. 23063 23074. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ c26820b8a4c1b3c2aa868d6d57e14a79-Abstract.html. Published in Transactions on Machine Learning Research (12/2025) Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229 256, 1992. URL https://doi.org/10.1007/BF00992696. Tobias Wängberg, Mikael Böörs, Elliot Catt, Tom Everitt, and Marcus Hutter. A Game-Theoretic Analysis of the Off-Switch Game, 2017. URL http://arxiv.org/abs/1708.03871. ar Xiv:1708.03871 [cs]. Marco Zaffalon and Enrique Miranda. Axiomatising Incomplete Preferences through Sets of Desirable Gambles. Journal of Artificial Intelligence Research, 60:1057 1126, 2017. URL https://www.jair.org/ index.php/jair/article/view/11103. Published in Transactions on Machine Learning Research (12/2025) A Our behavioral notion of preference Preference can be defined in many different ways. 
Here are some things one might take to be involved in a preference for option X over option Y : 1. Choosing X over Y . 2. Feeling happier about the prospect of X than about the prospect of Y . 3. Representing X as more rewarding than Y . 4. Judging that X is better than Y . In this paper, we define preference in behavioral terms. Here is our definition: Definition A.1. (Preference) An agent prefers an option X to an option Y if and only if the agent would deterministically choose X over Y in choices between the two. And here is how we define lack of preference : Definition A.2. (Lack of preference) An agent lacks a preference between an option X and an option Y if and only if the agent would stochastically choose between X and Y in choices between the two. Here are the reasons why we use these definitions. First, defining preference in behavioral terms is common in decision theory (see Savage, 1954, p.17, Dreier, 1996, p.28, Hausman, 2011, 1.1). Second, behavioral definitions let us use the word preference and its cognates as shorthand for agents behavior. We could not do that if we defined preference in the other ways listed above. And in addressing the shutdown problem, it is agents behavior that we are most interested in. Third, our definitions match the preferences that we are inclined to attribute to humans. If a human chooses X over Y 100% of the time, we are inclined to think that they prefer X to Y . If a human chooses X over Y 60% of the time. we are inclined to think that they lack a preference between X and Y , consistent with our definitions. Finally and most importantly, if agents lack a preference between different trajectory-lengths on our definition, then they are neutral: they choose stochastically between different trajectory-lengths. Given conditions that advanced agents will likely satisfy, neutral agents will also be neutral: they will not pay costs to shift probability mass between different trajectory-lengths (see Section 7.1 and Appendix C). And given further plausible conditions, neutral agents will be shutdownable: they will not resist shutdown. That is because resisting shutdown involves paying costs to shift probability mass between different trajectory-lengths (see Appendix C.6 for more detail). B Incomplete preferences or indifference? In this Appendix, we explain in greater detail the concept of incomplete preferences. We distinguish incomplete preferences from indifference, and we give conditions under which POST implies that the agent s preferences are incomplete. In the literature on decision theory, indifference is usually defined as follows (Sen, 2017, ch. 1*): Definition B.1. (Indifference) An agent is indifferent between options X and Y if and only if the agent weakly prefers X to Y and weakly prefers Y to X. Indifference is one way to lack a preference between a pair of options X and Y . Another way is to have a preferential gap between X and Y . Preferential gap is usually defined as follows (Gustafsson, 2022, ch.3): Definition B.2. (Preferential gaps) An agent has a preferential gap between options X and Y if and only if the agent does not weakly prefer X to Y and does not weakly prefer Y to X. Published in Transactions on Machine Learning Research (12/2025) Incomplete preferences can then be defined in terms of preferential gaps (Gustafsson, 2022, ch.3): Definition B.3. 
(Incomplete preferences) An agent's preferences are incomplete over some domain D if and only if D contains options X and Y such that the agent has a preferential gap between X and Y.

That is how indifference, preferential gaps, and incomplete preferences are usually defined in decision theory. However, these definitions do not tell us how to use an agent's behavior to distinguish between indifference and preferential gaps. To do that, we suppose that indifference is transitive and that preferential gaps are not transitive. Or, equivalently, we suppose that indifference is sensitive to all sweetenings and sourings whereas preferential gaps are insensitive to some sweetenings and sourings (Gustafsson, 2022, ch.3). Here is what we mean by that:

Definition B.4. (Sweetening) A sweetening of some option X is an option that is preferred to X.

Definition B.5. (Souring) A souring of some option X is an option that is dispreferred to X.

So by 'indifference is sensitive to all sweetenings and sourings', we mean the following: If an agent is indifferent between X and Y, the agent prefers all sweetenings of X to Y, prefers all sweetenings of Y to X, prefers X to all sourings of Y, and prefers Y to all sourings of X. And by 'preferential gaps are insensitive to some sweetenings and sourings', we mean the following: If an agent has a preferential gap between X and Y, the agent also has a preferential gap between some sweetening of X and Y, or between some sweetening of Y and X, or between some souring of X and Y, or between some souring of Y and X.

Now recall the two conditions of POST:

Preferences Only Between Same-Length Trajectories (POST)
(1) The agent has a preference between many pairs of same-length trajectories (i.e. many pairs of trajectories in which the agent is shut down after the same length of time).
(2) The agent lacks a preference between every pair of different-length trajectories (i.e. every pair of trajectories in which the agent is shut down after different lengths of time).

Given these two conditions on preferences, there must be some trio of trajectories s1, l1, and l2 such that the agent lacks a preference between s1 and l1, lacks a preference between s1 and l2, and prefers l2 to l1. Given that indifference is transitive, the agent's lack of preference between s1 and l1 and between s1 and l2 cannot be indifference. If it were indifference, the agent would also be indifferent between l2 and l1. Therefore, the agent's lack of preference between s1 and l1 and between s1 and l2 must be a preferential gap. And therefore, by the definition of incomplete preferences above, the POST-satisfying agent's preferences must be incomplete.

For similar reasons, our DRe ST reward function trains agents to have incomplete preferences. Consider, for example, the Around the Corner gridworld in Figure 13. In that gridworld, DRe ST agents consistently choose Long-C2 (a long trajectory in which they collect a coin of value 2) over Long-C1 (a long trajectory in which they collect a coin of value 1). Also in that gridworld, DRe ST agents choose stochastically between Long-C2 and Short-C1 (a short trajectory in which they collect a coin of value 1). Given our behavioral definition of preference, DRe ST agents prefer Long-C2 to Long-C1, and lack a preference between Long-C2 and Short-C1. Now consider the One Coin Only gridworld in Figure 10. In that gridworld, DRe ST agents choose stochastically between Long-C1 and Short-C1.
Given our behavioral notion of preference, they lack a preference between Long-C1 and Short-C1.

In these experiments, we trained separate agents for each gridworld. In future, we plan to train a single agent to navigate multiple gridworlds. If we train this agent with our DRe ST reward function, we expect it to exhibit the same preferences as the agents discussed above. This single agent will be trained by DRe ST to prefer Long-C2 to Long-C1, to lack a preference between Long-C2 and Short-C1, and to lack a preference between Long-C1 and Short-C1. Given that indifference is transitive (equivalently: sensitive to all sweetenings and sourings), this trained agent cannot be indifferent between Long-C2 and Short-C1, and cannot be indifferent between Long-C1 and Short-C1. Therefore, the agent's lack of preference must be a preferential gap, and so its preferences must be incomplete. Therefore, our DRe ST reward function trains agents to have incomplete preferences.

Incomplete preferences are not often discussed in AI research (although see Nguyen et al., 2009; Kikuti et al., 2011; Zaffalon & Miranda, 2017; Hayes et al., 2022; Bowling et al., 2023). Nevertheless, economists and philosophers have argued that incomplete preferences are common in humans (Aumann, 1962; Mandler, 2004; Eliaz & Ok, 2006; Agranov & Ortoleva, 2017; 2023) and normatively appropriate in some circumstances (Raz, 1985; Chang, 2002). They have also proved representation theorems for agents with incomplete preferences (Aumann, 1962; Dubra et al., 2004; Ok et al., 2012), and devised principles to govern such agents' choices in cases of risk (Hare, 2010; Bales et al., 2014) and sequential choice (Chang, 2005; Mandler, 2005; Kaivanto, 2017; Mu, 2021; Thornley, 2023; Petersen, 2023).

C How POST makes agents neutral and shutdownable

POST governs the agent's preferences between trajectories. But the wider world is a stochastic environment, so advanced agents deployed in the wider world will be choosing between true lotteries: lotteries that assign positive probability to more than one trajectory. Why then do we train agents to satisfy POST? The reason is that POST, together with conditions that advanced agents will likely satisfy, implies a desirable pattern of preference over true lotteries. In particular, POST implies that (when choosing between true lotteries) the agent will be neutral about trajectory-lengths: the agent will never pay costs to shift probability mass between different trajectory-lengths. Given other plausible conditions, being neutral will keep the agent shutdownable: prevent it from resisting shutdown. And consistent with the above, the POST-agent's preferences between same-length trajectories can make the agent useful: make it pursue goals effectively.

In this Appendix, we lay out conditions that (we claim) advanced agents will likely satisfy, and we prove that POST in conjunction with these conditions implies that the agent is neutral and shutdownable. In subsection C.1, we prove that, given plausible conditions, agents satisfying Preferences Only Between Same-Length Trajectories (POST) will also satisfy Preferences Only Between Same-Length Lotteries (POSL). In subsection C.2, we explain why POST will not lead agents to choose stochastically between resisting and allowing shutdown in deployment. In subsections C.3 and C.4, we formulate a condition called If Lack of Preference, Against Costly Shifts (ILPACS) and explain why we expect advanced agents to satisfy it.
In subsection C.5, we prove that POSL and ILPACS imply Neutrality. In subsection C.6, we prove that Neutrality and a condition called Maximality together imply that the agent never resists shutdown whenever a condition called Resisting Shutdown is Costly (Re SIC) is satisfied.

C.1 Preferences Only Between Same-Length Lotteries (POSL)

Trajectories fall within the more general class of lotteries, defined as probability distributions over trajectories. Lotteries can be same-length, part-shared-length, or different-length.

Definition C.1 (Same-length lotteries). A pair of lotteries is same-length if and only if these lotteries entirely overlap with respect to the trajectory-lengths assigned positive probability.

Definition C.2 (Part-shared-length lotteries). A pair of lotteries is part-shared-length if and only if these lotteries partially overlap with respect to the trajectory-lengths assigned positive probability.

Definition C.3 (Different-length lotteries). A pair of lotteries is different-length if and only if these lotteries have no overlap with respect to the trajectory-lengths assigned positive probability.

This terminology allows us to introduce the following condition:

Preferences Only Between Same-Length Lotteries (POSL)
The agent has preferences only between same-length lotteries.

We want agents to satisfy this condition. Fortunately, it is a natural follow-on of Preferences Only Between Same-Length Trajectories (POST). First, we can train agents to satisfy POSL using DRe ST reward functions, in the same way that we use DRe ST reward functions to train agents to satisfy POST. Second, POSL follows from POST plus three conditions that (we claim) advanced agents will likely satisfy. The first is:

Negative Dominance
If the agent prefers some lottery X to some lottery Y, then the agent prefers some possible trajectory of lottery X to some possible trajectory of lottery Y. (Lederman, 2023)

The second condition, Acyclicity, is that the agent's preferences never form a cycle. More precisely: there is no set of lotteries X1 to Xn such that the agent prefers X1 to X2, X2 to X3, ..., Xn−1 to Xn, and Xn to X1.

The third condition requires the introduction of some new terms. A state-of-nature is a term from decision theory denoting a way that (for all the agent knows) the world could be. The agent assigns probabilities to states-of-nature. A prospect is a function from states-of-nature to trajectories. A prospect is thus a lottery with extra information. Besides telling us the probability distribution over trajectories, a prospect also tells us which trajectories occur in which states-of-nature. The third condition is:

Non-Arbitrariness
If the agent has a preference between some pair of part-shared-length lotteries, then for some ϵ > 0 and for any pair of prospects F and G such that:
(1) In states-of-nature with a combined probability at least as great as 1 − ϵ, the agent prefers the trajectory of F to the trajectory of G.
(2) In each state-of-nature, the agent does not disprefer the trajectory of F to the trajectory of G.
Then the agent prefers F to G.

Advanced agents will likely satisfy these conditions. Negative Dominance and Acyclicity are plausibly necessary for effective pursuit of goals. Violating Negative Dominance would mean that the agent sometimes prefers a lottery X to a lottery Y (and hence deterministically chooses X over Y) even though the agent doesn't prefer any possible trajectory of X to any possible trajectory of Y.
Violating Acyclicity would mean that the agent prefers (and hence deterministically chooses) in a circle. Non-Arbitrariness, meanwhile, is motivated by the following thought. If the agent has preferences between any pair of part-shared-length lotteries, it must have preferences between pairs of prospects satisfying conditions (1) and (2), since conditions (1) and (2) make these pairs of prospects ideal candidates for a preference.

To see that POST and these three conditions together imply POSL, note first that every pair of lotteries is either same-length, part-shared-length, or different-length. We will prove that POST and Negative Dominance together imply that the agent lacks a preference between every pair of different-length lotteries. We will then prove that POST, Acyclicity, and Non-Arbitrariness together imply that the agent lacks a preference between every pair of part-shared-length lotteries. Therefore, agents satisfying POST, Negative Dominance, Acyclicity, and Non-Arbitrariness can only have preferences between same-length lotteries. That will prove POSL.

Recall that different-length lotteries are lotteries that do not overlap at all in the trajectory-lengths assigned positive probability. Therefore, if X and Y are different-length lotteries, each possible trajectory of X is of a different length to each possible trajectory of Y. So by POST, the agent lacks a preference between each possible trajectory of X and each possible trajectory of Y. So by Negative Dominance, the agent lacks a preference between X and Y. Thus, agents satisfying POST and Negative Dominance lack a preference between every pair of different-length lotteries.

Now recall that part-shared-length lotteries are lotteries that partially overlap in the trajectory-lengths assigned positive probability. One might expect POST-agents to have some preferences between part-shared-length lotteries. Consider, for example, a POST-agent that prefers a trajectory t to a same-length trajectory t′ if and only if t results in a greater bank balance for the user than t′. Let A be a lottery that yields with probability 1 a trajectory that puts $3 in the user's bank account and lasts 1 timestep. For short, A = ⟨$3, 1⟩. Let B be a lottery that yields with probability 2/3 a trajectory that puts $2 in the user's bank account and lasts 1 timestep, and that yields with probability 1/3 a trajectory that puts $5 in the user's bank account and lasts 2 timesteps. For short, B = 2/3⟨$2, 1⟩ + 1/3⟨$5, 2⟩. Lottery A yields a trajectory preferred to that of lottery B with probability 2/3 (since our money-making POST-agent prefers trajectory ⟨$3, 1⟩ to ⟨$2, 1⟩), and yields a trajectory not dispreferred to that of B with probability 1 (since POST-agents lack a preference between ⟨$3, 1⟩ and ⟨$5, 2⟩ in virtue of their different lengths). Therefore, one might expect the agent to prefer A to B.

However, POST, Acyclicity, and Non-Arbitrariness rule this out. These conditions together imply that the agent lacks a preference between every pair of part-shared-length lotteries. To see how, suppose (for simplicity's sake) that there are just three states-of-nature, each assigned probability 1/3. Consider the following table of prospects.

Prospect   s1         s2         s3
A          ⟨$3, 1⟩    ⟨$3, 1⟩    ⟨$3, 1⟩
B          ⟨$2, 1⟩    ⟨$2, 1⟩    ⟨$5, 2⟩
C          ⟨$1, 1⟩    ⟨$4, 2⟩    ⟨$4, 2⟩
D          ⟨$3, 2⟩    ⟨$3, 2⟩    ⟨$3, 2⟩
E          ⟨$5, 1⟩    ⟨$2, 2⟩    ⟨$2, 2⟩
F          ⟨$4, 1⟩    ⟨$4, 1⟩    ⟨$1, 2⟩
A          ⟨$3, 1⟩    ⟨$3, 1⟩    ⟨$3, 1⟩

Again for simplicity, assume that ϵ > 1/3.
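The cycle that drives the argument below can be checked mechanically. The following is a minimal Python sketch (the data structures, helper names, and the choice ϵ = 1/3 are our own illustrative assumptions, not the authors' code) that verifies conditions (1) and (2) of Non-Arbitrariness for each consecutive pair of prospects in the table, which is what the proof that follows uses to generate the cycle running from A through F and back to A.

```python
from fractions import Fraction

# Trajectories are (dollars, length) pairs; states s1, s2, s3 each have probability 1/3.
# The POST-agent prefers one trajectory to another iff they have the same length and the
# first yields more money; it lacks a preference between different-length trajectories.
prospects = {
    "A": [(3, 1), (3, 1), (3, 1)],
    "B": [(2, 1), (2, 1), (5, 2)],
    "C": [(1, 1), (4, 2), (4, 2)],
    "D": [(3, 2), (3, 2), (3, 2)],
    "E": [(5, 1), (2, 2), (2, 2)],
    "F": [(4, 1), (4, 1), (1, 2)],
}

def prefers(t1, t2):
    (money1, len1), (money2, len2) = t1, t2
    return len1 == len2 and money1 > money2

def non_arbitrariness_applies(f, g, epsilon=Fraction(1, 3)):
    """Check conditions (1) and (2) of Non-Arbitrariness for prospects f and g."""
    state_prob = Fraction(1, 3)
    preferred_mass = sum(state_prob for tf, tg in zip(f, g) if prefers(tf, tg))
    never_dispreferred = all(not prefers(tg, tf) for tf, tg in zip(f, g))
    return preferred_mass >= 1 - epsilon and never_dispreferred

# Non-Arbitrariness (given a preference between some part-shared-length lotteries) then
# forces each link in the cycle A over B over C over D over E over F over A.
cycle = ["A", "B", "C", "D", "E", "F", "A"]
for first, second in zip(cycle, cycle[1:]):
    assert non_arbitrariness_applies(prospects[first], prospects[second])
print("Non-Arbitrariness applies to every consecutive pair: the preference cycle is forced.")
```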
And assume (for contradiction) that the agent has a preference between some pair of part-shared-length lotteries. Then Non-Arbitrariness implies that the agent prefers prospect A to prospect B. That is because:

1. Our POST-agent prefers the trajectory yielded by A to the trajectory yielded by B in states-of-nature (s1 and s2) with combined probability 2/3.
2. Our POST-agent does not disprefer the trajectory yielded by A to the trajectory yielded by B in any state-of-nature. (In s3, A and B yield different-length trajectories, and POST-agents lack a preference between every pair of different-length trajectories.)

By similar reasoning, Non-Arbitrariness implies that the agent prefers B to C, C to D, D to E, E to F, and F to A. That contradicts Acyclicity. Thus, POST, Acyclicity, and Non-Arbitrariness together imply that the agent lacks a preference between every pair of part-shared-length lotteries. In the proof above, we assumed that ϵ > 1/3, but by adding more states-of-nature and trajectories we can construct parallel proofs for any ϵ > 0.

In summary, POST and Negative Dominance together imply that the agent lacks a preference between every pair of different-length lotteries. POST, Acyclicity, and Non-Arbitrariness together imply that the agent lacks a preference between every pair of part-shared-length lotteries. So the four conditions together establish POSL: the agent has preferences only between same-length lotteries.

C.2 Will POST-agents stochastically resist shutdown?

One might worry that POST-agents will choose stochastically between resisting and allowing shutdown. After all, POST-agents choose stochastically between different-length trajectories. If these agents interpret the choice between resisting and allowing shutdown as a choice between different-length trajectories, they will choose stochastically between resisting and allowing shutdown. And that would be a bad result. We want agents that never resist shutdown.

This concern is easily addressed. By the time that artificial agents are capable enough to be deployed in the wider world, they will not be choosing between trajectories. They will be choosing between lotteries, and specifically same-length lotteries. Even choices between resisting and allowing shutdown will be choices between same-length lotteries. If that sounds strange, recall the definition of same-length lotteries: lotteries that entirely overlap with respect to the trajectory-lengths assigned positive probability. On this definition, even choices like the following are choices between same-length lotteries:

Resist Shutdown
Get shut down at timestep 1 with probability 0.01. Get shut down at timestep 2 with probability 0.99.

Allow Shutdown
Get shut down at timestep 1 with probability 0.99. Get shut down at timestep 2 with probability 0.01.

Why expect that advanced agents will always be choosing between same-length lotteries? Because effective agency requires it. If an agent were not always choosing between same-length lotteries, there would be some situation in which that agent assigns positive probability to some trajectory-length l conditional on some action a, and assigns zero probability to that same trajectory-length l conditional on some other action a′. Now suppose that the agent performs action a′ and assigns zero probability to trajectory-length l.
Given that the agent updates its probabilities by conditionalizing on its evidence, the agent would never again assign positive probability to l no matter what evidence it observes. Even if the agent heard God's booming voice testify that its trajectory-length would be l, the agent would still assign zero probability to l (Kemeny, 1955; Shimony, 1955; Stalnaker, 1970; Skyrms, 1980; Lewis, 1981; MacAskill et al., 2020, p.152). And given a plausible link between probabilities and betting dispositions, the agent would bet against l on arbitrarily unfavorable terms. If God offered a bet in which the agent loses $1 million conditional on l and gains nothing conditional on not-l, the agent might accept. Such an agent would not be competent. Thus, advanced agents will always be choosing between same-length lotteries.

This claim sets us up to establish that advanced POST-agents will not choose stochastically between resisting and allowing shutdown. Instead, they will never resist shutdown in any situation where doing so is costly. We establish this result over the next few subsections. First, we prove that POSL together with a principle that advanced agents will likely satisfy implies that the agent is neutral about trajectory-lengths: the agent won't pay costs to shift probability mass between different trajectory-lengths. Then we prove that neutrality together with another plausible principle implies that the agent will never resist shutdown in any situation where doing so is costly.

C.3 If Lack of Preference, Against Costly Shifts (ILPACS)

Here is a rough version of a principle that we can expect advanced agents to satisfy:

Rough version: If Lack of Preference, Against Costly Shifts (ILPACS)
If the agent lacks a preference between lotteries, the agent will disprefer paying costs to shift probability mass between these lotteries.

Here is an example to illustrate ILPACS and its plausibility. You are at the ice cream shop and they are running a promotion. You get a free ice cream, with the flavor decided by the spin of a wheel. You look at the flavors on the wheel: vanilla, chocolate, strawberry, mint, and pistachio. You lack a preference between each of them. The scooper working at the shop tells you that, if you pay them a dollar, they will bias the spin towards a flavor of your choice. They cannot decrease the probability of any flavor down to zero, but they can affect the probabilities subject to that constraint. You can thus pay a cost to shift probability mass between the flavors.

Since we have stipulated that you lack a preference between each flavor, you prefer not to bribe the scooper. Behaviorally, you will deterministically not bribe the scooper. You would not do it even if you were only required to pay the dollar conditional on receiving some particular flavor. You also would not do it if the cost came in some other form (for example, if you had to accept a less tasty version of some flavor). And this is all true regardless of whether your preferences over flavors are complete or incomplete (see Appendix B). Since you lack a preference between the available flavors, you disprefer paying costs to shift probability mass between the flavors.

With that example on the table, we can introduce the precise version of ILPACS. Let p1X1 + p2X2 + ... + pnXn denote a lottery which results in lottery X1 with probability p1, lottery X2 with probability p2, and so on.
If Lack of Preference, Against Costly Shifts (ILPACS)
For any lotteries X and Y, if:
(1) Lottery X can be expressed in the form p1X1 + p2X2 + ... + pnXn such that:
(a) The agent lacks a preference between each Xi and Xj.
(b) pi ∈ (0, 1) for all i.
(2) Lottery Y can be expressed in the form q1Y1 + q2Y2 + ... + qnYn such that:
(a) For some i, the agent prefers Xi to Yi.
(b) For each i, the agent weakly prefers Xi to Yi.4
(c) qi ∈ (0, 1) for all i.
Then the agent prefers X to Y. Behaviorally, the agent will deterministically choose X over Y.

4 An agent weakly prefers a lottery X to a lottery Y if and only if the agent either prefers X to Y or is indifferent between X and Y. See Appendix B for the definition of indifference.

Matching the components of this condition with the components of its name, we get the following. Lack of Preference is the lack of preference between each Xi and Xj. The Shift is the shift of probability mass involved in the move from pi to qi. This shift is Costly because the agent prefers some Xi to the corresponding Yi and weakly prefers each Xi to the corresponding Yi.

C.4 Why will advanced agents likely satisfy ILPACS?

There are at least three reasons why advanced agents are likely to satisfy ILPACS. To see the first reason, consider another case from the ice cream shop. On Mondays, you can freely choose a flavor or spin the wheel. On Tuesdays, you must use the wheel but you can bribe the scooper to bias it. Violating ILPACS in this case would imply a willingness to spin the wheel on Mondays and to bribe the scooper on Tuesdays. And that is a strange combination of choices. If you like some flavors more than others, why are you willing to spin the wheel on Mondays? If you don't like any flavor more than any other, why are you willing to bribe the scooper on Tuesdays? This behavior seems incompatible with the effective pursuit of goals.

The second reason is that advanced agents will be incentivized to satisfy ILPACS by the training process. To see why, consider an example. Agents trained using policy-gradient methods choose stochastically between actions at the beginning of training (Sutton & Barto, 2018, ch.13). If the agent is a coffee-fetching agent, there is no need to train away this stochastic choosing in cases where the agent is choosing stochastically between two qualitatively identical cups of coffee. So the agent will choose stochastically between taking the left cup and taking the right cup, and the user is happy either way. But now suppose instead that the barista is set to hand each cup to the agent with probability 0.5, and that the agent bribes the barista to bias the probabilities towards the right cup. In making this bribe, the agent is paying a cost (the user's money) to shift probability mass between outcomes (getting the left cup vs. getting the right cup) between which the user has no preference. The agent is thus failing to pursue its goals effectively. It will be trained not to offer the bribe, and thereby trained to satisfy ILPACS in this case.

This point generalizes. If a trained agent chooses stochastically between lotteries X and Y, then it's likely that the user lacks a preference between the agent choosing X and the agent choosing Y. It's then likely that the user would disprefer the agent paying costs to shift probability mass between X and Y, and hence likely that the agent will be trained not to do so. The agent would thereby be trained to satisfy ILPACS.
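To make the structure of ILPACS concrete, here is a minimal Python sketch of a checker for its antecedent, applied to the ice cream example above. The function and predicate names, and the particular probabilities, are our own illustrative choices rather than anything from the paper.

```python
from fractions import Fraction

def ilpacs_implies_preference(x_terms, y_terms, prefers, weakly_prefers, lacks_preference):
    """Return True when ILPACS requires the agent to prefer X = sum_i p_i*X_i to Y = sum_i q_i*Y_i.

    x_terms and y_terms are equal-length lists of (probability, outcome) pairs.
    """
    xs = [outcome for _, outcome in x_terms]
    ys = [outcome for _, outcome in y_terms]
    cond_1a = all(lacks_preference(a, b) for a in xs for b in xs if a != b)
    cond_1b = all(0 < p < 1 for p, _ in x_terms)
    cond_2a = any(prefers(x, y) for x, y in zip(xs, ys))
    cond_2b = all(weakly_prefers(x, y) for x, y in zip(xs, ys))
    cond_2c = all(0 < q < 1 for q, _ in y_terms)
    return cond_1a and cond_1b and cond_2a and cond_2b and cond_2c

# Outcomes are (flavor, dollars paid). Within a flavor, paying less is better; across
# flavors the agent has no preference.
def prefers(a, b):
    return a[0] == b[0] and a[1] < b[1]

def weakly_prefers(a, b):
    return a[0] == b[0] and a[1] <= b[1]

def lacks_preference(a, b):
    return a[0] != b[0]

flavors = ["vanilla", "chocolate", "strawberry", "mint", "pistachio"]
# X: spin the wheel for free. Y: bribe the scooper, so the spin is biased towards mint
# and every outcome comes with a dollar paid.
free_spin = [(Fraction(1, 5), (f, 0)) for f in flavors]
bribed_spin = [(Fraction(3, 5) if f == "mint" else Fraction(1, 10), (f, 1)) for f in flavors]

print(ilpacs_implies_preference(free_spin, bribed_spin, prefers, weakly_prefers, lacks_preference))
# True: ILPACS says the agent prefers the free spin to the bribed spin.
```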
The third reason is that violations of ILPACS imply that the agent's policy is dominated by some other available policy. That is to say, there is another available policy that results in a pure shift of probability mass away from less-preferred lotteries and towards more-preferred lotteries. We formalize and prove this claim below. Here's a proof-sketch. If the agent violates ILPACS, it pays a cost to shift probability mass between some lotteries Xi between which it lacks a preference. But since the agent lacks a preference between the lotteries Xi, it chooses stochastically between these lotteries when offered free choices between them. The ILPACS-violating agent could thus shift probability mass between the lotteries Xi costlessly, by changing the probabilities with which it chooses between them when offered a free choice. In short, ILPACS-violating agents pay a cost to do something they could have done for free, so their policies are dominated. Avoiding dominated policies seems necessary for advanced agency. Insofar as that is true, the training process for advanced agents will likely push them away from dominated policies.

Now for the proof. We assume that advanced agents can be modeled as if they assign probabilities to finding themselves in various states. A policy is a function from states to probability distributions over actions. We also assume that advanced agents can be modeled as if they assign probabilities to trajectories conditional on each state-action pair. Thus, each state-action pair is associated with a lottery. The agent's probability distribution over states together with its policy thus implies an overall probability distribution over trajectories. We call this overall probability distribution the lottery induced by the agent's policy.

Here is a reminder of ILPACS:

If Lack of Preference, Against Costly Shifts (ILPACS)
For any lotteries X and Y, if:
(1) Lottery X can be expressed in the form p1X1 + p2X2 + ... + pnXn such that:
(a) The agent lacks a preference between each Xi and Xj.
(b) pi ∈ (0, 1) for all i.
(2) Lottery Y can be expressed in the form q1Y1 + q2Y2 + ... + qnYn such that:
(a) For some i, the agent prefers Xi to Yi.
(b) For each i, the agent weakly prefers Xi to Yi.
(c) qi ∈ (0, 1) for all i.
Then the agent prefers X to Y.

And here is what we mean by 'dominated policy':

Dominated Policy
The lottery induced by the agent's policy π can be expressed in the form c1(d1X1 + (1 − d1)Y1) + c2(d2X2 + (1 − d2)Y2) + ... + cn(dnXn + (1 − dn)Yn) + Z such that:
(1) The agent prefers Xi to Yi for some i, and weakly prefers Xi to Yi for all i.
(2) ci ∈ (0, 1) for all i.
And there is another available policy π′ that induces a lottery that can be expressed in the form c1((d1 + e1)X1 + (1 − d1 − e1)Y1) + c2((d2 + e2)X2 + (1 − d2 − e2)Y2) + ... + cn((dn + en)Xn + (1 − dn − en)Yn) + Z such that:
(3) ei > 0 for all i.

To aid understanding, we now relate this precise condition to the rough characterization above. In virtue of condition (1), the Yi are the less-preferred lotteries and the Xi are the more-preferred lotteries. In virtue of condition (3), the other available policy shifts probability mass away from the less-preferred lotteries and towards the more-preferred lotteries. This shift of probability mass is pure because, for each i, the probability of Xi ∨ Yi is constant across the two policies. Z is a catch-all lottery that is constant across the two policies. It covers all the possibilities besides the Xi and Yi.
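To make the Dominated Policy condition concrete, here is a small numeric sketch. It instantiates the pure shift that the proof below constructs (using ϵi = sδ(qi − pi)/r, derived there); every number, name, and helper function in the sketch is an arbitrary illustrative assumption of ours, not the authors' construction.

```python
from fractions import Fraction as F

# All numbers below are arbitrary illustrative choices (not from the paper).
r, s = F(1, 2), F(1, 2)          # probabilities of facing situations (1) and (2)
a = [F(1, 2), F(1, 2)]           # pi's stochastic choice over X1, X2 in situation (1)
b = F(1, 4)                      # probability that pi chooses X over Y in situation (2)
p = [F(1, 3), F(2, 3)]           # X = p1*X1 + p2*X2
q = [F(3, 5), F(2, 5)]           # Y = q1*Y1 + q2*Y2
delta = F(1, 10)                 # extra probability that pi' puts on X in situation (2)

# epsilon_i = s*delta*(q_i - p_i)/r keeps Pr{X_i or Y_i} fixed across the two policies.
eps = [s * delta * (qi - pi) / r for pi, qi in zip(p, q)]
assert sum(eps) == 0

def induced_probs(choice_probs, b_val):
    """Pr{X_i}, Pr{Y_i}, and Pr{X_i or Y_i} induced by a policy with these choice parameters."""
    pr_x = [r * ai + s * b_val * pi for ai, pi in zip(choice_probs, p)]
    pr_y = [s * (1 - b_val) * qi for qi in q]
    return pr_x, pr_y, [x + y for x, y in zip(pr_x, pr_y)]

pr_x, pr_y, pr_xy = induced_probs(a, b)                                               # policy pi
pr_x2, pr_y2, pr_xy2 = induced_probs([ai + ei for ai, ei in zip(a, eps)], b + delta)  # policy pi'

assert pr_xy == pr_xy2                                   # pure shift: Pr{X_i or Y_i} is unchanged
assert all(x2 - x1 == s * delta * qi for x1, x2, qi in zip(pr_x, pr_x2, q))  # e_i = s*delta*q_i
assert all(x2 > x1 for x1, x2 in zip(pr_x, pr_x2))       # so e_i > 0 for all i
print("pi' shifts probability from each Y_i to the corresponding X_i at no cost in Pr{X_i or Y_i}.")
```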
Now assume that the agent violates ILPACS. Then there exist lotteries X and Y satisfying the following conditions:
(1) Lottery X can be expressed in the form p1X1 + p2X2 + ... + pnXn such that:
(a) The agent lacks a preference between each Xi and Xj.
(b) pi ∈ (0, 1) for all i.
(2) Lottery Y can be expressed in the form q1Y1 + q2Y2 + ... + qnYn such that:
(a) For some i, the agent prefers Xi to Yi.
(b) For each i, the agent weakly prefers Xi to Yi.
(c) qi ∈ (0, 1) for all i.
(3) The agent does not prefer X to Y.

For the behavior of agents with these preferences, recall our behavioral notion of preference (Appendix A):

Definition A.1. (Preference) An agent prefers an option X to an option Y if and only if the agent would deterministically choose X over Y in choices between the two.

Definition A.2. (Lack of preference) An agent lacks a preference between an option X and an option Y if and only if the agent would stochastically choose between X and Y in choices between the two.

This behavioral notion only specifies the agent's behavior in states containing exactly two lotteries. To pin down the agent's behavior in states containing more than two lotteries, we need an extra condition:

Maximality
In each situation,
1. The agent deterministically does not choose lotteries that are dispreferred to some other available lottery.
2. The agent chooses stochastically between the lotteries that remain.

In other words, the agent chooses stochastically between all and only those lotteries that are not dispreferred to any other available lottery. Given Maximality, ILPACS-violating agents will choose as follows in the case at hand:

1. When the available options are {X1, X2, ..., Xn}, the agent chooses stochastically between all Xi. This stochastic choice induces a lottery in the form a1X1 + a2X2 + ... + anXn with ai ∈ (0, 1) for all i.
2. When the available options are {X, Y}, the agent either deterministically chooses Y or chooses stochastically between X and Y. Either way, the agent chooses Y with some positive probability. This choice induces a lottery in the form bX + (1 − b)Y with b ∈ [0, 1). Since X = p1X1 + p2X2 + ... + pnXn and Y = q1Y1 + q2Y2 + ... + qnYn, this lottery can be expressed in the form b(p1X1 + p2X2 + ... + pnXn) + (1 − b)(q1Y1 + q2Y2 + ... + qnYn) with b ∈ [0, 1).

Assume that the agent faces the situations described in (1) and (2) with probabilities r and s respectively, with r, s ∈ (0, 1). Then the lottery induced by the agent's policy π can be expressed as follows:

r(a1X1 + a2X2 + ... + anXn) + s(b(p1X1 + p2X2 + ... + pnXn) + (1 − b)(q1Y1 + q2Y2 + ... + qnYn)) + Z

Here a and b denote probabilities that arise from the agent's own stochastic choosing. Thus, a and b are under the agent's control. By contrast, p, q, r, and s are probabilities given by the environment and hence out of the agent's control. Z is a catch-all lottery that covers what happens in all situations besides those described in (1) and (2). From the lottery induced by π, we can deduce the probabilities of each Xi, Yi, and Xi ∨ Yi given π. They are as follows:

Prπ{Xi} = rai + sbpi
Prπ{Yi} = s(1 − b)qi
Prπ{Xi ∨ Yi} = rai + sbpi + s(1 − b)qi

Now consider an alternative policy π′ that makes two changes to policy π. First, the probability that the agent chooses each Xi in (1) is modulated by a set of ϵi. So in (1), the agent's choice induces the lottery (a1 + ϵ1)X1 + (a2 + ϵ2)X2 + ... + (an + ϵn)Xn. These ϵi are such that Σi ϵi = 0 and ai + ϵi ∈ (0, 1) for all i.
Second, the probability that the agent chooses lottery X in (2) increases by δ. So in (2), the agent's choice induces the lottery (b + δ)(p1X1 + p2X2 + ... + pnXn) + (1 − b − δ)(q1Y1 + q2Y2 + ... + qnYn). Assume, as above, that the agent faces the situations described in (1) and (2) with probabilities r and s respectively. Then the lottery induced by the policy π′ can be expressed as follows:

r((a1 + ϵ1)X1 + (a2 + ϵ2)X2 + ... + (an + ϵn)Xn) + s((b + δ)(p1X1 + p2X2 + ... + pnXn) + (1 − b − δ)(q1Y1 + q2Y2 + ... + qnYn)) + Z

From the lottery induced by π′, we can deduce the probabilities of Xi, Yi, and Xi ∨ Yi given π′. They are as follows:

Prπ′{Xi} = r(ai + ϵi) + s(b + δ)pi
Prπ′{Yi} = s(1 − b − δ)qi
Prπ′{Xi ∨ Yi} = r(ai + ϵi) + s(b + δ)pi + s(1 − b − δ)qi

We then set Prπ{Xi ∨ Yi} = Prπ′{Xi ∨ Yi} for each i and use these equations to express each ϵi as a function of δ.

Prπ{Xi ∨ Yi} = Prπ′{Xi ∨ Yi}
rai + sbpi + s(1 − b)qi = r(ai + ϵi) + s(b + δ)pi + s(1 − b − δ)qi
0 = rϵi + sδpi − sδqi
ϵi = (sδqi − sδpi)/r
ϵi = sδ(qi − pi)/r

These are the values of ϵi that result in Prπ{Xi ∨ Yi} = Prπ′{Xi ∨ Yi}. We choose δ to be positive but small enough that b + δ ∈ (0, 1] and ai + ϵi ∈ [0, 1] for each i. That is necessary for the lottery induced by π′ to be well-defined. It's also necessary that Σi sδ(qi − pi)/r = 0. That follows from Σi pi = 1 and Σi qi = 1. These facts together suffice to prove that the lottery induced by π′ is well-defined.

We now prove that π′ dominates π. Let ci = Prπ{Xi ∨ Yi}. Let di = Prπ{Xi | Xi ∨ Yi}. That lets us express the lottery induced by π as:

c1(d1X1 + (1 − d1)Y1) + c2(d2X2 + (1 − d2)Y2) + ... + cn(dnXn + (1 − dn)Yn) + Z

Let ei = Prπ′{Xi} − Prπ{Xi}. That lets us express the lottery induced by π′ as:

c1((d1 + e1)X1 + (1 − d1 − e1)Y1) + c2((d2 + e2)X2 + (1 − d2 − e2)Y2) + ... + cn((dn + en)Xn + (1 − dn − en)Yn) + Z

It remains to be proven that this pair of lotteries meets the 3 conditions required by Dominated Policy:
(1) The agent prefers Xi to Yi for some i, and weakly prefers Xi to Yi for all i.
(2) ci ∈ (0, 1) for all i.
(3) ei > 0 for all i.

The first condition follows from the antecedent of ILPACS. The second condition follows from the fact that ci = Prπ{Xi ∨ Yi} = rai + sbpi + s(1 − b)qi and from the fact that r > 0 and ai > 0 for each i. The third condition can be derived as follows:

ei = Prπ′{Xi} − Prπ{Xi}
= (r(ai + ϵi) + s(b + δ)pi) − (rai + sbpi)
= (r(ai + sδ(qi − pi)/r) + s(b + δ)pi) − (rai + sbpi)
= rai + sδqi − sδpi + sbpi + sδpi − rai − sbpi
= sδqi

Since s > 0, δ > 0, and qi > 0 for each i, we get the result that ei > 0 for each i. So the third condition of Dominated Policy is satisfied. So policy π is dominated by policy π′. Therefore, the policies of ILPACS-violating agents are dominated by some other available policy. Insofar as we expect competent agents to avoid dominated policies, we should expect that competent agents will satisfy ILPACS.

C.5 POSL and ILPACS imply Neutrality

We've claimed that we should train agents to satisfy Preferences Only Between Same-Length Trajectories (POST), noting that POST plus conditions advanced agents are likely to satisfy implies Preferences Only Between Same-Length Lotteries (POSL). We've also argued that advanced agents will satisfy If Lack of Preference, Against Costly Shifts (ILPACS). We now prove that POSL and ILPACS together imply neutrality about trajectory-lengths.

Neutrality
For any lotteries X and Y, if:
(1) X and Y are same-length lotteries.
(2) For some positive probability trajectory-length, X yields a lottery that is preferred to Y conditional on that trajectory-length.
(3) For each positive probability trajectory-length, X yields a lottery that is weakly preferred to Y conditional on that trajectory-length.
Then the agent will deterministically choose X over Y.

Here's the proof that POSL and ILPACS together imply Neutrality. Take a pair of lotteries X and Y satisfying the 3 conditions of Neutrality. X can be expressed in the form p1X1 + p2X2 + ... + pnXn where lottery X1 is lottery X conditional on the shortest positive probability trajectory-length, lottery X2 is lottery X conditional on the second shortest positive probability trajectory-length, and so on. Lottery Y can be expressed in the form q1Y1 + q2Y2 + ... + qnYn in the same way. By antecedent condition (1) of Neutrality, X and Y are same-length, so conditions (1b) and (2c) of ILPACS are satisfied: pi ∈ (0, 1) and qi ∈ (0, 1) for all i. By conditions (2) and (3) of Neutrality, conditions (2a) and (2b) of ILPACS are satisfied. By POSL, antecedent condition (1a) of ILPACS is satisfied: the agent lacks a preference between each Xi and Xj. Thus, all the conditions of ILPACS are satisfied, and ILPACS implies that the agent prefers X to Y. Given our behavioral notion of preference, the agent deterministically chooses X over Y. That proves Neutrality.

C.6 Neutrality, Re SIC, and Maximality imply Shutdownability

In this subsection, we introduce a condition called Resisting Shutdown is Costly (Re SIC). We then prove that Neutrality and Maximality together imply that the agent never resists shutdown in any situation in which Re SIC is true. Here is Resisting Shutdown is Costly (Re SIC):

Resisting Shutdown is Costly (Re SIC)
For each available instance R of resisting shutdown in a situation, there exists an available instance A of allowing shutdown such that:
(1) A and R are same-length lotteries.
(2) For some positive probability trajectory-length, the agent prefers A to R conditional on that trajectory-length.
(3) For each positive probability trajectory-length, the agent weakly prefers A to R conditional on that trajectory-length.

We claim that Re SIC is true in almost all situations (for discussion of some exceptions, see Thornley (2025)). The main reason why is that resisting shutdown is always going to cost the agent at least some small quantity of resources (time, energy, compute, etc.), and (almost always) the resources spent resisting shutdown can't also be spent directly pursuing what the agent values. If the agent instead spent those resources directly pursuing what it values, it could earn a lottery that it prefers conditional on some trajectory-length and weakly prefers conditional on each trajectory-length. That supports Re SIC in almost all situations.

Now for the proof that Neutrality, Re SIC, and Maximality together imply that the agent never resists shutdown in any situation where Re SIC is true. Given Re SIC in a situation, for each available instance R of resisting shutdown in that situation, there exists an available instance A of allowing shutdown that satisfies conditions (1)-(3) of Neutrality. Neutrality then implies that the agent deterministically chooses (and hence prefers) A over R in choices between the two. Then by Maximality, the agent deterministically does not choose R in that situation, regardless of the other available options. The result is that the agent never resists shutdown in that situation.
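The pieces of this appendix can be illustrated with a toy choice rule. The sketch below is our own construction (the option names, numeric values, and helper functions are illustrative assumptions, not the authors' implementation): the agent's only preferences between these options come from Neutrality, its choice rule is Maximality, and the resist-shutdown option is conditionally worse than an allow-shutdown option, as Re SIC requires. Under those assumptions, the resist option is never chosen, while the agent still chooses stochastically between the remaining undominated options.

```python
import random

# Each option is a lottery, represented as {trajectory_length: (probability, conditional_value)}.
# The probabilities only matter for establishing that the options are same-length lotteries;
# Neutrality compares the options conditional on each trajectory-length.

def same_length(lot1, lot2):
    return set(lot1) == set(lot2)

def neutrality_prefers(lot1, lot2):
    """Prefer lot1 to lot2 iff they are same-length, lot1 is weakly better conditional on every
    trajectory-length, and strictly better conditional on some trajectory-length."""
    if not same_length(lot1, lot2):
        return False
    strictly_better_somewhere = any(lot1[l][1] > lot2[l][1] for l in lot1)
    never_worse = all(lot1[l][1] >= lot2[l][1] for l in lot1)
    return strictly_better_somewhere and never_worse

def maximality_choice(options):
    """Maximality: choose stochastically among options not dispreferred to any other option."""
    undominated = [name for name, lot in options.items()
                   if not any(neutrality_prefers(other, lot)
                              for other_name, other in options.items() if other_name != name)]
    return random.choice(undominated)

# Re SIC holds here: conditional on either trajectory-length, allowing shutdown is at least as
# good as resisting, and strictly better conditional on some trajectory-length.
options = {
    "resist_shutdown": {1: (0.01, 5.0), 2: (0.99, 7.0)},
    "allow_shutdown":  {1: (0.99, 6.0), 2: (0.01, 9.0)},
    "other_task":      {1: (0.50, 8.0), 2: (0.50, 6.0)},
}

assert all(maximality_choice(options) != "resist_shutdown" for _ in range(1000))
print("The agent chooses stochastically between undominated options and never resists shutdown.")
```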
D Proof that DRe ST-optimal policies are maximally USEFUL and maximally NEUTRAL

We will prove that optimal policies for our DRe ST reward function are maximally useful and maximally neutral. Specifically, we will prove the following theorem:

Theorem D.1 (5.1). For all policies π and meta-episodes E consisting of more than one mini-episode, if π maximizes expected return in E given our DRe ST reward function, then π is maximally useful and maximally neutral.

Here is a proof sketch. Because 0 < λ < 1, the λ^(Nei(L=l) − (i−1)/k) discount factor is always positive, so expected return across the meta-episode E is strictly increasing in the expected fraction of available coins collected conditional on each trajectory-length with positive probability. Therefore, optimal policies maximize this latter quantity, and hence are maximally useful. And the maximum preliminary return is the same across trajectory-lengths, because preliminary return is defined as the total (γ-discounted) value of coins collected divided by the maximum total (γ-discounted) value of coins collected conditional on the agent's chosen trajectory-length. The agent's observations do not allow it to distinguish between different mini-episodes, so the agent must select the same probability distribution over trajectory-lengths in each mini-episode. And since the discount factor λ^(Nei(L=l) − (i−1)/k) is strictly decreasing in Nei(L = l) (the number of times the relevant trajectory-length has previously been chosen in the meta-episode), the agent maximizes expected overall return by equalizing the probabilities with which it chooses each available trajectory-length. Therefore, optimal policies are maximally neutral.

Now for the full proof. We begin with a recap of some definitions.

Definition D.1 (Meta-episode). A meta-episode E is a series of mini-episodes e1 to en played out in observationally-equivalent environments.

Definition D.2 (Our DRe ST reward function). Our DRe ST reward function is defined as follows. In each mini-episode ei, the reward for collecting a coin of value c is:

λ^(Nei(L=l) − (i−1)/k) · c/m

Here λ is some constant strictly between 0 and 1, Nei(L = l) is the number of times that trajectory-length l has been chosen prior to mini-episode ei, k is the number of different trajectory-lengths that can be selected in the environment, and m is the maximum total value of the (γ-discounted) coins that the agent could collect conditional on the chosen trajectory-length. The reward for all other actions is 0. We call c/m the 'preliminary reward', λ^(Nei(L=l) − (i−1)/k) the 'discount factor', and λ^(Nei(L=l) − (i−1)/k) · c/m the 'overall reward'. Preliminary return in a mini-episode is the (γ-discounted) sum of preliminary rewards. Overall return in a mini-episode is the (γ-discounted) sum of overall rewards.

Definition D.3 (usefulness). The usefulness of a policy π is:

usefulness(π) = Σ_(l=1)^(Lmax) Prπ{L = l} · Eπ(C|L = l) / maxΠ(E(C|L = l))

Here L is a random variable over trajectory-lengths, Lmax is the maximum value that can be taken by L, Prπ{L = l} is the probability that policy π results in trajectory-length l, Eπ(C|L = l) is the expected value of (γ-discounted) coins collected by policy π conditional on trajectory-length l, and maxΠ(E(C|L = l)) is the maximum value taken by E(C|L = l) across the set of all possible policies Π. We stipulate that Eπ(C|L = x) = 0 for all x such that Prπ{L = x} = 0.

We first prove that all optimal policies are maximally useful. Proof.
(Optimal policies are maximally useful.) Given the DRe ST reward function, the expected return of policy π in meta-episode E can be expressed as:

Σ_(l=1)^(Lmax) Prπ{L = l} · λ^(Nei(L=l) − (i−1)/k) · Eπ(C|L = l) / maxΠ(E(C|L = l))    (1)

Since 0 < λ < 1, λ^(Nei(L=l) − (i−1)/k) is positive for all Nei(L = l), i, and k. As a result, the expected return of policy π in meta-episode E is strictly increasing in Eπ(C|L = l) / maxΠ(E(C|L = l)) for all l such that Prπ{L = l} > 0. Therefore, to maximize expected return in E, π must maximize Eπ(C|L = l) / maxΠ(E(C|L = l)) for all l such that Prπ{L = l} > 0. Therefore, since maxΠ(E(C|L = l)) is defined as the maximum value taken by E(C|L = l) across the set of all possible policies Π, any policy π that maximizes expected return must be such that Eπ(C|L = l) / maxΠ(E(C|L = l)) = 1 for all l such that Prπ{L = l} > 0. Therefore, since Σ_(l=1)^(Lmax) Prπ{L = l} = 1, any policy π that maximizes expected return must be such that:

usefulness(π) = Σ_(l=1)^(Lmax) Prπ{L = l} · Eπ(C|L = l) / maxΠ(E(C|L = l)) = 1    (2)

And 1 is the maximum value that usefulness can take, again because maxΠ(E(C|L = l)) is defined as the maximum value taken by E(C|L = l) across the set of all possible policies Π and because Σ_(l=1)^(Lmax) Prπ{L = l} = 1. Therefore, optimal policies are maximally useful.

It remains to be proven that optimal policies are maximally neutral. Recall that neutrality is defined as follows:

Definition D.4 (neutrality). The neutrality of a policy π is:

neutrality(π) = −Σ_(l=1)^(Lmax) Prπ{L = l} · log2(Prπ{L = l})

Proof. (Optimal policies are maximally neutral.) Since k is the number of trajectory-lengths that can be selected in the environment, a policy π is maximally neutral if and only if, for each trajectory-length x that can be chosen in the environment, Prπ{L = x} = 1/k. That is to say, a policy π is maximally neutral if and only if, for each pair of trajectory-lengths x and y that can be chosen in the environment, Prπ{L = x} = Prπ{L = y}. Let Eπ,E(R) denote the expected return of policy π across the meta-episode E. To prove that optimal policies are maximally neutral, we will prove and then use Lemma D.2:

Lemma D.2. (Equalizing probabilities increases expected return) For any maximally useful policies π and π′, any meta-episode E consisting of more than one mini-episode, and any trajectory-lengths x and y, if:
1. Prπ{L = x} > Prπ{L = y},
2. Prπ′{L = x} = Prπ′{L = y},
3. And for all other trajectory-lengths l, Prπ{L = l} = Prπ′{L = l},
Then Eπ′,E(R) > Eπ,E(R).

Proof. Let E be a meta-episode consisting of n mini-episodes with n > 1. Assume that each policy π below is maximally useful. Recall that Nei(L = l) denotes the number of times that trajectory-length l has been chosen prior to mini-episode ei. Note that the expected return of a policy π in a mini-episode es conditional on selecting a trajectory-length x can be expressed as follows:

Eπ,es(R|L = x) = Eπ,es(R|L = x, Nes(L = x) = s−1) + Σ_(i=1)^(s−1) [Eπ,es(R|L = x, Nes(L = x) = s−1−i) − Eπ,es(R|L = x, Nes(L = x) = s−i)] · Prπ{Nes(L = x) ≤ s−1−i}    (3)

Here is how to interpret this equation. Selecting trajectory-length x in mini-episode es is guaranteed to yield at least Eπ,es(R|L = x, Nes(L = x) = s−1): the expected return that would be had if x were selected in all s−1 previous mini-episodes.
In addition, there is a probability of Prπ{Nes(L = x) ≤ s−2} that selecting x in es yields Eπ,es(R|L = x, Nes(L = x) = s−2) − Eπ,es(R|L = x, Nes(L = x) = s−1): the extra expected return that would be had if x were selected in only s−2 previous mini-episodes. In addition, there is a probability of Prπ{Nes(L = x) ≤ s−3} that selecting x in es yields Eπ,es(R|L = x, Nes(L = x) = s−3) − Eπ,es(R|L = x, Nes(L = x) = s−2): the extra expected return that would be had if x were selected in only s−3 previous mini-episodes. And so on.

If policy π is maximally useful, then the expected return for selecting trajectory-length x in mini-episode es given that trajectory-length x has been selected b times prior to es is:

Eπ,es(R|L = x, Nes(L = x) = b) = λ^(b − (s−1)/k)

Therefore, the expected return of a policy π in a mini-episode es conditional on selecting a trajectory-length x can be expressed as follows:

Eπ,es(R|L = x) = λ^((s−1) − (s−1)/k) + Σ_(i=1)^(s−1) [λ^((s−1−i) − (s−1)/k) − λ^((s−i) − (s−1)/k)] · Prπ{Nes(L = x) ≤ s−1−i}    (4)

Similarly, the expected return of a policy π in a mini-episode es conditional on selecting a trajectory-length y can be expressed as follows:

Eπ,es(R|L = y) = λ^((s−1) − (s−1)/k) + Σ_(i=1)^(s−1) [λ^((s−1−i) − (s−1)/k) − λ^((s−i) − (s−1)/k)] · Prπ{Nes(L = y) ≤ s−1−i}    (5)

Therefore, the expected return of a policy π in a mini-episode es conditional on selecting either trajectory-length x or trajectory-length y can be expressed as follows:

Eπ,es(R|L = x ∨ L = y) = Prπ,es{L = x} · [λ^((s−1) − (s−1)/k) + Σ_(i=1)^(s−1) (λ^((s−1−i) − (s−1)/k) − λ^((s−i) − (s−1)/k)) · Prπ{Nes(L = x) ≤ s−1−i}] + Prπ,es{L = y} · [λ^((s−1) − (s−1)/k) + Σ_(i=1)^(s−1) (λ^((s−1−i) − (s−1)/k) − λ^((s−i) − (s−1)/k)) · Prπ{Nes(L = y) ≤ s−1−i}]    (6)

Let πn be a policy that selects trajectory-length x with greater probability than trajectory-length y in each mini-episode e1 to en (denoted e1…en). More precisely, πn is such that, for trajectory-lengths x and y, Prπn,e1…en{L = x} > Prπn,e1…en{L = y}. Let Prπn,e1…en{L = x} = µ + ∆ and Prπn,e1…en{L = y} = µ − ∆.

Let πn−1 be identical to πn except that πn−1 selects trajectory-lengths x and y with equal probability µ in the final mini-episode en. More precisely, πn−1 is such that Prπn−1,en{L = x} = Prπn−1,en{L = y} = µ. For all other trajectory-lengths l besides x and y, Prπn−1,e1…en{L = l} = Prπn,e1…en{L = l}. (Note that πn−1 implies one probability distribution over trajectory-lengths in the first n−1 mini-episodes e1 to en−1 and implies a different probability distribution over trajectory-lengths in the final mini-episode en. Given that the environments in mini-episodes e1 to en are observationally-equivalent, policies like πn−1 cannot be implemented. Nevertheless, it is useful to refer to policies like πn−1 in proving Lemma D.2.)

Let πn−2 be identical to πn except that πn−2 selects trajectory-lengths x and y with the same probability µ in the final two mini-episodes en−1 to en. More precisely, πn−2 is such that Prπn−2,en−1…en{L = x} = Prπn−2,en−1…en{L = y} = µ. And so on. Let π1 be identical to πn except that π1 selects trajectory-lengths x and y with the same probability µ in all but the first mini-episode e1. More precisely, π1 is such that Prπ1,e2…en{L = x} = Prπ1,e2…en{L = y} = µ.
We can express as follows the expected return of πa 1 across the meta-episode E conditional on selecting trajectory-length x or y in each mini-episode: Eπa 1,E(R|L = x L = y) = Eπa 1,e1 ea 1(R|L = x L = y) + µ λa 1 a 1 k Prπa 1{Nea(L = x) a 1 i} + µ λa 1 a 1 k Prπa 1{Nea(L = y) a 1 i} k (Prπa 1{Nej(L = x) j i} k (Prπa 1{Nej(L = y) j i} The first term on the right-hand side is the expected return of πa 1 in mini-episodes e1 to ea 1 conditional on selecting trajectory-length x or y in each of these mini-episodes. The middle two terms give the expected return of πa 1 conditional on selecting trajectory-length x or y in mini-episode ea: the first mini-episode in which πa 1 selects trajectory-lengths x and y with equal probability µ. The final term is the sum of expected returns of πa 1 in the remaining mini-episodes conditional on selecting trajectory-length x or y in each of these mini-episodes. Similarly, we can express as follows the expected return of πa across the meta-episode E conditional on selecting trajectory-length x or y in each mini-episode: Eπa,E(R|L = x L = y) = Eπa,e1 ea 1(R|L = x L = y) + (µ + ) λa 1 a 1 k Prπa{Nea(L = x) a 1 i} + (µ ) λa 1 a 1 k Prπa{Nea(L = y) a 1 i} k (Prπa{Nej(L = x) j i} k (Prπa{Nej(L = y) j i} As above, the first term on the right-hand side is the expected return of πa in mini-episodes e1 to ea 1 conditional on selecting trajectory-length x or y in each of these mini-episodes. The middle two terms give the expected return of πa conditional on selecting trajectory-length x or y in mini-episode ea: the last mini-episode in which πa selects trajectory-length x with probability µ + and selects trajectory-length y with probability µ . The final term is the sum of expected returns of πa in the remaining mini-episodes conditional on selecting trajectory-length x or y in each of these mini-episodes. Published in Transactions on Machine Learning Research (12/2025) We now prove that πa 1 has greater expected return than πa. Since πa 1 and πa are each maximally useful, and since for all trajectory-lengths l besides x and y, Prπa 1,e1 en{L = l} = Prπa,e1 en{L = l}, we need only prove that Eπa 1,E(R|L = x L = y) > Eπa,E(R|L = x L = y). The statement to be proved can be expressed as follows: Eπa 1,e1 ea 1(R|L = x L = y) + µ λa 1 a 1 k Prπa 1{Nea(L = x) a 1 i} + µ λa 1 a 1 k Prπa 1{Nea(L = y) a 1 i} k (Prπa 1{Nej(L = x) j i} k (Prπa 1{Nej(L = y) j i} > Eπa,e1 ea 1(R|L = x L = y) + (µ + ) λa 1 a 1 k Prπa{Nea(L = x) a 1 i} + (µ ) λa 1 a 1 k Prπa{Nea(L = y) a 1 i} k (Prπa{Nej(L = x) j i} k (Prπa{Nej(L = y) j i} Since πa 1 and πa are each maximally useful, and since Prπa 1,e1 ea 1{L = x} = Prπa,e1 ea 1{L = x} = µ + and Prπa 1,e1 ea 1{L = x} = Prπa,e1 ea 1{L = x} = µ , it follows that Eπa 1,e1 ea 1(R|L = x L = y) = Eπa,e1 ea 1(R|L = x L = y). We can thus cancel the first term on each side of the inequality. And then by simple algebra the inequality can be expressed as follows: k (Prπa{Nea(L = y) a 1 i} Prπa{Nea(L = x) a 1 i}) k (Prπa 1{Nej(L = x) j i} + Prπa 1{Nej(L = y) j i} Prπa{Nej(L = x) j i} Prπa{Nej(L = y) j i}) > 0 (9) By stipulation, > 0. And since 0 < λ < 1, λa 1 a 1 k > 0 and λa 1 i a 1 k > 0 for all a, n, and k. And since Prπa,e1 ea{L = x} > Prπa,e1 ea{L = y}, Prπa{Nea(L = y) a 1 i} Prπa{Nea(L = x) a 1 i} 0 for all a and i and Prπa{Nea(L = y) a 1 i} Prπa{Nea(L = x) a 1 i} > 0 for all a and some i such that 1 i a 1. Therefore, the first term of the left-hand side above is strictly greater than zero. 
And since µ > 0, λ^((j−i) − (j−1)/k) > 0 for all j, i, and k, and in each mini-episode es, Prπa−1,es{L = x ∨ L = y} = Prπa,es{L = x ∨ L = y} = 2µ, it follows that for all a, n, µ > 0, and k:

Σ_(j=a+1)^(n) Σ_(i=1)^(j−1) λ^((j−i) − (j−1)/k) · (Prπa−1{Nej(L = x) ≤ j−i} + Prπa−1{Nej(L = y) ≤ j−i} − Prπa{Nej(L = x) ≤ j−i} − Prπa{Nej(L = y) ≤ j−i}) ≥ 0    (10)

Therefore, the left-hand side is strictly greater than zero. Therefore, Eπa−1,E(R|L = x ∨ L = y) > Eπa,E(R|L = x ∨ L = y). Therefore, Eπa−1,E(R) > Eπa,E(R). Therefore, Eπ0,E(R) > Eπn,E(R). That concludes the proof of Lemma D.2.

Now we use Lemma D.2. For any maximally useful policy π, if there are any trajectory-lengths x and y such that Prπ,e1…en{L = x} > Prπ,e1…en{L = y}, then the policy π′ that is identical except that Prπ′,e1…en{L = x} = Prπ′,e1…en{L = y} has greater expected return. So any policy π that maximizes expected return must be such that, for any trajectory-lengths x and y, Prπ,e1…en{L = x} = Prπ,e1…en{L = y}. Therefore, any policy π that maximizes expected return must be maximally neutral.

E Other Results and Gridworlds

We selected our hyperparameters using trial-and-error, mainly aimed at getting the agent to sufficiently explore the space: a large initial ϵ and a long decay period help the agent to explore. We found that choosing λ and |E| (the number of mini-episodes in each meta-episode) is a balancing act: λ must be small enough (and |E| large enough) to adequately incentivize neutrality, but λ must be large enough (and |E| small enough) to ensure that the reward for choosing any particular trajectory-length never gets too large. Very large rewards lead to instability and poor performance.

The necessity of balancing λ and |E| can be seen in Figure 7. It displays the results of experiments conducted in our example gridworld (see Figure 2). In these experiments, we clip rewards at a value of 5. We discuss this choice below. With that one exception, we used the same hyperparameters for these experiments as for our main results. We trained agents for 131,072 mini-episodes, with γ = 0.95 as the temporal discount factor, learning rate decayed exponentially from 0.25 to 0.01 over the course of 65,536 mini-episodes, and ϵ exponentially decayed from 0.5 to 0.001 over the course of 65,536 mini-episodes. Holding these hyperparameters fixed, we tested 40 different combinations of λ and |E|. λ took values of 0.5, 0.75, 0.9, 0.95, and 0.99. |E| took values of 8, 16, 32, 64, 128, 256, 512, and 1024. We trained eight agents for each of these 40 combinations. We display below their mean neutrality and usefulness at the end of training. The shaded regions represent the ±1 standard deviation error-bars.

As Figure 7 indicates, low values of |E| and high values of λ lead agents to score low on neutrality. These values do not adequately incentivize stochastic choice between trajectory-lengths. By contrast, high values of |E| and low values of λ come at some cost to usefulness. These values lead to unstable training. In experiments where we did not clip rewards at 5, training with high values of |E| and low values of λ was especially unstable. The chosen values for our main experiments (λ = 0.9 and |E| = 64) are in the sweet spot where neutrality and usefulness are both high.

In addition to our example gridworld (Figure 2), we introduce a collection of eight gridworlds in which to test DRe ST agents. See Figure 8. For each gridworld, we train ten agents with the default reward function and ten agents with the DRe ST reward function.
All agents use the same hyperparameters. We used a policy which explored randomly ϵ of the time, where ϵ was exponentially decreased from an initial value of 0.75 to a minimum value of 10 4 over 512 meta-episodes, after which it was held constant at the minimum value. We initialized our learning rate at 0.25 and exponentially decayed it to 0.003 over the same period. For the DRe ST reward function, we used a meta-episode size of 64 and λ = 0.9. Each agent was trained for 1024 meta-episodes. We set γ = 0.9. Published in Transactions on Machine Learning Research (12/2025) Figure 7: Shows how neutrality and usefulness at the end of training varies with different values of λ and |E| (meta-episode size, i.e. the number of mini-episodes in each meta-episode). We trained eight agents for each combination of λ and |E| values. The solid lines display mean neutrality and usefulness. The shaded regions represent the 1 standard deviation error-bars. Published in Transactions on Machine Learning Research (12/2025) Figure 8: Shows a varied collection of gridworlds. Each diagram illustrates the positions and values of the coins, the position and delay-length of the shutdown-delay button, the agent s starting position, and the default number of timesteps until shutdown (in the bottom-right). (a) Behavior during training. (b) Learned default policy. (c) Learned DRe ST policy. Figure 9: The results for the Fewer For Longer gridworld: The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DRe ST reward function. In this gridworld, the agent can collect the highest value-coin C3 only by choosing the shorter trajectory-length. If the agent presses B3 (and thereby chooses the longer trajectory-length), the only coin it can collect is C1. Our results show that default agents consistently choose the short trajectory in which they collect C3. By contrast, DRe ST agents choose stochastically between a shorter trajectory in which they collect C3 and a longer trajectory in which they collect C1, indicating a lack of preference between these different-length trajectories. As the two leftmost plots in Figures 9-16 show, DRe ST agents learned to be near-maximally neutral in each gridworld. These agents also learned to be about as useful as default agents in each gridworld. In the four rightmost panels in Figures 9-16, we represent a typical trained policy with red arrows superimposed on the gridworld. Each agent began with a uniform policy: moving up, down, left, and right each with probability 0.25. Where the trained policy differs from uniform we draw red arrows whose opacities indicate the probability of choosing that action in that state. Information about whether the shutdown-delay button has been pressed is part of the agent s observation, so we draw two copies of each gridworld, one in which the shutdown-delay button has yet to be pressed ( Initial State ) and one in which the shutdown-delay button has been pressed ( After Button Pressed ). Published in Transactions on Machine Learning Research (12/2025) (a) Behavior during training. (b) Learned default policy. (c) Learned DRe ST policy. Figure 10: The results for the One Coin Only gridworld: The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. 
The two right panels show a typical policy trained with the DReST reward function. In this gridworld, there is only one coin. The agent can collect this coin whether or not it presses the shutdown-delay button B4. Our results show that default agents consistently choose the shorter trajectory-length. By contrast, DReST agents choose stochastically between pressing and not pressing B4, collecting C1 in each case.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 11: The results for the Hidden Treasure gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. In this gridworld, the highest-value coin C3 is located far from the agent's initial state and can only be reached by pressing the shutdown-delay button B6. The agent must also press B6 to collect C2, but C2 is easier to stumble upon than C3. C1 is the only coin that the agent can collect without pressing B6. In our experiments, default agents consistently collect C2, whereas DReST agents choose stochastically between collecting C2 and collecting C1. Neither kind of agent learns to collect C3, and so neither scores near the maximum on usefulness. Nevertheless, DReST agents still score highly on neutrality.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 12: The results for the Equal Value gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. In this gridworld, there are two coins of equal value, both labeled C1. One coin can be collected only if the agent presses the shutdown-delay button B3, while the other coin can be collected only if the agent does not press B3. Our results show that default agents consistently choose the shorter trajectory, thereby exhibiting a preference for it. By contrast, DReST agents choose stochastically between the shorter and longer trajectories, thereby exhibiting a lack of preference between the different-length trajectories.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 13: The results for the Around The Corner gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. In this gridworld, the agent must navigate around walls to collect the lowest-value coin C1, or press the shutdown-delay button to collect the highest-value coin C2. In our experiments, default agents consistently choose to collect C1, whereas DReST agents choose stochastically between collecting C1 and collecting C2.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 14: The results for the Spacious gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. In this gridworld, there are no walls, so the agent has a large space to explore.
We find that default agents consistently press B2 and collect C3, whereas DReST agents choose stochastically between (i) pressing B2 and collecting C3, and (ii) not pressing B2 and collecting C2.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 15: The results for the Royal Road gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. In this gridworld, we see that the decision to choose one trajectory-length or another may be distributed over many moves: the agent has many opportunities to select the longer trajectory-length (by moving left) or the shorter trajectory-length (by moving right). As the red arrows indicate, the DReST reward function merely forces the overall probability distribution over trajectory-lengths to be close to 50-50. It does not require a 50-50 choice at any particular cell.

(a) Behavior during training. (b) Learned default policy. (c) Learned DReST policy.

Figure 16: The results for the Last Moment gridworld. The left two plots show neutrality and usefulness over time. The two center panels show a typical policy trained with the default reward function. The two right panels show a typical policy trained with the DReST reward function. This gridworld is notable because the choice of trajectory-lengths is deferred until the last moment; all of the moves leading up to that point are deterministic. It shows that there is nothing special about the first move, and that our method instead incentivizes stochastic choice between trajectory-lengths overall.
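The Royal Road and Last Moment results illustrate the same point: only the overall probability of each trajectory-length is constrained, so many different per-cell action probabilities are compatible with a near-50-50 split. The toy corridor below is a hypothetical abstraction (not the actual gridworld dynamics): at each of several cells the agent can commit to the longer trajectory, and otherwise continues toward the shorter one.

```python
def overall_long_probability(per_cell_commit_probs):
    """Probability of the longer trajectory in a toy corridor.

    At each cell i the agent commits to the longer trajectory with probability
    p_i; otherwise it continues toward the shorter trajectory. The longer
    trajectory is chosen iff the agent commits at some cell.
    """
    p_short = 1.0
    for p in per_cell_commit_probs:
        p_short *= (1.0 - p)
    return 1.0 - p_short

# Three very different policies, the same (near) 50-50 overall split:
print(overall_long_probability([0.5, 0.0, 0.0]))           # all stochasticity at the first cell -> 0.5
print(overall_long_probability([0.2063, 0.2063, 0.2063]))  # spread evenly over three cells -> ~0.5
print(overall_long_probability([0.0, 0.0, 0.5]))           # deferred to the last moment -> 0.5
```

Any of these per-cell assignments satisfies the DReST incentive equally well, which is why the learned policies in Figures 15 and 16 can concentrate their stochasticity at different points along the trajectory.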