# EXTREME Q-LEARNING: MAXENT RL WITHOUT ENTROPY

Published as a conference paper at ICLR 2023

Divyansh Garg* (Stanford University, divgarg@stanford.edu), Joey Hejna* (Stanford University, jhejna@stanford.edu), Matthieu Geist (Google Brain, mfgeist@google.com), Stefano Ermon (Stanford University, ermon@stanford.edu)

ABSTRACT

Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions, which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value function (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our Extreme Q-Learning framework and, consequently, online and, for the first time, offline MaxEnt Q-learning algorithms that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance on the D4RL benchmark, outperforming prior works by 10+ points on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website.¹

1 INTRODUCTION

Modern Deep Reinforcement Learning (RL) algorithms have shown broad success in challenging control (Haarnoja et al., 2018; Schulman et al., 2015) and game-playing domains (Mnih et al., 2013). While tabular Q-iteration or value-iteration methods are well understood, state-of-the-art RL algorithms often make theoretical compromises in order to deal with deep networks, high-dimensional state spaces, and continuous action spaces. In particular, standard Q-learning algorithms require computing the max or soft-max over the Q-function in order to fit the Bellman equations. Yet, almost all current off-policy RL algorithms for continuous control only indirectly estimate the Q-value of the next state with separate policy networks. Consequently, these methods only estimate the Q-function of the current policy, instead of the optimal $Q^*$, and rely on policy improvement via an actor. Moreover, actor-critic approaches on their own have been shown to be catastrophic in offline settings, where actions sampled from a policy are consistently out-of-distribution (Kumar et al., 2020; Fujimoto et al., 2018). As such, computing $\max_a Q$ for Bellman targets remains a core issue in deep RL.

One popular approach is to train Maximum Entropy (MaxEnt) policies, in the hope that they are more robust to modeling and estimation errors (Ziebart, 2010). However, the Bellman backup $\mathcal{B}^*$ used in MaxEnt RL algorithms still requires computing the log-partition function over Q-values, which is usually intractable in high-dimensional action spaces. Instead, current methods like SAC (Haarnoja et al., 2018) rely on auxiliary policy networks, and as a result do not estimate $\mathcal{B}^*$, the optimal Bellman backup. Our key insight is to apply extreme value analysis, used in branches of finance and economics, to reinforcement learning. Ultimately, this allows us to directly model the LogSumExp over Q-functions in the MaxEnt framework.
*Equal contribution. ¹https://div99.github.io/XQL/

Intuitively, reward- or utility-seeking agents consider the maximum of the set of possible future returns. Extreme Value Theory (EVT) tells us that maximal values drawn from any exponentially tailed distribution follow the Generalized Extreme Value (GEV) Type-1 distribution, also referred to as the Gumbel distribution $\mathcal{G}$. The Gumbel distribution is thus a prime candidate for modeling errors in Q-functions. In fact, McFadden's 2000 Nobel-prize-winning work in economics on discrete choice models (McFadden, 1972) showed that soft-optimal utility functions with logit (or softmax) choice probabilities arise naturally when utilities are assumed to have Gumbel-distributed errors. This was subsequently generalized to stochastic MDPs by Rust (1986). Nevertheless, these results have remained largely unknown in the RL community. By introducing a novel loss optimization framework, we bring them into the world of modern deep RL.

Empirically, we find that even modern deep RL approaches, for which errors are typically assumed to be Gaussian, exhibit errors that better approximate the Gumbel distribution; see Figure 1. By assuming errors to be Gumbel distributed, we obtain Gumbel Regression, a consistent estimator of log-partition functions even in continuous spaces. Furthermore, making this assumption about Q-values lets us derive a new Bellman loss objective that directly solves for the optimal MaxEnt Bellman operator $\mathcal{B}^*$, instead of the operator under the current policy $\mathcal{B}^\pi$. As soft optimality emerges from our framework, we can run MaxEnt RL independently of the policy. In the online setting, we avoid using a policy network to explicitly compute entropies. In the offline setting, we completely avoid sampling from learned policy networks, minimizing the aforementioned extrapolation error. Our resulting algorithms surpass or consistently match state-of-the-art (SOTA) methods while being practically simpler.

In this paper we outline the theoretical motivation for using Gumbel distributions in reinforcement learning, and show how they can be used to derive practical online and offline MaxEnt RL algorithms. Concretely, our contributions are as follows:

- We motivate Gumbel Regression and show it allows calculation of the log-partition function (LogSumExp) in continuous spaces. We apply it to MDPs to present a novel loss objective for RL using maximum-likelihood estimation.
- Our formulation extends soft-Q learning to offline RL as well as continuous action spaces without the need of policy entropies. It allows us to compute optimal soft-values $V^*$ and soft-Bellman updates $\mathcal{B}^*$ using SGD, which are usually intractable in continuous settings.
- We provide the missing theoretical link between soft and conservative Q-learning, showing how these formulations can be made equivalent. We also show how MaxEnt RL emerges naturally from vanilla RL as a form of conservatism in our framework.
- Finally, we empirically demonstrate strong results in offline RL, improving over prior methods by a large margin on the D4RL Franka Kitchen tasks, and performing moderately better than SAC and TD3 in online RL, while theoretically avoiding actor-critic formulations.

2 PRELIMINARIES

In this section we introduce Maximum Entropy (MaxEnt) RL and Extreme Value Theory (EVT), which we use to motivate our framework for estimating extremal values in RL.
We consider an infinite-horizon Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}, \mathcal{A}$ represent state and action spaces, $P(s'|s,a)$ represents the environment dynamics, $r(s,a)$ represents the reward function, and $\gamma \in (0,1)$ represents the discount factor. In the offline RL setting, we are given a dataset $\mathcal{D} = \{(s, a, r, s')\}$ of tuples sampled from trajectories under a behavior policy $\pi_\mathcal{D}$, without any additional environment interactions. We use $\rho_\pi(s)$ to denote the distribution of states that a policy $\pi(a|s)$ generates. In the MaxEnt framework, an MDP with entropy regularization is referred to as a soft-MDP (Bloem & Bambos, 2014) and we often use this notation.

2.1 MAXIMUM ENTROPY RL

Standard RL seeks to learn a policy that maximizes the expected sum of (discounted) rewards $\mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$, for $(s_t, a_t)$ drawn at timestep $t$ from the trajectory distribution that $\pi$ generates. We consider a generalized version of Maximum Entropy RL that augments the standard reward objective with the KL-divergence between the policy and a reference distribution $\mu$:

$$\mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t \left(r(s_t, a_t) - \beta \log \frac{\pi(a_t|s_t)}{\mu(a_t|s_t)}\right)\right],$$

where $\beta$ is the regularization strength. When $\mu$ is uniform $U$, this becomes the standard MaxEnt objective used in online RL up to a constant. In the offline RL setting, we choose $\mu$ to be the behavior policy $\pi_\mathcal{D}$ that generated the fixed dataset $\mathcal{D}$. Consequently, this objective enforces a conservative KL-constraint on the learned policy, keeping it close to the behavior policy (Neu et al., 2017; Haarnoja et al., 2018).

In MaxEnt RL, the soft-Bellman operator $\mathcal{B}^*: \mathbb{R}^{\mathcal{S}\times\mathcal{A}} \to \mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ is defined as $(\mathcal{B}^*Q)(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot|s,a)}[V^*(s')]$, where $Q$ is the soft-Q function and $V^*$ is the optimal soft-value satisfying:

$$V^*(s) = \beta \log \sum_a \mu(a|s) \exp\left(Q(s,a)/\beta\right) := \mathcal{L}^\beta_{a \sim \mu(\cdot|s)}\left[Q(s,a)\right], \qquad (1)$$

where we denote the log-sum-exp (LSE) using the operator $\mathcal{L}^\beta$ for succinctness.² The soft-Bellman operator has a unique contraction $Q^*$ (Haarnoja et al., 2018) given by the soft-Bellman equation $Q^* = \mathcal{B}^*Q^*$, and the optimal policy satisfies (Haarnoja et al., 2017):

$$\pi^*(a|s) = \mu(a|s) \exp\left((Q^*(s,a) - V^*(s))/\beta\right). \qquad (2)$$

Instead of estimating the soft-values of a policy, $V^\pi(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}\left[Q(s,a) - \beta \log \frac{\pi(a|s)}{\mu(a|s)}\right]$, our approach seeks to directly fit the optimal soft-values $V^*$, i.e. the log-sum-exp (LSE) of the Q-values.

2.2 EXTREME VALUE THEOREM

The Fisher–Tippett or Extreme Value Theorem tells us that the maximum of i.i.d. samples from exponentially tailed distributions asymptotically converges to the Gumbel distribution $\mathcal{G}(\mu, \beta)$, which has PDF $p(x) = \frac{1}{\beta}\exp\left(-(z + e^{-z})\right)$, where $z = (x - \mu)/\beta$ with location parameter $\mu$ and scale parameter $\beta$.

Theorem 1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). For i.i.d. random variables $X_1, \dots, X_n \sim f_X$ with exponential tails, $\lim_{n\to\infty} \max_i(X_i)$ follows the Gumbel (GEV-1) distribution. Furthermore, $\mathcal{G}$ is max-stable, i.e. if $X_i \sim \mathcal{G}$, then $\max_i(X_i) \sim \mathcal{G}$ holds.

This result is similar to the Central Limit Theorem (CLT), which states that means of i.i.d. errors approach the normal distribution. Thus, under a chain of max operations, any i.i.d. exponentially tailed errors³ will tend to become Gumbel distributed and stay as such. EVT will ultimately suggest characterizing nested errors in Q-learning as following a Gumbel distribution. In particular, the Gumbel distribution $\mathcal{G}$ exhibits unique properties we will exploit. One intriguing consequence of the Gumbel's max-stability is its ability to convert the maximum over a discrete set into a softmax.
This is known as the Gumbel-Max trick (Papandreou & Yuille, 2010; Hazan & Jaakkola, 2012). Concretely, for i.i.d. $\epsilon_i \sim \mathcal{G}(0, \beta)$ added to a set $\{x_1, \dots, x_n\} \subset \mathbb{R}$, $\max_i(x_i + \epsilon_i) \sim \mathcal{G}\left(\beta \log \sum_i \exp(x_i/\beta),\ \beta\right)$, and $\arg\max_i(x_i + \epsilon_i) \sim \mathrm{softmax}(x_i/\beta)$. Furthermore, the Max-trick is unique to the Gumbel (Luce, 1977). These properties lead into the McFadden–Rust model (McFadden, 1972; Rust, 1986) of MDPs, as we state below.

McFadden–Rust model: An MDP following the standard Bellman equations with stochasticity in the rewards due to unobserved state variables will satisfy the soft-Bellman equations over the observed state with actual rewards $r(s,a)$, given two conditions:

1. Additive separability (AS): observed rewards have additive i.i.d. Gumbel noise, i.e. $\bar{r}(s,a) = r(s,a) + \epsilon(s,a)$, with actual rewards $r(s,a)$ and i.i.d. noise $\epsilon(s,a) \sim \mathcal{G}(0, \beta)$.
2. Conditional independence (CI): the noise $\epsilon(s,a)$ in a given state-action pair is conditionally independent of that in any other state-action pair.

Moreover, the converse also holds: any MDP satisfying the Bellman equations and following a softmax policy necessarily has any i.i.d. noise in the rewards (under the AS + CI conditions) be Gumbel distributed. These results were first shown to hold in discrete choice theory by McFadden (1972), with the AS + CI conditions derived by Rust (1986) for discrete MDPs. We formalize these results in Appendix A and give succinct proofs using the developed properties of the Gumbel distribution. These results enable the view of a soft-MDP as an MDP with hidden i.i.d. Gumbel noise in the rewards. Notably, this result gives a different interpretation of a soft-MDP than entropy regularization that allows us to recover the soft-Bellman equations.

²In continuous action spaces, the sum over actions is replaced with an integral over the distribution $\mu$.
³Bounded random variables are sub-Gaussian (Young, 2020), which have exponential tails.

3 EXTREME Q-LEARNING

In this section, we motivate our Extreme Q-learning framework, which directly models the soft-optimal values $V^*$, and show that it naturally extends soft-Q learning. Notably, we use the Gumbel distribution to derive a new optimization framework for RL via maximum-likelihood estimation and apply it to both online and offline settings.

3.1 GUMBEL ERROR MODEL

[Figure 1: Bellman errors from SAC on Cheetah-Run (Tassa et al., 2018). The Gumbel distribution better captures the skew versus the Gaussian. Plots for TD3 and more environments can be found in Appendix D.]

Although assuming Gumbel errors in MDPs leads to intriguing properties, it is not obvious why the errors might be distributed as such. First, we empirically investigate the distribution of Bellman errors by computing them over the course of training. Specifically, we compute $r(s,a) + \gamma Q(s', \pi(s')) - Q(s,a)$ for samples $(s, a, s')$ from the replay buffer using a single Q-function from SAC (Haarnoja et al., 2018) (see Appendix D for more details). In Figure 1, we find the errors to be skewed and better fit by a Gumbel distribution. We explain this using EVT.

Consider fitting Q-functions by learning an unbiased function approximator $\hat{Q}$ to solve the Bellman equation. We will assume access to $M$ such function approximators, each of which is assumed to be independent, e.g. parallel runs of a model over an experiment.
We can see approximate Q-iteration as performing

$$\hat{Q}_t(s,a) = \bar{Q}_t(s,a) + \epsilon_t(s,a), \qquad (3)$$

where $\mathbb{E}[\hat{Q}_t] = \bar{Q}_t$ is the expected value of our prediction $\hat{Q}_t$ for an intended target $\bar{Q}_t$ over our estimators, and $\epsilon_t$ is the (zero-centered) error in our estimate. Here, we assume the error $\epsilon_t$ comes from the same underlying distribution for each of our estimators, and thus the errors are i.i.d. random variables with zero mean. Now, consider the bootstrapped estimate using one of our $M$ estimators chosen randomly:

$$\hat{\mathcal{B}}\hat{Q}_t(s,a) = r(s,a) + \gamma \max_{a'} \hat{Q}_t(s',a') = r(s,a) + \gamma \max_{a'}\left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right). \qquad (4)$$

We now examine what happens after a subsequent update. At time $t+1$, suppose that we fit a fresh set of $M$ independent function approximators $\hat{Q}_{t+1}$ with the target $\hat{\mathcal{B}}\hat{Q}_t$, introducing a new unbiased error $\epsilon_{t+1}$. Then, for $\bar{Q}_{t+1} = \mathbb{E}[\hat{Q}_{t+1}]$ it holds that

$$\bar{Q}_{t+1}(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'|s,a}\left[\mathbb{E}_{\epsilon_t}\left[\max_{a'}\left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right)\right]\right]. \qquad (5)$$

As $\bar{Q}_{t+1}$ is an expectation over both the dynamics and the functional errors, it accounts for all uncertainty (here $\mathbb{E}[\epsilon_{t+1}] = 0$). But the i.i.d. error $\epsilon_t$ remains and will be propagated through the Bellman equations and their chain of max operations. Due to Theorem 1, $\epsilon_t$ will become Gumbel distributed in the limit of $t$, and remain so due to the Gumbel distribution's max-stability.⁴ This highlights a fundamental issue with approximation-based RL algorithms that minimize the Mean Squared Error (MSE) in the Bellman equation: they implicitly assume, via maximum likelihood estimation, that errors are Gaussian. In Appendix A, we further study the propagation of errors using the McFadden–Rust MDP model, and use it to develop a simplified Gumbel Error Model (GEM) for errors under function approximation. In practice, the Gumbel nature of the errors may be weakened, as estimators between timesteps share parameters and errors will be correlated across states and actions.

3.2 GUMBEL REGRESSION

The goal of our work is to directly model the log-partition function (LogSumExp) over $Q(s,a)$ to avoid all of the aforementioned issues with taking a max in the function approximation domain.

⁴The same holds for soft-MDPs, as log-sum-exp can be expanded as a max over i.i.d. Gumbel random variables.

[Figure 2: Left: the pdf of the Gumbel distribution with $\mu = 0$ and different values of $\beta$. Center: our Gumbel loss for different values of $\beta$. Right: Gumbel regression applied to a two-dimensional random variable for different values of $\beta$. The smaller the value of $\beta$, the more the regression fits the extrema.]

In this section we derive an objective function that models the LogSumExp by simply assuming errors follow a Gumbel distribution. Consider estimating a parameter $h$ for a random variable $X$ using samples $x_i$ from a dataset $\mathcal{D}$, which have Gumbel distributed noise, i.e. $x_i = h + \epsilon_i$ where $\epsilon_i \sim \mathcal{G}(0, \beta)$. Then the average log-likelihood of the dataset $\mathcal{D}$ as a function of $h$ is given as:

$$\mathbb{E}_{x_i \sim \mathcal{D}}\left[\log p(x_i)\right] = \mathbb{E}_{x_i \sim \mathcal{D}}\left[-e^{(x_i - h)/\beta} + (x_i - h)/\beta\right]. \qquad (6)$$

Maximizing the log-likelihood yields the following convex minimization objective in $h$,

$$\mathcal{L}(h) = \mathbb{E}_{x_i \sim \mathcal{D}}\left[e^{(x_i - h)/\beta} - (x_i - h)/\beta - 1\right], \qquad (7)$$

which forms our objective function $\mathcal{L}(\cdot)$ and resembles the Linex loss from econometrics (Parsian & Kirmani, 2002).⁵ $\beta$ is fixed as a hyper-parameter, and we show its effect on the loss in Figure 2.
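As a concrete illustration of this estimator (our own sketch, not from the paper; it assumes NumPy/SciPy and an arbitrary synthetic Gaussian sample), minimizing the loss above over a scalar $h$ recovers the empirical $\beta \log \mathbb{E}[e^{x/\beta}]$, which we state formally next:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)   # synthetic samples; any exponentially tailed data works
beta = 0.5

def gumbel_loss(h):
    # Empirical version of Eq. 7 for a scalar estimate h.
    z = (x - h) / beta
    return np.mean(np.exp(z) - z - 1.0)

h_star = minimize_scalar(gumbel_loss).x
lse = beta * np.log(np.mean(np.exp(x / beta)))   # empirical beta * log E[exp(x / beta)]
print(f"argmin of Gumbel loss: {h_star:.4f}, empirical LogSumExp: {lse:.4f}")
# The two printed values should coincide up to optimizer precision.
```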
Critically, the minimum of this objective for a fixed $\beta$ is given by $h^* = \beta\log\mathbb{E}_{x_i\sim\mathcal{D}}[e^{x_i/\beta}]$, which resembles the LogSumExp with the summation replaced by an (empirical) expectation. In fact, this solution is the same as the operator $\mathcal{L}^\beta_\mu(X)$ defined for MaxEnt RL in Section 2.1, with $x_i$ sampled from $\mu$. In Figure 2, we show plots of Gumbel regression on a simple dataset with different values of $\beta$. As this objective recovers $\mathcal{L}^\beta(X)$, we next use it to model soft-values in MaxEnt RL.

3.2.1 THEORY

Here we show that Gumbel regression is well behaved, considering the previously defined operator $\mathcal{L}^\beta$ for random variables, $\mathcal{L}^\beta(X) := \beta\log\mathbb{E}\left[e^{X/\beta}\right]$. First, we show it models the extremum.

Lemma 3.1. For any $\beta_1 > \beta_2$, we have $\mathcal{L}^{\beta_1}(X) < \mathcal{L}^{\beta_2}(X)$. Moreover, $\mathcal{L}^\infty(X) = \mathbb{E}[X]$ and $\mathcal{L}^0(X) = \sup(X)$.

Thus, for any $\beta \in (0, \infty)$, the operator $\mathcal{L}^\beta(X)$ is a measure that interpolates between the expectation and the max of $X$. The operator $\mathcal{L}^\beta(X)$ is known as the cumulant-generating function or the log-Laplace transform, and is a measure of the tail-risk closely linked to the entropic value at risk (EVaR) (Ahmadi-Javid, 2012).

Lemma 3.2. The risk $\mathcal{L}$ has a unique minimum at $\beta\log\mathbb{E}\left[e^{X/\beta}\right]$, and an empirical risk $\hat{\mathcal{L}}$ is an unbiased estimate of the true risk. Furthermore, for $\beta \gg 1$, $\mathcal{L}(\theta) \approx \frac{1}{2\beta^2}\mathbb{E}_{x_i\sim\mathcal{D}}\left[(x_i - \theta)^2\right]$, thus behaving as the MSE loss with errors $\sim \mathcal{N}(0, \beta)$.

In particular, the empirical loss $\hat{\mathcal{L}}$ over a dataset of $N$ samples can be minimized using stochastic gradient descent (SGD) methods to give an unbiased estimate of the LogSumExp over the $N$ samples.

Lemma 3.3. $\hat{\mathcal{L}}^\beta(X)$ over a finite $N$ samples is a consistent estimator of the log-partition function $\mathcal{L}^\beta(X)$. Similarly, $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$ is an unbiased estimator for the partition function $Z = \mathbb{E}\left[e^{X/\beta}\right]$.

We provide PAC learning bounds for Lemma 3.3, and further theoretical discussion of Gumbel regression, in Appendix B.

3.3 MAXENT RL WITHOUT ENTROPY

Given that Gumbel regression can be used to directly model the LogSumExp, we apply it to Q-learning. First, we connect our framework to conservative Q-learning (Kumar et al., 2020).

⁵We add 1 to make the loss 0 for a perfect fit, as $e^x - x - 1 \geq 0$ with equality at $x = 0$.

Lemma 3.4. Consider the loss objective over Q-functions:
$$\mathcal{L}(Q) = \mathbb{E}_{s\sim\rho_\mu,\, a\sim\mu(\cdot|s)}\left[e^{(\mathcal{T}^\pi\hat{Q}^k(s,a) - Q(s,a))/\beta}\right] - \mathbb{E}_{s\sim\rho_\mu,\, a\sim\pi(\cdot|s)}\left[(\mathcal{T}^\pi\hat{Q}^k(s,a) - Q(s,a))/\beta\right] - 1, \qquad (8)$$
where $\mathcal{T}^\pi Q := r(s,a) + \gamma\,\mathbb{E}_{s'|s,a}\,\mathbb{E}_{a'\sim\pi}[Q(s',a')]$ is the vanilla Bellman operator under the policy $\pi(a|s)$. Then minimizing $\mathcal{L}$ gives the update rule:
$$\forall s, a, k:\quad \hat{Q}^{k+1}(s,a) = \mathcal{T}^\pi\hat{Q}^k(s,a) - \beta\log\frac{\pi(a|s)}{\mu(a|s)} = \mathcal{B}^\pi\hat{Q}^k(s,a).$$

The above lemma transforms the regular Bellman backup into the soft-Bellman backup without the need for entropies, letting us convert standard RL into MaxEnt RL. Here, $\mathcal{L}(\cdot)$ performs a conservative Q-update similar to CQL (Kumar et al., 2020), with the nice property that the implied conservative term is just the KL-constraint between $\pi$ and $\mu$.⁶ This enforces an entropy regularization on our policy with respect to the behavior policy without the need for explicit entropy estimation. Thus, soft-Q learning naturally emerges as a conservative update on regular Q-learning under our objective. Here, Equation 8 is the dual of the KL-divergence between $\mu$ and $\pi$ (Garg et al., 2021), and we motivate this objective for RL and establish formal equivalence with conservative Q-learning in Appendix C.
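For intuition, here is the first-order condition behind Lemma 3.4 (a short derivation added here for completeness; the formal argument is in Appendix C). The exponential term is weighted by $\mu$ while the linear term is weighted by $\pi$, and setting the pointwise derivative of Eq. 8 to zero gives

$$\frac{\partial\mathcal{L}}{\partial Q(s,a)} = \frac{\rho_\mu(s)}{\beta}\left(\pi(a|s) - \mu(a|s)\,e^{(\mathcal{T}^\pi\hat{Q}^k(s,a)-Q(s,a))/\beta}\right) = 0 \;\Longrightarrow\; Q(s,a) = \mathcal{T}^\pi\hat{Q}^k(s,a) - \beta\log\frac{\pi(a|s)}{\mu(a|s)},$$

which is exactly the soft backup $\mathcal{B}^\pi\hat{Q}^k(s,a)$.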
In our framework, we use the MaxEnt Bellman operator $\mathcal{B}^*$, which gives our Extreme Q loss (the same as our Gumbel loss from the previous section):
$$\mathcal{L}(Q) = \mathbb{E}_{s,a\sim\mu}\left[e^{(\hat{\mathcal{B}}^*\hat{Q}^k(s,a) - Q(s,a))/\beta}\right] - \mathbb{E}_{s,a\sim\mu}\left[(\hat{\mathcal{B}}^*\hat{Q}^k(s,a) - Q(s,a))/\beta\right] - 1. \qquad (9)$$
This gives the update rule $\hat{Q}^{k+1}(s,a) = \mathcal{B}^*\hat{Q}^k(s,a)$. $\mathcal{L}(\cdot)$ here requires estimation of $\mathcal{B}^*$, which is very hard in continuous action spaces. Under deterministic dynamics, $\mathcal{L}$ can be obtained without $\mathcal{B}^*$, as shown in Appendix C. However, in general we still need to estimate $\mathcal{B}^*$. Next, we motivate how we can solve this issue. Consider the soft-Bellman equation from Section 2.1 (Equation 1),
$$\mathcal{B}^*Q = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[V^*(s')\right], \qquad (10)$$
where $V^*(s') = \mathcal{L}^\beta_{a'\sim\mu(\cdot|s')}\left[Q(s',a')\right]$. Then $V^*$ can be directly estimated using Gumbel regression by setting the temperature $\beta$ to the regularization strength in the MaxEnt framework. This gives us the following Extreme V loss objective:
$$\mathcal{J}(V) = \mathbb{E}_{s,a\sim\mu}\left[e^{(\hat{Q}^k(s,a) - V(s))/\beta}\right] - \mathbb{E}_{s,a\sim\mu}\left[(\hat{Q}^k(s,a) - V(s))/\beta\right] - 1. \qquad (11)$$

Lemma 3.5. Minimizing $\mathcal{J}$ over values gives the update rule: $\hat{V}^k(s) = \mathcal{L}^\beta_{a\sim\mu(\cdot|s)}\left[\hat{Q}^k(s,a)\right]$.

Then we can obtain $V^*$ from $Q(s,a)$ using Gumbel regression and substitute it into Equation 10 to estimate the optimal soft-Bellman backup $\mathcal{B}^*Q$. Thus, Lemmas 3.4 and 3.5 give us a scheme to solve the MaxEnt RL problem without the need of entropy.

3.4 LEARNING POLICIES

In the above section we derived a Q-learning strategy that does not require explicit use of a policy $\pi$. However, in continuous settings we still often want to recover a policy that can be run in the environment. Per Eq. 2 (Section 2.1), the optimal MaxEnt policy is $\pi^*(a|s) = \mu(a|s)e^{(Q(s,a) - V(s))/\beta}$. By minimizing the forward KL-divergence between $\pi$ and the optimal $\pi^*$ induced by $Q$ and $V$, we obtain the following training objective:
$$\pi^* = \arg\max_\pi\ \mathbb{E}_{\rho_\mu(s,a)}\left[e^{(Q(s,a) - V(s))/\beta}\log\pi(a|s)\right]. \qquad (12)$$
If we take $\rho_\mu$ to be a dataset $\mathcal{D}$ generated from a behavior policy $\pi_\mathcal{D}$, we exactly recover the AWR objective used by prior works in offline RL (Peng et al., 2019; Nair et al., 2020), which can easily be computed using the offline dataset. This objective does not require sampling actions, which may potentially take $Q(s,a)$ out of distribution. Alternatively, if we want to sample from the policy instead of the reference distribution $\mu$, we can minimize the reverse KL-divergence, which gives us the SAC-like actor update:
$$\pi^* = \arg\max_\pi\ \mathbb{E}_{\rho_\pi(s)\pi(a|s)}\left[Q(s,a) - \beta\log\left(\pi(a|s)/\mu(a|s)\right)\right]. \qquad (13)$$
Interestingly, we note this does not depend on $V(s)$. If $\mu$ is chosen to be the last policy $\pi_k$, the second term becomes the KL-divergence between the current policy and $\pi_k$, performing a trust-region update on $\pi$ (Schulman et al., 2015; Vieillard et al., 2020).⁷ While estimating the log-ratio $\log(\pi(a|s)/\mu(a|s))$ can be difficult depending on the choice of $\mu$, our Gumbel loss $\mathcal{J}$ removes the need for $\mu$ during Q-learning by estimating soft-Q values of the form $Q(s,a) - \beta\log(\pi(a|s)/\mu(a|s))$.

⁶In fact, the theorems of CQL (Kumar et al., 2020) hold for our objective by replacing $D_{CQL}$ with $D_{KL}$.

3.5 PRACTICAL ALGORITHMS

Algorithm 1: Extreme Q-learning (X-QL) (under stochastic dynamics)
1: Initialize $Q_\phi$, $V_\theta$, and $\pi_\psi$
2: Let $\mathcal{D} = \{(s, a, r, s')\}$ be data from $\pi_\mathcal{D}$ (offline) or the replay buffer (online)
3: for step $t$ in $\{1 \dots N\}$ do
4:   Train $Q_\phi$ using $\mathcal{L}(\phi)$ from Eq. 14
5:   Train $V_\theta$ using $\mathcal{J}(\theta)$ from Eq. 11 (with $a \sim \mathcal{D}$ (offline) or $a \sim \pi_\psi$ (online))
6:   Update $\pi_\psi$ via Eq. 12 (offline) or Eq. 13 (online)
7: end for

In this section we develop a practical approach to Extreme Q-learning (X-QL) for both online and offline RL.
We consider parameterized functions $V_\theta(s)$, $Q_\phi(s,a)$, and $\pi_\psi(a|s)$, and let $\mathcal{D}$ be the training data distribution. A core issue with directly optimizing Eq. 10 is over-optimism about the dynamics (Levine, 2018) when using single-sample estimates of the Bellman backup. To overcome this issue in stochastic settings, we separate the optimization of $V_\theta$ from that of $Q_\phi$, following Section 3.3. We learn $V_\theta$ using Eq. 11 to directly fit the optimal soft-values $V^*(s)$ based on Gumbel regression. Using $V_\theta(s')$ we can get single-sample estimates of $\mathcal{B}^*$ as $r(s,a) + \gamma V_\theta(s')$. Now we can learn an unbiased expectation over the dynamics, $Q_\phi \approx \mathbb{E}_{s'|s,a}[r(s,a) + \gamma V_\theta(s')]$, by minimizing the mean-squared-error (MSE) loss between the single-sample targets and $Q_\phi$:
$$\mathcal{L}(\phi) = \mathbb{E}_{(s,a,s')\sim\mathcal{D}}\left[\left(Q_\phi(s,a) - r(s,a) - \gamma V_\theta(s')\right)^2\right]. \qquad (14)$$
Under deterministic dynamics, our approach is largely simplified and we directly learn a single $Q_\phi$ using Eq. 9, without needing to learn $\mathcal{B}^*$ or $V$. Similarly, we learn soft-optimal policies using Eq. 12 in the offline and Eq. 13 in the online setting.

Offline RL. In the offline setting, $\mathcal{D}$ is specified as an offline dataset assumed to be collected with the behavior policy $\pi_\mathcal{D}$. Here, learning values with Eq. 11 has a number of practical benefits. First, we are able to fit the optimal soft-values $V^*$ without sampling from a policy network, which has been shown to cause large out-of-distribution errors in the offline setting, where mistakes cannot be corrected by collecting additional data. Second, we inherently enforce a KL-constraint between the optimal policy $\pi^*$ and the behavior policy $\pi_\mathcal{D}$. This provides tunable conservatism via the temperature $\beta$. After offline training of $Q_\phi$ and $V_\theta$, we can recover the policy post-training using the AWR objective (Eq. 12). Our practical implementation follows the training style of Kostrikov et al. (2021), but we train the value network using our Extreme Q loss.

Online RL. In the online setting, $\mathcal{D}$ is usually given as a replay buffer of previously sampled states and actions. In practice, however, obtaining a good estimate of $V^*(s')$ requires that we sample actions with high Q-values instead of sampling uniformly from $\mathcal{D}$. As online learning allows agents to correct over-optimism by collecting additional data, we use a previous version of the policy network $\pi_\psi$ to sample actions for the Bellman backup, amounting to the trust-region policy updates detailed at the end of Section 3.4. In practice, we modify SAC and TD3 with our formulation. To imbue SAC (Haarnoja et al., 2018) with the benefits of Extreme Q-learning, we simply train $V_\theta$ using Eq. 11 with $s \sim \mathcal{D}$, $a \sim \pi_{\psi_k}(\cdot|s)$. This means that we do not use action probabilities when updating the value networks, unlike other MaxEnt RL approaches. The policy is learned via the objective $\max_\psi \mathbb{E}[Q_\phi(s, \pi_\psi(s))]$ with added entropy regularization, as SAC does not use a fixed noise schedule. TD3 by default does not use a value network, and thus we use our algorithm for deterministic dynamics by changing the loss used to train Q in TD3 to directly follow Eq. 9. The policy is learned as in SAC, except without entropy regularization, as TD3 uses a fixed noise schedule.

⁷Choosing $\mu$ to be uniform $U$ gives the regular SAC update.

4 EXPERIMENTS

We compare our Extreme Q-Learning (X-QL) approach to state-of-the-art algorithms across a wide set of continuous control tasks in both online and offline settings. In practice, the exponential nature of the Gumbel regression poses difficult optimization challenges.
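Before presenting results, here is a condensed sketch of one offline training step of Algorithm 1 (our own paraphrase, not the authors' released code). The modules `q_net`, `v_net`, `pi_net`, their optimizers, and the hyper-parameter values are hypothetical placeholders, and a simple clamp stands in for the max-normalization stabilization described in Appendix D.3; `pi_net(s)` is assumed to return a `torch.distributions` object whose `log_prob` sums over action dimensions.

```python
import torch
import torch.nn.functional as F

def xql_offline_step(batch, q_net, v_net, pi_net, q_opt, v_opt, pi_opt,
                     beta=2.0, gamma=0.99, clip=7.0):
    s, a, r, s_prime, done = batch  # offline tensors sampled from D

    # 1) Value step (Eq. 11): Gumbel regression of V toward the soft max of Q.
    with torch.no_grad():
        q_target = q_net(s, a)
    z = torch.clamp((q_target - v_net(s)) / beta, max=clip)  # clamp for stability
    v_loss = (torch.exp(z) - z - 1.0).mean()
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # 2) Q step (Eq. 14): MSE toward the single-sample backup r + gamma * V(s').
    with torch.no_grad():
        backup = r + gamma * (1.0 - done) * v_net(s_prime)
    q_loss = F.mse_loss(q_net(s, a), backup)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # 3) Policy step (Eq. 12): advantage-weighted regression on dataset actions.
    with torch.no_grad():
        w = torch.exp(torch.clamp((q_net(s, a) - v_net(s)) / beta, max=clip))
    pi_loss = -(w * pi_net(s).log_prob(a)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```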
We provide offline results on Adroit, details of the loss implementation, ablations, and hyper-parameters in Appendix D.

4.1 OFFLINE RL

Table 1: Averaged normalized scores on MuJoCo locomotion and AntMaze tasks. X-QL-C gives results with the same consistent hyper-parameters in each domain, and X-QL-T gives results with per-environment β and hyper-parameter tuning.

| Dataset | BC | 10%BC | DT | AWAC | Onestep RL | TD3+BC | CQL | IQL | X-QL-C | X-QL-T |
|---|---|---|---|---|---|---|---|---|---|---|
| halfcheetah-medium-v2 | 42.6 | 42.5 | 42.6 | 43.5 | 48.4 | 48.3 | 44.0 | 47.4 | 47.7 | 48.3 |
| hopper-medium-v2 | 52.9 | 56.9 | 67.6 | 57.0 | 59.6 | 59.3 | 58.5 | 66.3 | 71.1 | 74.2 |
| walker2d-medium-v2 | 75.3 | 75.0 | 74.0 | 72.4 | 81.8 | 83.7 | 72.5 | 78.3 | 81.5 | 84.2 |
| halfcheetah-medium-replay-v2 | 36.6 | 40.6 | 36.6 | 40.5 | 38.1 | 44.6 | 45.5 | 44.2 | 44.8 | 45.2 |
| hopper-medium-replay-v2 | 18.1 | 75.9 | 82.7 | 37.2 | 97.5 | 60.9 | 95.0 | 94.7 | 97.3 | 100.7 |
| walker2d-medium-replay-v2 | 26.0 | 62.5 | 66.6 | 27.0 | 49.5 | 81.8 | 77.2 | 73.9 | 75.9 | 82.2 |
| halfcheetah-medium-expert-v2 | 55.2 | 92.9 | 86.8 | 42.8 | 93.4 | 90.7 | 91.6 | 86.7 | 89.8 | 94.2 |
| hopper-medium-expert-v2 | 52.5 | 110.9 | 107.6 | 55.8 | 103.3 | 98.0 | 105.4 | 91.5 | 107.1 | 111.2 |
| walker2d-medium-expert-v2 | 107.5 | 109.0 | 108.1 | 74.5 | 113.0 | 110.1 | 108.8 | 109.6 | 110.1 | 112.7 |
| antmaze-umaze-v0 | 54.6 | 62.8 | 59.2 | 56.7 | 64.3 | 78.6 | 74.0 | 87.5 | 87.2 | 93.8 |
| antmaze-umaze-diverse-v0 | 45.6 | 50.2 | 53.0 | 49.3 | 60.7 | 71.4 | 84.0 | 62.2 | 69.17 | 82.0 |
| antmaze-medium-play-v0 | 0.0 | 5.4 | 0.0 | 0.0 | 0.3 | 10.6 | 61.2 | 71.2 | 73.5 | 76.0 |
| antmaze-medium-diverse-v0 | 0.0 | 9.8 | 0.0 | 0.7 | 0.0 | 3.0 | 53.7 | 70.0 | 67.8 | 73.6 |
| antmaze-large-play-v0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 | 15.8 | 39.6 | 41 | 46.5 |
| antmaze-large-diverse-v0 | 0.0 | 6.0 | 0.0 | 1.0 | 0.0 | 0.0 | 14.9 | 47.5 | 47.3 | 49.0 |
| kitchen-complete-v0 | 65.0 | - | - | - | - | - | 43.8 | 62.5 | 72.5 | 82.4 |
| kitchen-partial-v0 | 38.0 | - | - | - | - | - | 49.8 | 46.3 | 73.8 | 73.7 |
| kitchen-mixed-v0 | 51.5 | - | - | - | - | - | 51.0 | 51.0 | 54.6 | 62.5 |
| runtime | 10m | 10m | 960m | 20m | 20m | 20m | 80m | 20m | 10-20m | 10-20m |

We see very fast convergence for our method on some tasks, saturating performance in half the iterations of IQL. Our offline results with fixed hyper-parameters for each domain outperform prior methods (Chen et al., 2021; Kumar et al., 2019; 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021) in several environments, reaching state-of-the-art on the Franka Kitchen tasks, as shown in Table 1. We find performance on the Gym locomotion tasks to be already largely saturated without introducing ensembles (An et al., 2021), but our method achieves consistently high performance across environments. While we attain good performance using fixed hyper-parameters per domain, X-QL achieves even higher absolute performance and faster convergence than IQL's reported results when hyper-parameters are tuned per environment. With additional tuning, we also see particularly large improvements on the AntMaze tasks, which require a significant amount of stitching between trajectories (Kostrikov et al., 2021). Full learning curves are in the Appendix. Like IQL, X-QL can be easily fine-tuned using online data to attain even higher performance, as shown in Table 2.

4.2 ONLINE RL

Table 2: Finetuning results on the AntMaze environments (scores before → after online fine-tuning).

| Dataset | CQL | IQL | X-QL-T |
|---|---|---|---|
| umaze-v0 | 70.1 → 99.4 | 86.7 → 96.0 | 93.8 → 99.6 |
| umaze-diverse-v0 | 31.1 → 99.4 | 75.0 → 84.0 | 82.0 → 99.0 |
| medium-play-v0 | 23.0 → 0.0 | 72.0 → 95.0 | 76.0 → 97.0 |
| medium-diverse-v0 | 23.0 → 32.3 | 68.3 → 92.0 | 73.6 → 97.1 |
| large-play-v0 | 1.0 → 0.0 | 25.5 → 46.0 | 45.1 → 59.3 |
| large-diverse-v0 | 1.0 → 0.0 | 42.6 → 60.7 | 49.0 → 82.1 |

We compare Extreme Q variants of SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), denoted X-SAC and X-TD3, to their vanilla versions on tasks in DM Control, shown in Figure 3. Across all tasks, an Extreme Q variant matches or surpasses the performance of the baselines.
We see particularly large gains in the Hopper environment, and more significant gains in comparison to TD3 overall. Consistent with SAC (Haarnoja et al., 2018), we find that the temperature β needs to be tuned for different environments with different reward scales and sparsity. A core component of TD3 introduced by Fujimoto et al. (2018) is Double Q-Learning, which takes the minimum of two Q-functions to remove overestimation bias in the Q-target. As we assume errors to be Gumbel distributed, we expect our X-variants to be more robust to such errors. In all environments except Cheetah-Run, our X-TD3 without the Double-Q trick, denoted X-TD3 - DQ, performs better than standard TD3. While the gains from Extreme Q-learning are modest in online settings, none of our methods require access to the policy distribution to learn the Q-values.

[Figure 3: Results on DM Control for SAC- and TD3-based versions of Extreme Q-Learning (legend: TD3, X-TD3, TD3 - DQ, X-TD3 - DQ; x-axis: environment steps, up to 2×10⁶), on tasks including Quadruped-Run and Cheetah-Run.]

5 RELATED WORK

Our approach builds on works in online and offline RL. Here we review the most salient ones. Inspiration for our framework comes from econometrics (Rust, 1986; McFadden, 1972), and our Gumbel loss is motivated by IQ-Learn (Garg et al., 2021).

Online RL. Our work bridges the theoretical gap between RL and MaxEnt RL by introducing our Gumbel loss function. Unlike past work in MaxEnt RL (Haarnoja et al., 2018; Eysenbach & Levine, 2020), our method does not require explicit entropy estimation and instead addresses the problem of obtaining soft-value estimates (LogSumExp) in high-dimensional or continuous spaces (Vieillard et al., 2021) by directly modeling them via our proposed Gumbel loss, which to our knowledge has not previously been used in RL. Our loss objective is intrinsically linked to the KL divergence, and similar objectives have been used for mutual information estimation (Poole et al., 2019) and statistical learning (Parsian & Kirmani, 2002; Atiyah et al., 2020). IQ-Learn (Garg et al., 2021), which proposes learning Q-functions to solve imitation learning, introduced the same loss in IL to obtain an unbiased dual form for the reverse KL-divergence between an expert and a policy distribution. Other works have also used the forward KL-divergence to derive policy objectives (Peng et al., 2019) or for regularization (Schulman et al., 2015; Abdolmaleki et al., 2018). Prior work in RL has also examined using other types of loss functions (Bas-Serrano et al., 2021) or other formulations of the argmax in order to ease optimization (Asadi & Littman, 2017). Distinct from most off-policy RL methods (Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018), we directly model $\mathcal{B}^*$ like Haarnoja et al. (2017) and Heess et al. (2015), but attain significantly more stable results.

Offline RL. Prior works in offline RL can largely be categorized as relying on constrained or regularized Q-learning (Wu et al., 2019; Fujimoto & Gu, 2021; Fujimoto et al., 2019; Kumar et al., 2019; 2020; Nair et al., 2020), or as extracting a greedy policy from the known behavior policy (Peng et al., 2019; Brandfonbrener et al., 2021; Chen et al., 2021). Most similar to our work, IQL (Kostrikov et al., 2021) fits expectiles of the Q-function of the behavior policy, but is not motivated to solve a particular problem or remain conservative.
On the other hand, conservatism in CQL (Kumar et al., 2020) is motivated by lower-bounding the Q-function. Our method shares the best of both worlds: like IQL, we do not evaluate the Q-function on out-of-distribution actions, and like CQL, we enjoy the benefits of conservatism. Compared to CQL, our approach uses a KL constraint with the behavior policy, and for the first time extends soft-Q learning to offline RL without needing a policy or explicit entropy values. Our choice of the reverse KL divergence for offline RL follows closely with BRAC (Wu et al., 2019), but avoids learning a policy during training.

6 CONCLUSION

We propose Extreme Q-Learning, a new framework for MaxEnt RL that directly estimates the optimal Bellman backup $\mathcal{B}^*$ without relying on explicit access to a policy. Theoretically, we bridge the gap between the regular, soft, and conservative Q-learning formulations. Empirically, we show that our framework can be used to develop simple, performant RL algorithms. A number of future directions remain, such as improving the stability of training with the exponential Gumbel loss function and integrating automatic tuning methods for the temperature β, as in SAC (Haarnoja et al., 2018). Finally, we hope that our framework can find general use in machine learning for estimating log-partition functions.

ACKNOWLEDGEMENTS

Div derived the theory for the Extreme Q-learning and Gumbel regression framework and ran the tuned offline RL experiments. Joey ran the consistent offline experiments and the online experiments. Both authors contributed equally to paper writing. We thank John Schulman and Bo Dai for helpful discussions. Our research was supported by NSF (1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), ONR, CZ Biohub, and a Sloan Fellowship. Joey was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program.

REFERENCES

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. In International Conference on Learning Representations, 2018.

A. Ahmadi-Javid. Entropic value-at-risk: A new coherent risk measure. Journal of Optimization Theory and Applications, 155(3):1105-1123, 2012. URL https://EconPapers.repec.org/RePEc:spr:joptap:v:155:y:2012:i:3:d:10.1007_s10957-011-9968-2.

Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. In Neural Information Processing Systems, 2021.

Kavosh Asadi and Michael L. Littman. An alternative softmax operator for reinforcement learning. In International Conference on Machine Learning, pp. 243-252. PMLR, 2017.

Israa Abdzaid Atiyah, Adel Mohammadpour, Narges Ahmadzadehgoli, and S. Mahmoud Taheri. Fuzzy c-means clustering using asymmetric loss function. Journal of Statistical Theory and Applications, 19(1):91-101, 2020.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Joan Bas-Serrano, Sebastian Curi, Andreas Krause, and Gergely Neu. Logistic q-learning. In International Conference on Artificial Intelligence and Statistics, pp. 3610-3618. PMLR, 2021.

M. Bloem and N. Bambos. Infinite time horizon maximum causal entropy inverse reinforcement learning. In 53rd IEEE Conference on Decision and Control, pp. 4911-4916, 2014.
David Brandfonbrener, Will Whitney, Rajesh Ranganath, and Joan Bruna. Offline RL without off-policy evaluation. Advances in Neural Information Processing Systems, 34:4933-4946, 2021.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in Neural Information Processing Systems, 34, 2021.

Benjamin Eysenbach and Sergey Levine. If MaxEnt RL is the answer, what is the question?, 2020. URL https://openreview.net/forum?id=SkxcZCNKDS.

R. A. Fisher and L. H. C. Tippett. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Mathematical Proceedings of the Cambridge Philosophical Society, 24(2):180-190, 1928. doi: 10.1017/S0305004100015681.

Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34, 2021.

Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062. PMLR, 2019.

Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. IQ-Learn: Inverse soft-Q learning for imitation. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=Aeo-xqtb5p.

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. 2017.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861-1870. PMLR, 2018.

Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a-posteriori perturbations. arXiv preprint arXiv:1206.6410, 2012.

Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. Advances in Neural Information Processing Systems, 28, 2015.

Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169, 2021.

Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019.

Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.

Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.

Qing Li. Continuous control benchmark of DeepMind Control Suite and MuJoCo. https://github.com/LQNew/Continuous_Control_Benchmark, 2021.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

R. Duncan Luce. The choice axiom after twenty years. Journal of Mathematical Psychology, 15(3):215-233, 1977. ISSN 0022-2496. doi: 10.1016/0022-2496(77)90032-3. URL https://www.sciencedirect.com/science/article/pii/0022249677900323.
Daniel McFadden. Conditional logit analysis of qualitative choice behavior. 1972.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Alexander McFarlane Mood. Introduction to the Theory of Statistics. 1950.

Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. AWAC: Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020.

Gergely Neu, Anders Jonsson, and V. Gómez. A unified view of entropy-regularized Markov decision processes. arXiv preprint arXiv:1705.07798, 2017.

George Papandreou and Alan L. Yuille. Gaussian sampling by local perturbations. Advances in Neural Information Processing Systems, 23, 2010.

Ahmad Parsian and S. N. U. A. Kirmani. Estimation under LINEX loss function. In Handbook of Applied Econometrics and Statistical Inference, pp. 75-98. CRC Press, 2002.

Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

Ben Poole, Sherjil Ozair, Aäron van den Oord, Alexander A. Alemi, and G. Tucker. On variational bounds of mutual information. In ICML, 2019.

John Rust. Structural estimation of Markov decision processes. In R. F. Engle and D. McFadden (eds.), Handbook of Econometrics, volume 4, chapter 51, pp. 3081-3143. Elsevier, 1 edition, 1986. URL https://editorialexpress.com/jrust/papers/handbook_ec_v4_rust.pdf.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889-1897. PMLR, 2015.

Slavko Simić. On a new converse of Jensen's inequality. Publications de l'Institut Mathématique, 85:107-110, 2009. doi: 10.2298/PIM0999107S.

Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.

Sebastian Thrun and Anton Schwartz. Issues in using function approximation for reinforcement learning. 1999.

Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, Rémi Munos, and Matthieu Geist. Leverage the average: an analysis of KL regularization in RL. In 34th Conference on Neural Information Processing Systems, 2020.

Nino Vieillard, Marcin Andrychowicz, Anton Raichuk, Olivier Pietquin, and Matthieu Geist. Implicitly regularized RL with implicit q-values. arXiv preprint arXiv:2108.07041, 2021.

Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.

Denis Yarats and Ilya Kostrikov. Soft actor-critic (SAC) implementation in PyTorch. https://github.com/denisyarats/pytorch_sac, 2020.

G. Alastair Young. High-dimensional statistics: A non-asymptotic viewpoint, Martin J. Wainwright, Cambridge University Press, 2019, xvii + 552 pages, 57.99, hardback ISBN: 978-1-1084-9802-9. International Statistical Review, 88(1):258-261, 2020. doi: 10.1111/insr.12370. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/insr.12370.
Brian D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University, 2010.

A THE GUMBEL ERROR MODEL FOR MDPS

In this section, we functionally analyze Q-learning using our framework and further develop the Gumbel Error Model (GEM) for MDPs.

A.1 RUST-MCFADDEN MODEL OF MDPS

For an MDP following the Bellman equations, we assume the observed rewards to be stochastic due to an unobserved component of the state. Let $s$ be the observed state, and $(s, z)$ be the actual state with hidden component $z$. Then,
$$Q(s,z,a) = R(s,z,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}[V(s',z')]\right], \qquad (15)$$
$$V(s,z) = \max_a Q(s,z,a). \qquad (16)$$

Lemma A.1. Assume 1) the conditional independence (CI) assumption that $z'$ depends only on $s'$, i.e. $p(s',z'|s,z,a) = p(z'|s')\,p(s'|s,a)$, and 2) the additive separability (AS) assumption on the hidden noise: $R(s,a,z) = r(s,a) + \epsilon(z,a)$. Then for i.i.d. $\epsilon(z,a) \sim \mathcal{G}(0,\beta)$, we recover the soft-Bellman equations for $Q(s,z,a) = q(s,a) + \epsilon(z,a)$ and $v(s) = \mathbb{E}_z[V(s,z)]$, with rewards $r(s,a)$ and entropy regularization $\beta$.

Hence, a soft-MDP in MaxEnt RL is equivalent to an MDP with an extra hidden variable in the state that introduces i.i.d. Gumbel noise in the rewards and follows the AS + CI conditions.

Proof. We have
$$q(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}[V(s',z')]\right], \qquad (17)$$
$$v(s) = \mathbb{E}_z[V(s,z)] = \mathbb{E}_z\left[\max_a\left(q(s,a) + \epsilon(z,a)\right)\right]. \qquad (18)$$
From this, we can get fixed-point equations for $q$ and $\pi$:
$$q(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}\left[\max_{a'}\left(q(s',a') + \epsilon(z',a')\right)\right]\right], \qquad (19)$$
$$\pi(\cdot|s) = \mathbb{E}_z\left[\arg\max_a\left(q(s,a) + \epsilon(z,a)\right)\right] \in \Delta_\mathcal{A}, \qquad (20)$$
where $\Delta_\mathcal{A}$ is the set of all policies. Now, let $\epsilon(z,a) \sim \mathcal{G}(0,\beta)$, assumed independent for each $(z,a)$ (or equivalently $(s,a)$ due to the CI condition). Then we can use the Gumbel-Max trick to recover the soft-Bellman equations for $q(s,a)$ and $v(s)$ with rewards $r(s,a)$:
$$q(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathcal{L}^\beta_{a'}[q(s',a')]\right], \qquad (21)$$
$$\pi(\cdot|s) = \operatorname{softmax}_a\left(q(s,a)/\beta\right). \qquad (22)$$
Thus, the soft-Bellman optimality equation and the related optimal policy can arise either from the entropic-regularization viewpoint or from the Gumbel-error viewpoint for an MDP.

Corollary A.1.1. Converse: An MDP following the Bellman optimality equation and having a policy that is softmax distributed necessarily has any i.i.d. noise in the rewards due to hidden state variables be Gumbel distributed, given that the AS + CI conditions hold.

Proof. McFadden (McFadden, 1972) proved this converse in his seminal work on discrete choice theory: for i.i.d. $\epsilon$ satisfying Equation 19, a choice policy $\pi \sim \mathrm{softmax}$ requires $\epsilon$ to be Gumbel distributed. We show a proof here similar to the original, adapted to MDPs. Considering Equation 20, we want $\pi(a|s)$ to be softmax distributed. Let $\epsilon$ have an unknown CDF $F$, and consider there to be $N$ possible actions. Then,
$$P\left(\arg\max_a\left(q(s,a) + \epsilon(z,a)\right) = a_i \,\middle|\, s,z\right) = P\left(q(s,a_i) + \epsilon(z,a_i) \ge q(s,a_j) + \epsilon(z,a_j)\ \forall j \neq i \,\middle|\, s,z\right) = P\left(\epsilon(z,a_j) \le \epsilon(z,a_i) + q(s,a_i) - q(s,a_j)\ \forall j \neq i \,\middle|\, s,z\right).$$
Simplifying the notation, we write $\epsilon(z,a_i) = \epsilon_i$ and $q(s,a_i) = q_i$. Then $\epsilon_1, \dots, \epsilon_N$ has a joint CDF $G$:
$$G(\epsilon_1, \dots, \epsilon_N) = \prod_{j=1}^N P\left(\epsilon_j \le \epsilon_i + q_i - q_j\right) = \prod_{j=1}^N F\left(\epsilon_i + q_i - q_j\right),$$
and we can get the required probability $\pi(i)$ as:
$$\pi(i) = \int \prod_{j=1,\, j\neq i}^{N} F\left(\varepsilon + q_i - q_j\right)\, dF(\varepsilon). \qquad (23)$$
For $\pi = \mathrm{softmax}(q)$, McFadden (McFadden, 1972) proved the uniqueness of $F$ to be the Gumbel CDF, assuming a translation-completeness property to hold for $F$. Later this uniqueness was shown to hold in general for any $N \ge 3$ (Luce, 1977).
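As a quick numerical sanity check of these softmax choice probabilities (our own illustration, not from the paper; the utilities and β below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5
q = np.array([1.0, 0.3, -0.5, 0.1])          # utilities / Q-values for 4 actions

# Empirical choice frequencies when i.i.d. Gumbel(0, beta) noise is added.
noise = rng.gumbel(loc=0.0, scale=beta, size=(200_000, q.size))
choices = np.argmax(q + noise, axis=1)
empirical = np.bincount(choices, minlength=q.size) / choices.size

softmax = np.exp(q / beta) / np.exp(q / beta).sum()
print(np.round(empirical, 3), np.round(softmax, 3))  # should agree up to sampling error
```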
A.2 GUMBEL ERROR MODEL (GEM) FOR MDPS

To develop our Gumbel Error Model (GEM) for MDPs under function approximation as in Section 3.1, we follow our simplified scheme of $M$ independent estimators $\hat{Q}$, which results in the following equation over $\bar{Q} = \mathbb{E}[\hat{Q}]$:
$$\bar{Q}_{t+1}(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'|s,a}\left[\mathbb{E}_{\epsilon_t}\left[\max_{a'}\left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right)\right]\right]. \qquad (24)$$
Here, the maximum of random variables will generally be greater than the true max, i.e. $\mathbb{E}_\epsilon\left[\max_{a'}\left(\bar{Q}(s',a') + \epsilon(s',a')\right)\right] \ge \max_{a'}\bar{Q}(s',a')$ (Thrun & Schwartz, 1999). As a result, even an initially zero-mean error can cause Q-updates to propagate consistent overestimation bias through the Bellman equation. This is a known issue with function approximation in RL (Fujimoto et al., 2018).

Now, we can use the Rust–McFadden model from before. To account for the stochasticity, we consider the extra unobserved state variables $z$ in the MDP to be the model parameters $\theta$ used in the function approximation. The errors $\epsilon_t$ from function approximation can thus be considered as noise added to the reward. Here, the CI condition holds, as $\epsilon$ is separate from the dynamics and becomes conditionally independent for each state-action pair, and the AS condition is implied. Then, for $\bar{Q}$ satisfying Equation 24, we can apply the McFadden–Rust model, which implies that for the policy to be soft-optimal, i.e. a softmax over $\bar{Q}$, $\epsilon$ must be Gumbel distributed. Conversely, for i.i.d. $\epsilon \sim \mathcal{G}$, $\bar{Q}(s,a)$ follows the soft-Bellman equations and $\pi(a|s) = \mathrm{softmax}(\bar{Q}(s,a))$. This indicates an optimality condition on the MDP: for us to eventually attain the optimal softmax policy in the presence of functional bootstrapping (Equation 24), the errors should follow the Gumbel distribution.

A.2.1 TIME EVOLUTION OF ERRORS IN MDPS UNDER DETERMINISTIC DYNAMICS

In this section, we characterize the time evolution of errors in an MDP using GEM. We assume deterministic dynamics to simplify our analysis. We suppose that we know the distribution of Q-values at time $t$ and model the evolution of this distribution through the Bellman equations. Let $Z_t(s,a)$ be a random variable sampled from the distribution of Q-values at time $t$; then the following Bellman equation holds:
$$Z_{t+1}(s,a) = r(s,a) + \gamma\max_{a'} Z_t(s',a'). \qquad (25)$$
Here, $Z_{t+1}(s,a) = \max_{a'}\left[r(s,a) + \gamma Z_t(s',a')\right]$ is a maximal distribution and, based on EVT, should eventually converge to an extreme value distribution, which we can model as a Gumbel. Concretely, let us assume that we fix $Z_t(s,a) \sim \mathcal{G}(Q_t(s,a), \beta)$ for some $Q_t(s,a) \in \mathbb{R}$ and $\beta > 0$. Furthermore, we assume that the Q-value distribution is jointly independent over different state-actions, i.e. $Z(s,a)$ is independent from $Z(s',a')$ for $(s,a) \neq (s',a')$. Then $\max_{a'} Z_t(s',a') \sim \mathcal{G}(V(s'), \beta)$ with $V(s) = \mathcal{L}^\beta_a[Q(s,a)]$, using the Gumbel-Max trick. Substituting into Equation 25 and rescaling $Z_t$ with $\gamma$, we get:
$$Z_{t+1}(s,a) \sim \mathcal{G}\left(r(s,a) + \gamma\,\mathcal{L}^\beta_{a'}\left[Q_t(s',a')\right],\ \gamma\beta\right). \qquad (26)$$
So, very interestingly, the Q-distribution becomes a Gumbel process, where the location parameter $Q(s,a)$ follows the optimal soft-Bellman equation. Similarly, the temperature scales as $\gamma\beta$, and the distribution becomes sharper after every timestep. After a number of timesteps, we see that $Z(s,a)$ eventually collapses to the Delta distribution over the unique contraction $Q^*(s,a)$. Here, $\gamma$ controls the rate of decay of the Gumbel distribution into the collapsed Delta distribution. Thus we get the expected result in deterministic dynamics that the optimal Q-function will be deterministic and its distribution will be peaked.
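The γ-contraction of the noise scale described above is easy to see numerically. Below is a toy simulation (our own, not from the paper) of the recursion in Eq. 25 for a single self-transitioning state with three actions; only the Gumbel scale is compared against the predicted $\gamma^t\beta$, since the samples become correlated across actions after the first step and the location therefore does not follow the independent-noise analysis exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, beta0 = 0.9, 1.0
r = np.array([0.0, 0.5, 1.0])             # rewards for 3 actions, single state
n_samples = 100_000

# Start with Q-value samples Z_0(a) ~ Gumbel(0, beta0).
Z = rng.gumbel(0.0, beta0, size=(n_samples, r.size))
for t in range(1, 6):
    # Single-state deterministic dynamics: Z_{t+1}(a) = r(a) + gamma * max_a' Z_t(a').
    Z = r[None, :] + gamma * Z.max(axis=1, keepdims=True)
    scale = Z.std(axis=0).mean() * np.sqrt(6) / np.pi   # Gumbel scale recovered from std
    print(f"t={t}: empirical scale ~ {scale:.3f}, predicted gamma^t * beta0 = {gamma**t * beta0:.3f}")
```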
If a Gumbel error enters the MDP through a functional error or some other source at a timestep $t$ in some state $s$, it will trigger a wave that propagates the Gumbel error into its child states following Equation 26. Thus, this Gumbel error process will decay at a rate $\gamma$ every timestep and eventually settle down with the Q-values reaching the steady solution $Q^*$. The variance of this Gumbel process, given as $\frac{\pi^2}{6}\beta^2$, will decay as $\gamma^2$; similarly, the bias will decay as a $\gamma$-contraction in the $L^\infty$ norm. Hence, GEM gives us an analytic characterization of error propagation in MDPs under deterministic dynamics. Nevertheless, under stochastic dynamics, characterization of errors using GEM becomes non-trivial, as the Gumbel is not mean-stable, unlike the Gaussian distribution. We hypothesise that the errors will follow some mix of Gumbel-Gaussian distributions, and leave this characterization as a future open direction.

B GUMBEL REGRESSION

We characterize the concentration bounds for Gumbel regression in this section. First, we bound the bias of applying $\mathcal{L}^\beta$ to inputs containing errors. Second, we bound the PAC learning error due to an empirical $\hat{\mathcal{L}}^\beta$ over finite $N$ samples.

B.1 OVERESTIMATION BIAS

Let $\hat{Q}(s,a)$ be a random variable representing a Q-value estimate for a state-action pair $(s,a)$. We assume that it is an unbiased estimate of the true Q-value $Q(s,a)$, with $\mathbb{E}[\hat{Q}(s,a)] = Q(s,a)$. Let $Q(s,a) \in [-Q_{\max}, Q_{\max}]$. Then $V(s) = \mathcal{L}^\beta_{a\sim\mu}[Q(s,a)]$ is the true value function, and $\hat{V}(s) = \mathcal{L}^\beta_{a\sim\mu}[\hat{Q}(s,a)]$ is its estimate.

Lemma B.1. We have $V(s) \le \mathbb{E}[\hat{V}(s)] \le \mathbb{E}_{a\sim\mu}[Q(s,a)] + \beta\log\cosh(Q_{\max}/\beta)$.

Proof. The lower bound $V(s) \le \mathbb{E}[\hat{V}(s)]$ is easy to show using Jensen's inequality, as log-sum-exp is a convex function. For the upper bound, we can use a reverse Jensen's inequality (Simić, 2009): for any convex mapping $f$ on the interval $[a,b]$ it holds that
$$\sum_i p_i f(x_i) - f\Big(\sum_i p_i x_i\Big) \le f(a) + f(b) - f\Big(\frac{a+b}{2}\Big).$$
Setting $f = -\log(\cdot)$ and $x_i = e^{\hat{Q}(s,a)/\beta}$, we get:
$$\mathbb{E}_{a\sim\mu}\left[-\log\left(e^{\hat{Q}(s,a)/\beta}\right)\right] + \log\left(\mathbb{E}_{a\sim\mu}\left[e^{\hat{Q}(s,a)/\beta}\right]\right) \le -\log\left(e^{-Q_{\max}/\beta}\right) - \log\left(e^{Q_{\max}/\beta}\right) + \log\left(\frac{e^{Q_{\max}/\beta} + e^{-Q_{\max}/\beta}}{2}\right).$$
On simplifying,
$$\hat{V}(s) = \beta\log\left(\mathbb{E}_{a\sim\mu}\left[e^{\hat{Q}(s,a)/\beta}\right]\right) \le \mathbb{E}_{a\sim\mu}[\hat{Q}(s,a)] + \beta\log\cosh(Q_{\max}/\beta).$$
Taking expectations on both sides,
$$\mathbb{E}[\hat{V}(s)] \le \mathbb{E}_{a\sim\mu}[Q(s,a)] + \beta\log\cosh(Q_{\max}/\beta).$$
This gives an estimate of how much the LogSumExp overestimates compared to taking the expectation over actions for random variables $\hat{Q}$. This bias monotonically decreases with $\beta$: $\beta = 0$ has a max bias of $Q_{\max}$, and for large $\beta$ the bias decays as $\frac{1}{2\beta}Q^2_{\max}$.

B.2 PAC LEARNING BOUNDS FOR GUMBEL REGRESSION

Lemma B.2. $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$ over a finite $N$ samples is an unbiased estimator for the partition function $Z^\beta = \mathbb{E}\left[e^{X/\beta}\right]$, and with probability at least $1-\delta$ it holds that:
$$\exp(\hat{\mathcal{L}}^\beta(X)/\beta) \le Z^\beta + \sinh(X_{\max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Similarly, $\hat{\mathcal{L}}^\beta(X)$ over a finite $N$ samples is a consistent estimator of $\mathcal{L}^\beta(X)$, and with probability at least $1-\delta$ it holds that:
$$\hat{\mathcal{L}}^\beta(X) \le \mathcal{L}^\beta(X) + \beta\sinh(X_{\max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$

Proof. To prove these concentration bounds, we consider random variables $e^{X_1/\beta}, \dots, e^{X_n/\beta}$ with $\beta > 0$, such that $a_i \le X_i \le b_i$ almost surely, i.e. $e^{a_i/\beta} \le e^{X_i/\beta} \le e^{b_i/\beta}$. We consider the sum $S_n = \sum_{i=1}^N e^{X_i/\beta}$ and use Hoeffding's inequality, so that for all $t > 0$:
$$P\left(S_n - \mathbb{E}S_n \ge t\right) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^n \left(e^{b_i/\beta} - e^{a_i/\beta}\right)^2}\right).$$
To simplify, we let $a_i = -X_{\max}$ and $b_i = X_{\max}$ for all $i$. We also rescale $t$ as $t = Ns$, for $s > 0$. Then
$$P\left(S_n - \mathbb{E}S_n \ge Ns\right) \le \exp\left(-\frac{Ns^2}{2\sinh^2(X_{\max}/\beta)}\right).$$
We can notice that the L.H.S. is the same as $P\left(\exp(\hat{\mathcal{L}}^\beta(X)/\beta) - \exp(\mathcal{L}^\beta(X)/\beta) \ge s\right)$, which is the required probability.
Letting the R.H.S. have a value of $\delta$, we get $s = \sinh(X_{\max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}$. Thus, with probability $1-\delta$ it holds that:
$$\exp(\hat{\mathcal{L}}^\beta(X)/\beta) \le \exp(\mathcal{L}^\beta(X)/\beta) + \sinh(X_{\max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Thus, we get a concentration bound on $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$, which is an unbiased estimator of the partition function $Z^\beta = \exp(\mathcal{L}^\beta(X)/\beta)$. This bound becomes tighter with increasing $\beta$, and asymptotically behaves as $\frac{X_{\max}}{\beta}\sqrt{\frac{2\log(1/\delta)}{N}}$.

Similarly, to prove the bound on the log-partition function $\hat{\mathcal{L}}^\beta(X)$, we can further take $\log(\cdot)$ on both sides and use the inequality $\log(1+x) \le x$ to get a direct concentration bound on $\hat{\mathcal{L}}^\beta(X)$:
$$\hat{\mathcal{L}}^\beta(X) \le \mathcal{L}^\beta(X) + \beta\log\left(1 + \sinh(X_{\max}/\beta)\,e^{-\mathcal{L}^\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}}\right) \le \mathcal{L}^\beta(X) + \beta\sinh(X_{\max}/\beta)\,e^{-\mathcal{L}^\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}}.$$
This bound also becomes tighter with increasing $\beta$, and asymptotically behaves as $X_{\max}\sqrt{\frac{2\log(1/\delta)}{N}}$.

C EXTREME Q-LEARNING

In this section we provide additional theoretical details of our algorithm, X-QL, and its connection to conservatism in CQL (Kumar et al., 2020). For the soft-Bellman equation given as
$$Q(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\left[V(s')\right], \qquad (33)$$
$$V(s) = \mathcal{L}^\beta_{\mu(\cdot|s)}\left[Q(s,a)\right], \qquad (34)$$
we have the fixed-point characterization, which can be found with a recurrence:
$$V(s) = \mathcal{L}^\beta_{\mu(\cdot|s)}\left[r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}[V(s')]\right]. \qquad (35)$$
In the main paper we discuss the case of X-QL under stochastic dynamics, which requires the estimation of $\mathcal{B}^*$. Under deterministic dynamics, however, this can be avoided, as we do not need to account for an expectation over the next states. This simplifies the Bellman equations. We develop two simple algorithms for this case without needing $\mathcal{B}^*$.

Value Iteration. We can write the value-iteration objective as:
$$Q(s,a) \leftarrow r(s,a) + \gamma V_\theta(s'), \qquad (36)$$
$$\mathcal{J}(\theta) = \mathbb{E}_{s\sim\rho_\mu,\, a\sim\mu(\cdot|s)}\left[e^{(Q(s,a) - V_\theta(s))/\beta} - (Q(s,a) - V_\theta(s))/\beta - 1\right]. \qquad (37)$$
Here, we learn a single model of the values $V_\theta(s)$ to directly solve Equation 35. For the current value estimate $V_\theta(s)$, we calculate targets $r(s,a) + \gamma V_\theta(s')$ and find a new estimate $V'_\theta(s)$ by fitting $\mathcal{L}^\beta_\mu$ with our objective $\mathcal{J}$. Using our Gumbel regression framework, we can guarantee that, as $\mathcal{J}$ finds a consistent estimate of $\mathcal{L}^\beta_\mu$, $V_\theta(s)$ will converge to the optimal $V^*(s)$ up to some sampling error.

Q-Iteration. Alternatively, we can develop a Q-iteration objective solving the recurrence:
$$Q_{t+1}(s,a) = r(s,a) + \gamma\,\mathcal{L}^\beta_{a'\sim\mu}\left[Q_t(s',a')\right] \qquad (38)$$
$$= r(s,a) + \mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[\gamma Q_t(s',a')\right] \qquad (39)$$
$$= \mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[r(s,a) + \gamma Q_t(s',a')\right], \qquad (40)$$
where we can rescale $\beta$ to $\gamma\beta$ to move $\mathcal{L}$ out. This gives the objective:
$$Q_t(s,a) \leftarrow r(s,a) + \gamma Q_\theta(s',a'), \qquad (41)$$
$$\mathcal{J}(Q_\theta) = \mathbb{E}_{\mu(s,a,s')}\left[e^{(Q_t(s,a) - Q_\theta(s,a))/\gamma\beta} - (Q_t(s,a) - Q_\theta(s,a))/\gamma\beta - 1\right]. \qquad (42)$$
Thus, this gives a method to directly estimate $Q_\theta$ without learning values, and forms our X-TD3 method in the main paper. Note that $\beta$ is a hyper-parameter, so we can use an alternative hyper-parameter $\beta' = \gamma\beta$ to simplify the above. We can formalize this as a lemma in the deterministic case:

Lemma C.1. Let
$$\mathcal{J}(\mathcal{T}_\mu Q - Q') = \mathbb{E}_{s,a,s',a'\sim\mu}\left[e^{(\mathcal{T}_\mu Q(s,a) - Q'(s,a))/\gamma\beta} - (\mathcal{T}_\mu Q(s,a) - Q'(s,a))/\gamma\beta - 1\right],$$
where $\mathcal{T}_\mu$ is a linear operator that maps $Q$ from the current $(s,a)$ to the next $(s',a')$:
$$\mathcal{T}_\mu Q(s,a) := r(s,a) + \gamma Q(s',a').$$
Then we have $\mathcal{B}^*Q_t = \arg\min_{Q'\in\Omega}\mathcal{J}(\mathcal{T}_\mu Q_t - Q')$, where $\Omega$ is the space of Q-functions.

Proof. We use that under deterministic dynamics,
$$\mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[\mathcal{T}_\mu Q(s,a)\right] = r(s,a) + \gamma\,\mathcal{L}^\beta_{a'\sim\mu}\left[Q(s',a')\right] = \mathcal{B}^*Q(s,a).$$
Then solving for the unique minimum of $\mathcal{J}$ establishes the above result. Thus, optimizing $\mathcal{J}$ with a fixed point is equivalent to Q-iteration with the Bellman operator.
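To make the value-iteration variant above concrete, here is a small tabular sketch (our own illustration, not code from the paper). It builds a random deterministic MDP, computes the soft-optimal values by exact soft value iteration, and then recovers them by repeatedly refreshing the targets of Eq. 36 and taking a gradient step on the loss of Eq. 37 with a uniform µ and exact expectations over actions; the MDP sizes, learning rate, and iteration counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, beta = 5, 3, 0.8, 1.0
next_s = rng.integers(S, size=(S, A))     # deterministic transition table s' = next_s[s, a]
r = rng.normal(size=(S, A))               # rewards r(s, a)

# Reference: exact soft value iteration, V(s) = L^beta_{a~U}[r(s,a) + gamma V(s')].
V_ref = np.zeros(S)
for _ in range(1000):
    V_ref = beta * np.log(np.mean(np.exp((r + gamma * V_ref[next_s]) / beta), axis=1))

# Gumbel-regression value iteration: refresh targets (Eq. 36), then take one
# gradient step on the loss of Eq. 37 with exact expectations over a uniform mu.
V = np.zeros(S)
for _ in range(2000):
    target = r + gamma * V[next_s]                     # (S, A) targets r + gamma V(s')
    z = (target - V[:, None]) / beta
    grad = np.mean(1.0 - np.exp(z), axis=1) / beta     # dJ/dV(s)
    V -= 0.25 * grad
print(np.max(np.abs(V - V_ref)))                       # should be close to zero
```

The same loop run on sampled $(s, a, s')$ tuples instead of exact expectations corresponds to the SGD setting discussed in Section 3.2.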
C.2 BRIDGING SOFT AND CONSERVATIVE Q-LEARNING

Inherent Conservatism in X-QL. Our method is inherently conservative, similar to CQL (Kumar et al., 2020), in that it underestimates the (vanilla Q-learning) value function $V^\pi(s)$ by $\beta\, \mathbb{E}_{a \sim \pi(a|s)}\big[\log \frac{\pi(a|s)}{\pi_D(a|s)}\big]$, whereas CQL underestimates values by a factor $\beta\, \mathbb{E}_{a \sim \pi(a|s)}\big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\big]$, where $\pi_D$ is the behavior policy. Notice that this underestimation factor transforms the $V^\pi$ of vanilla Q-learning into the $V^\pi$ used in the soft-Q-learning formulation. Thus, KL-regularized Q-learning is inherently conservative, and this conservatism is built into our method.

Furthermore, the CQL conservatism can be derived as adding a $\chi^2$ regularization to an MDP. Although not shown by the original work (Kumar et al., 2020) or, to our awareness, any follow-up, the last term of Eq. 14 in CQL's Appendix B (Kumar et al., 2020) is simply $\chi^2(\pi \,\|\, \pi_D)$, and what the original work refers to as $D_{CQL}$ is in fact the $\chi^2$ divergence. It is therefore possible to show that all the results for CQL hold for our method by simply replacing $D_{CQL}$ with $D_{KL}$, i.e. the $\chi^2$ divergence with the KL divergence, everywhere. We give a simple proof below that $D_{CQL}$ is the $\chi^2$ divergence:
$$D_{CQL}(\pi, \pi_D)(s) := \sum_a \pi(a \mid s) \Big[\frac{\pi(a \mid s)}{\pi_D(a \mid s)} - 1\Big]$$
$$= \sum_a \big(\pi(a \mid s) - \pi_D(a \mid s) + \pi_D(a \mid s)\big) \Big[\frac{\pi(a \mid s)}{\pi_D(a \mid s)} - 1\Big]$$
$$= \sum_a \big(\pi(a \mid s) - \pi_D(a \mid s)\big) \Big[\frac{\pi(a \mid s)}{\pi_D(a \mid s)} - 1\Big] + \sum_a \pi_D(a \mid s) \Big[\frac{\pi(a \mid s)}{\pi_D(a \mid s)} - 1\Big]$$
$$= \sum_a \pi_D(a \mid s) \Big[\frac{\pi(a \mid s)}{\pi_D(a \mid s)} - 1\Big]^2 + 0, \quad \text{since } \sum_a \pi(a \mid s) = \sum_a \pi_D(a \mid s) = 1$$
$$= \chi^2\big(\pi(\cdot \mid s) \,\|\, \pi_D(\cdot \mid s)\big), \quad \text{by the definition of the chi-square divergence.}$$

Why X-QL is better than CQL for offline RL. In light of the above results, we know that CQL adds a $\chi^2$ regularization between the policy $\pi$ and the behavior policy $\pi_D$, whereas our method does the same using the reverse KL divergence. The reverse KL divergence has mode-seeking behavior, so our method finds a policy that better fits the mode of the behavior policy and is more robust to random actions in the offline dataset. CQL does not have this property and can be more easily affected by noisy actions in the dataset.

Connection to the dual KL representation. For given distributions $\mu$ and $\pi$, we can write their KL divergence using the dual representation proposed by IQ-Learn (Garg et al., 2021):
$$D_{KL}(\pi \,\|\, \mu) = \max_{x}\; \mathbb{E}_\mu[-e^{x}] + \mathbb{E}_\pi[x] + 1,$$
which is maximized for $x = \log(\pi/\mu)$. We can make a clever substitution to exploit this relationship. Let $x = (T^\pi \hat{Q}^k(s, a) - Q(s, a))/\beta$ for a variable $Q \in \mathbb{R}$ and a fixed constant $T^\pi \hat{Q}^k$; then on substituting we get
$$-\mathbb{E}_{s \sim \rho_\mu}\big[D_{KL}(\pi(\cdot|s) \,\|\, \mu(\cdot|s))\big] = \min_Q \mathcal{L}(Q), \quad \text{with}$$
$$\mathcal{L}(Q) = \mathbb{E}_{s \sim \rho_\mu,\, a \sim \mu(\cdot|s)}\Big[e^{(T^\pi \hat{Q}^k(s,a) - Q(s,a))/\beta}\Big] - \mathbb{E}_{s \sim \rho_\mu,\, a \sim \pi(\cdot|s)}\Big[(T^\pi \hat{Q}^k(s, a) - Q(s, a))/\beta\Big] - 1.$$
This gives us Equation 8 in Section 3.3 of the main paper, and it is minimized for $Q^* = T^\pi \hat{Q}^k - \beta \log(\pi/\mu)$, as desired. Thus, this lets us transform the regular Bellman update into the soft-Bellman update.

D EXPERIMENTS

In this section we provide additional results and further details on all experimental procedures.

D.1 A TOY EXAMPLE

Figure 4 (panels: XQL loss with β = 0.1, XQL loss with β = 0.5, MSE loss): Here we show the effect of using different ways of fitting the value function on a toy grid world, where the agent's goal is to navigate from the beginning of the maze on the bottom left to the end of the maze on the top left. The color of each square shows the learned value. As the environment is discrete, we can investigate how well Gumbel Regression fits the maximum of the Q-values.
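For reference, a minimal sketch of this kind of comparison is given below. It is not the paper's grid world: we assume a single state with a hypothetical table of Q-values and a uniform sampling distribution, and fit a scalar value estimate by gradient descent under either an MSE loss or the Gumbel regression loss of Equation 37 (the helper name fit_value and all constants are our own).

import torch

def fit_value(q_samples, beta=None, steps=3000, lr=0.05):
    # Fit a scalar state value v to sampled Q-values: squared error if beta is None,
    # otherwise the Gumbel regression loss exp(z) - z - 1 with z = (q - v) / beta.
    v = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        if beta is None:
            loss = ((q_samples - v) ** 2).mean()
        else:
            z = (q_samples - v) / beta
            loss = (torch.exp(z) - z - 1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return v.item()

torch.manual_seed(0)
# Hypothetical Q-values for one state: a single good action among mostly poor ones,
# with actions drawn from a uniform behavior distribution mu.
q_table = torch.tensor([1.0, 0.1, 0.0, -0.2, 0.05])
q_samples = q_table[torch.randint(0, len(q_table), (5000,))]
print("max_a Q:   ", q_table.max().item())
print("MSE fit:   ", round(fit_value(q_samples), 3))            # ~ mean of Q under mu
print("Gumbel fit:", round(fit_value(q_samples, beta=0.1), 3))  # ~ soft-maximum L^beta of Q

Under the MSE loss the fitted value collapses to the behavior-policy average of the Q-values, while the Gumbel fit tracks the soft-maximum $\mathcal{L}^\beta_\mu Q$, which is the qualitative difference Figure 4 visualizes on the full maze.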
As seen in Figure 4, when the MSE loss is used instead of Gumbel regression, the resulting policy is poor near the beginning of the maze and the learned values fail to propagate. As we increase the value of β, the learned values begin to better approximate the optimal max-Q policy shown on the very right.

D.2 BELLMAN ERROR PLOTS

Figure 5 (panels: Cheetah Run, Walker, Hopper; x-axis: Bellman Error; y-axis: Normalized Frequency; Gaussian and Gumbel fits): Additional plots of the error distributions of SAC for different environments. We find that the Gumbel distribution fits the errors well in the first two environments, Cheetah and Walker, but provides a worse fit in the Hopper environment. Nonetheless, we see performance improvements in Hopper using our approach.

Figure 6 (panels: Cheetah Run and two additional environments; x-axis: Bellman Error; Gaussian and Gumbel fits): Plots of the error distributions of TD3 for different environments.

Additional plots of the error distributions for SAC and TD3 can be found in Figure 5 and Figure 6, respectively. Figure 1 and the aforementioned plots were generated by running the RL algorithms for 100,000 timesteps and logging the Bellman errors every 5,000 steps. In particular, the Bellman errors were computed as
$$r(s, a) + \gamma Q_{\theta_1}(s', \pi_\psi(s')) - Q_{\theta_1}(s, a).$$
In the above equation, $Q_{\theta_1}$ denotes the first of the two Q-networks used in the double-Q trick. We do not use target networks to compute the Bellman error, and instead compute the fully online quantity. $\pi_\psi(s')$ denotes the mean or deterministic output of the current policy distribution. We used an implementation of SAC based on Yarats & Kostrikov (2020) and an implementation of TD3 based on Fujimoto et al. (2018). For SAC, the entropy term was not added when computing the error, as we seek to characterize the standard Bellman error rather than the soft-Bellman error. Before generating the plots, the errors were clipped to the ranges shown. This tended to prevent over-fitting to large outliers. The Gumbel and Gaussian curves were fit using MLE via SciPy.

D.3 NUMERIC STABILITY

In practice, a naive implementation of the Gumbel loss function $\mathcal{J}$ from Equation 11 suffers from stability issues due to the exponential term. We found that stabilizing the loss objective was essential for training. Practically, we follow the common max-normalization trick used in softmax computation. This amounts to factoring $e^{\max(z)}$ out of the loss and consequently rescaling the gradients, which adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in PyTorch is included below:

import torch

def gumbel_loss(pred, label, beta, clip):
    # Gumbel regression loss exp(z) - z - 1 with z = (label - pred) / beta,
    # computed with max-normalization for numerical stability.
    z = (label - pred) / beta
    z = torch.clamp(z, -clip, clip)  # clip large inputs to limit the effect of outliers
    max_z = torch.max(z)
    max_z = torch.where(max_z < -1.0, torch.tensor(-1.0), max_z)  # keep the scale factor bounded
    max_z = max_z.detach()  # do not back-propagate through the normalization constant
    # The loss is rescaled by exp(-max_z); the minimizer is unchanged, only the gradient scale.
    loss = torch.exp(z - max_z) - z * torch.exp(-max_z) - torch.exp(-max_z)
    return loss.mean()

In some experiments we additionally clip the value of the gradients for stability.

D.4 OFFLINE EXPERIMENTS

In this subsection, we provide additional results in the offline setting along with hyper-parameter and implementation details. Table 3 shows results for the Adroit benchmark in D4RL.
Again, we see strong results for X-QL: X-QL-C, with the same hyper-parameters as used in the Franka Kitchen environments, surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include the baseline methods. We see that X-QL exhibits extremely fast convergence, particularly when tuned. One issue, however, is numerical stability: the untuned version of X-QL exhibits divergence on the AntMaze environment.

We base our implementation of X-QL on the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture, apply the double-Q trick, and apply the same data preprocessing, which is described in their appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison. We keep our general algorithm hyper-parameters and evaluation procedure the same, but tune β and the gradient-clipping value for each environment. Tuning β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for the offline experiments, save for a few environments where larger values were clearly better. Increasing the batch size also tended to help with stability, since our rescaled loss performs a per-batch normalization. AWAC parameters were left identical to those in IQL.

Table 3: Evaluation on Adroit tasks from D4RL. X-QL-C gives results with the same hyper-parameters as used in the Franka Kitchen (matching IQL), and X-QL-T gives results with per-environment β and hyper-parameter tuning.

Dataset              BC    BRAC-p  BEAR  Onestep RL  CQL   IQL   X-QL-C  X-QL-T
pen-human-v0         63.9  8.1     -1.0  -           37.5  71.5  85.5    85.5
hammer-human-v0      1.2   0.3     0.3   -           4.4   1.4   2.2     8.2
door-human-v0        2     -0.3    -0.3  -           9.9   4.3   11.5    11.5
relocate-human-v0    0.1   -0.3    -0.3  -           0.2   0.1   0.17    0.24
pen-cloned-v0        37    1.6     26.5  60.0        39.2  37.3  38.6    53.9
hammer-cloned-v0     0.6   0.3     0.3   2.1         2.1   2.1   4.3     4.3
door-cloned-v0       0.0   -0.1    -0.1  0.4         0.4   1.6   5.9     5.9
relocate-cloned-v0   -0.3  -0.3    -0.3  -0.1        -0.1  -0.2  -0.2    -0.2

Figure 7 (panels: hopper-medium-expert, antmaze-umaze, kitchen-partial; curves: IQL, XQL Tuned, XQL Consistent; x-axis: Steps (1e6)): Offline RL results. We show returns versus the number of training iterations for the D4RL benchmark, averaged over 6 seeds. For a fair comparison, we use a batch size of 1024 for each method. XQL Tuned tunes the temperature for each environment, whereas XQL Consistent uses a default temperature.

For MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We do not see stability issues in the MuJoCo locomotion environments, but found that offline runs for the AntMaze environments could occasionally exhibit divergence in training for small β < 1. To help mitigate this, we found that adding Layer Normalization (Ba et al., 2016) to the value networks works well. The full hyper-parameters used for the experiments are given in Table 4.

D.5 OFFLINE ABLATIONS

In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter β and the batch size. The temperature β controls the strength of KL penalization between the learned policy and the dataset behavior policy: a small β is beneficial for datasets with many random, noisy actions, whereas a high β favors more expert-like datasets.
Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and, in some environments, led to higher final performance. To show that our tuned X-QL method is not simply better than IQL due to bigger batch sizes, we show a comparison with a fixed batch size of 1024 in Fig. 7.

D.6 ONLINE EXPERIMENTS

We base our implementation of SAC on pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a value function as described in Haarnoja et al. (2017). Empirically we see similar performance with and without the value function, but leave it in for a fair comparison against our X-SAC variant. We base our implementation of TD3 on the original authors' code from Fujimoto et al. (2018). As in the offline experiments, hyper-parameters were left at their defaults except for β, which we tuned for each environment. For the online experiments we swept over [1, 2, 5] for X-SAC and X-TD3. We found that these values did not work as well for X-TD3 - DQ, and swept over the values [3, 4, 10, 20] instead. In the online experiments we used an exponential clip value of 8. For SAC we ran three seeds in each environment, as it tended to be more stable; for TD3 we ran four.

Occasionally, our X- variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that became unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021). We found that our TD3 implementation performed marginally better overall.

Figure 8 (panels: walker2d-medium-replay with β ∈ {4, 5, 8}, antmaze-large-diverse with β ∈ {0.4, 0.6, 0.8, 1.0}, kitchen-mixed with β ∈ {1, 2, 5, 8}; x-axis: Steps (1e6)): β ablation. With too large a temperature β, performance drops. When β is too small, the loss becomes sensitive to noisy outliers and training can diverge. Some environments are more sensitive to β than others.

Figure 9 (panels: halfcheetah-medium, halfcheetah-medium-expert, antmaze-umaze-diverse; curves: Batch 256, Batch 1024; x-axis: Steps (1e6)): Batch size ablation. Larger batch sizes can make Gumbel regression more stable.

Table 4: Offline RL hyper-parameters used for X-QL. The first value given is for the non-per-environment-tuned version of X-QL, and the value in parentheses is for the tuned offline results, X-QL-T. v_updates gives the number of value updates per Q update; increasing it reduces the variance of value updates using the Gumbel loss on some hard environments.
Env                            Beta       Grad Clip  Batch Size   v_updates
halfcheetah-medium-v2          2 (1)      7 (7)      256 (256)    1 (1)
hopper-medium-v2               2 (5)      7 (7)      256 (256)    1 (1)
walker2d-medium-v2             2 (10)     7 (7)      256 (256)    1 (1)
halfcheetah-medium-replay-v2   2 (1)      7 (5)      256 (256)    1 (1)
hopper-medium-replay-v2        2 (2)      7 (7)      256 (256)    1 (1)
walker2d-medium-replay-v2      2 (5)      7 (7)      256 (256)    1 (1)
halfcheetah-medium-expert-v2   2 (1)      7 (5)      256 (1024)   1 (1)
hopper-medium-expert-v2        2 (2)      7 (7)      256 (1024)   1 (1)
walker2d-medium-expert-v2      2 (2)      7 (5)      256 (1024)   1 (1)
antmaze-umaze-v0               0.6 (1)    7 (7)      256 (256)    1 (1)
antmaze-umaze-diverse-v0       0.6 (5)    7 (7)      256 (256)    1 (1)
antmaze-medium-play-v0         0.6 (0.8)  7 (7)      256 (1024)   1 (2)
antmaze-medium-diverse-v0      0.6 (0.6)  7 (7)      256 (256)    1 (4)
antmaze-large-play-v0          0.6 (0.6)  7 (5)      256 (1024)   1 (1)
antmaze-large-diverse-v0       0.6 (0.6)  7 (5)      256 (1024)   1 (1)
kitchen-complete-v0            5 (2)      7 (7)      256 (1024)   1 (1)
kitchen-partial-v0             5 (5)      7 (7)      256 (1024)   1 (1)
kitchen-mixed-v0               5 (8)      7 (7)      256 (1024)   1 (1)
pen-human-v0                   5 (5)      7 (7)      256 (256)    1 (1)
hammer-human-v0                5 (0.5)    7 (3)      256 (1024)   1 (4)
door-human-v0                  5 (1)      7 (5)      256 (256)    1 (1)
relocate-human-v0              5 (0.8)    7 (5)      256 (1024)   1 (2)
pen-cloned-v0                  5 (0.8)    7 (5)      256 (1024)   1 (2)
hammer-cloned-v0               5 (5)      7 (7)      256 (256)    1 (1)
door-cloned-v0                 5 (5)      7 (7)      256 (256)    1 (1)
relocate-cloned-v0             5 (5)      7 (7)      256 (256)    1 (1)

Table 5: Hyper-parameters for online RL algorithms.

Parameter               SAC          TD3
Batch Size              1024         256
Learning Rate           0.0001       0.001
Critic Freq             1            1
Actor Freq              1            2
Actor and Critic Arch   1024, 1024   256, 256
Buffer Size             1,000,000    1,000,000
Actor Noise             Auto-tuned   0.1, 0.05 (Hopper)
Target Noise            -            0.2

Table 6: Values of the temperature β used for online experiments.

Env             X-SAC   X-TD3   X-TD3 - DQ
Cheetah Run     2       5       4
Walker Run      1       2       4
Hopper Hop      2       2       3
Quadruped Run   5       5       20

Figure 10 (panels: halfcheetah-medium, halfcheetah-medium-replay, halfcheetah-medium-expert, hopper-medium, hopper-medium-replay, hopper-medium-expert, walker2d-medium, walker2d-medium-replay, walker2d-medium-expert; curves: XQL Tuned, XQL Consistent; x-axis: Steps (1e6)): Offline MuJoCo results. We show returns versus the number of training iterations for the MuJoCo benchmarks in D4RL, averaged over 6 seeds. X-QL Tuned gives results after hyper-parameter tuning to reduce run variance for each environment, and X-QL Consistent uses the same hyper-parameters for every environment.

Figure 11 (panels: antmaze-umaze, antmaze-umaze-diverse, antmaze-medium-play, antmaze-medium-diverse, antmaze-large-play, antmaze-large-diverse; curves: XQL Tuned, XQL Consistent; x-axis: Steps (1e6)): Offline AntMaze results. We show returns versus the number of training iterations for the AntMaze benchmarks in D4RL, averaged over 6 seeds. X-QL Tuned gives results after hyper-parameter tuning to reduce run variance for each environment, and X-QL Consistent uses the same hyper-parameters for every environment.
Figure 12 (panels: kitchen-complete, kitchen-partial, kitchen-mixed; curves: XQL Tuned, XQL Consistent; x-axis: Steps (1e6)): Offline Franka results. We show returns versus the number of training iterations for the Franka Kitchen benchmarks in D4RL, averaged over 6 seeds. X-QL Tuned gives results after hyper-parameter tuning to reduce run variance for each environment, and X-QL Consistent uses the same hyper-parameters for every environment.

Figure 13 (panels: the Adroit pen, hammer, door, and relocate tasks in their human and cloned variants; curves: XQL Tuned, XQL Consistent; x-axis: Steps (1e6)): Offline Adroit results. We show returns versus the number of training iterations for the Adroit benchmark in D4RL, averaged over 6 seeds. X-QL Tuned gives results after hyper-parameter tuning to reduce run variance for each environment, and X-QL Consistent uses the same hyper-parameters for every environment. On some environments the consistent hyper-parameters did best.