# State-free Reinforcement Learning

Mingyu Chen (Boston University, mingyuc@bu.edu) · Aldo Pacchiano (Boston University; Broad Institute of MIT and Harvard, pacchian@bu.edu) · Xuezhou Zhang (Boston University, xuezhouz@bu.edu)

Abstract: In this work, we study the state-free RL problem, where the algorithm does not have any state information before interacting with the environment. Specifically, denoting the reachable state set by $S^\Pi := \{s \mid \max_{\pi\in\Pi} q^{P,\pi}(s) > 0\}$, we design an algorithm which requires no information on the state space $S$ while achieving a regret that is completely independent of $S$ and depends only on $S^\Pi$. We view this as a concrete first step towards parameter-free RL, with the goal of designing RL algorithms that require no hyper-parameter tuning.

1 Introduction

Reinforcement learning (RL) studies the problem where an agent interacts with an unknown environment to optimize cumulative rewards/losses [Sutton and Barto, 2018]. While the nature of the environment is in principle hidden from the agent, many existing algorithms [Azar et al., 2017, Jin et al., 2018, Zanette and Brunskill, 2019, Zhang et al., 2020, 2021] implicitly require prior knowledge of parameters of the environment, such as the size of the state space, the action space, the time horizon, and so on. Such parameters play a crucial role in these algorithms, as they are used in the construction of variable initializations, exploration bonuses, confidence sets, etc. However, in most real-world problems, these parameters are not known a priori, forcing the system designer to perform hyper-parameter tuning in a black-box fashion, which is known to be extremely costly in RL compared to its supervised learning counterpart [Pacchiano et al., 2020]. In supervised learning, selecting among $M$ hyper-parameters only degrades the sample complexity by a factor of $O(\log M)$. In contrast, in RL it incurs an $O(\sqrt{M})$ multiplier on the regret, making hyper-parameter tuning prohibitively expensive. This is one of the major roadblocks to broader applicability of RL in real-world scenarios.

Motivated by the above observation, we propose and advocate for the study of parameter-free reinforcement learning, i.e. the design of RL algorithms that have no or as few hyper-parameters as possible, with the eventual goal of eliminating the need for heavy hyper-parameter tuning in practice. As a concrete first step, in this paper, we focus on the problem of state-free RL in tabular MDPs. In particular, we show that there exist state-free RL algorithms which do not require the state space $S$ as an input parameter, and whose regret does not scale with the size $|S|$ of the innate state space. Specifically, we design a black-box reduction framework called State-Free Reinforcement Learning (SF-RL). Given any existing RL algorithm for stochastic or adversarial MDPs, this framework transforms it into a state-free RL algorithm through a black-box reduction. We also show that the same framework can be adapted to induce action-free and horizon-free algorithms; together, the three make a tabular MDP algorithm completely parameter-free, i.e. it requires no input parameters whatsoever, and its regret bound automatically adapts to the intrinsic complexity of the problem. The rest of the paper is organized as follows.
Following the discussion of related work and the problem formulation, we start by discussing the technical challenges of state-free learning and why existing algorithmic and analysis frameworks are unable to achieve it (Section 4). Building on these insights, we propose an intuitive black-box reduction framework, SF-RL, that transforms any RL algorithm into a state-free RL algorithm, albeit incurring a multiplicative cost to the regret (Section 5). Further improvements are then made to eliminate this additional cost through a novel confidence interval design, which may be of independent interest (Section 6).

2 Related Works

Parameter-free algorithms: Admittedly, parameter-free learning is not a new concept and has been studied extensively in the optimization and online learning communities. Parameter-free algorithms refer to algorithms that do not require the learner to specify certain hyperparameters in advance. These algorithms are appealing in both theory and practice, considering that tuning algorithmic parameters is a challenging task [Bottou, 2012, Schaul et al., 2013]. The type of hyperparameter to be set free varies depending on the specific problem. For example, for online learning and bandit problems, the hyperparameters are the scale bound of the losses [De Rooij et al., 2014, Orabona and Pál, 2018, Duchi et al., 2011, Chen and Zhang, 2023] or the range of the decision set [Orabona and Pál, 2016, Cutkosky and Orabona, 2018, Zhang et al., 2022, van der Hoeven et al., 2020]; for neural network optimization, the hyperparameter can be the learning rate of the optimizer [Defazio and Mishchenko, 2023, Carmon and Hinder, 2022, Ivgi et al., 2023, Cutkosky et al., 2024, Khaled and Jin, 2024]; for model selection, the hyperparameter is the choice of the hypothesis class [Foster et al., 2017, 2019].

Surprisingly, the reinforcement learning (RL) community has overlooked the concept of parameter-free learning almost entirely. To the best of our knowledge, the only related work is from Chen and Zhang [2024], where the authors propose an algorithm that adapts to the scale of the losses in the setting of adversarial MDPs. In this work we focus on developing parameter-free RL algorithms where the parameters of interest are those related to the environment transition, particularly the state space. Almost all RL algorithms assume knowledge of the state space. For example, existing UCB-based reinforcement learning algorithms [Azar et al., 2017, Jin et al., 2018, Zanette and Brunskill, 2019, Zhang et al., 2020, 2021] make use of the state space size to construct the UCB bonus. When the state space is unknown, it is unclear whether these algorithms can still build a valid UCB bonus that ensures optimism and achieves bounded regret.

Instance-dependent algorithms: Instance-dependent learning is a closely related concept to parameter-free learning. Instance-dependent algorithms dynamically adjust to the input data they observe, and achieve a regret that not only scales with the number of iterations $T$ but also adapts to certain measures of hardness of the environment. Such algorithms perform better than the worst-case regret when the environment is benign.
In reinforcement learning, the most common measures of hardness considered in the community are the variance [Zanette and Brunskill, 2019, Zhou et al., 2023, Zhang et al., 2023, Zhao et al., 2023] and the gap [Simchowitz and Jamieson, 2019, Xu et al., 2021, Dann et al., 2021, Jonsson et al., 2020, Wagenmaker et al., 2021, Tirinzoni et al., 2021], both related to the reward of the environment. Specifically, variance-dependent algorithms provide regret bounds that scale with the underlying conditional variance of the Q function. Gap-dependent algorithms provide regret bounds of order $O(\log T/\mathrm{gap}(s,a))$, where the gap is defined as the difference between the optimal value function and the Q-function at a sub-optimal action, $V^*(s) - Q^*(s,a)$ [Dann et al., 2021]. Additionally, some studies consider problems similar to ours, namely how to adapt to the hardness of the state space. Given an initial state space, Fruit et al. [2018] propose an algorithm that adapts to the size of the reachable state space, resulting in improved performance when the initial state space is vacuous.

The difference between instance-dependent algorithms and parameter-free algorithms is subtle. Both families of algorithms have the capability to adapt to the input data, allowing them to sequentially tune the hyperparameters and ultimately converge to the optimal hyperparameters inherent in the data. Consequently, when the number of iterations becomes sufficiently large, both instance-dependent algorithms and parameter-free algorithms tend to provide the same theoretical guarantees. However, this does not mean that the two types of algorithms are the same. The most significant difference is that instance-dependent algorithms require appropriate hyper-parameter initialization. Take state-space adaptability as an example. Let $N$ represent the true number of states. An instance-dependent algorithm must be provided with an initial value $M \ge N$. If this value is invalid, i.e., $M < N$, the algorithm will fail to function properly. Moreover, the regret of instance-dependent algorithms is typically related to the initial input, even though this dependency may fade away as the number of iterations increases. This is also why we cannot simply set $M$ to infinity in an instance-dependent algorithm and call it parameter-free: the regret of an instance-dependent algorithm always includes some burn-in term that scales with $M$, and as $M$ goes to infinity, the burn-in term eventually dominates. In this sense, parameter-free learning is a strictly harder problem than instance-dependent learning.

3 Problem Formulation

Markov Decision Process: This paper focuses on the episodic MDP setting with finite horizon, unknown transition, and bandit feedback. An MDP is defined by a tuple $M = (S, A, H, P)$, where $S = \{1, \ldots, S\}$ denotes the state space, $A = \{1, \ldots, A\}$ denotes the action space, and $H$ denotes the planning horizon. $P : S \times A \times S \to [0,1]$ is an unknown transition function, where $P(s'|s,a)$ is the probability of reaching state $s'$ after taking action $a$ in state $s$. For every $t \in [T]$, we define $\ell_t : S \times A \to [0,1]$ as the loss function. In stochastic MDPs, the loss function $\ell_t$ is drawn from a time-independent distribution. In adversarial MDPs, the loss function $\ell_t$ is determined by an adversary and can depend on the player's actions before $t$. The learning proceeds in $T$ episodes. In each episode $t$, the learner starts from state $s_1$ and decides a stochastic policy $\pi_t \in \Pi : S \times A \to [0,1]$, with $\pi_t(a|s)$ being the probability of taking action $a$ in state $s$.
Afterwards, the learner executes the policy in the MDP for $H$ steps and observes a state-action-loss trajectory $(s_1, a_1, \ell_t(s_1,a_1), \ldots, s_H, a_H, \ell_t(s_H,a_H))$ before reaching the end state $s_{H+1}$. With a slight abuse of notation, we write $\ell_t(\pi) = \mathbb{E}[\sum_{h\in[H]}\ell_t(s_h,a_h) \mid P, \pi]$. The performance is measured by the regret, defined as
$$R(T) = \sum_{t=1}^{T}\ell_t(\pi_t) - \min_{\pi\in\Pi}\sum_{t=1}^{T}\ell_t(\pi).$$
Without loss of generality, we consider a layered MDP: the state space is partitioned into $H+2$ layers $S_0, \ldots, S_{H+1}$ such that $S = \cup_{h=0}^{H+1}S_h$, $S_i \cap S_j = \emptyset$ for every $i \neq j$, $S_0 = \{s_0\}$ and $S_{H+1} = \{s_{H+1}\}$.

Occupancy measure: Given the transition function $P$ and a policy $\pi$, the occupancy measure $q^{P,\pi} : S \times A \to [0,1]$ induced by $P$ and $\pi$ is defined as
$$q^{P,\pi}(s,a) = \sum_{h=1}^{H}\Pr\left(s_h = s, a_h = a \mid P, \pi\right).$$
Using occupancy measures, the MDP problem can be interpreted in a way that makes it similar to a multi-armed bandit (MAB), because for any policy $\pi$ the loss can be expressed as
$$\ell_t(\pi) = \sum_{s\in S}\sum_{a\in A} q^{P,\pi}(s,a)\,\ell_t(s,a) = \langle q^{P,\pi}, \ell_t\rangle.$$
Using this formula, the regret can be written as $R(T) = \sum_{t=1}^{T}\langle q^{P,\pi_t} - q^{P,\pi^*}, \ell_t\rangle$.

State-free RL: We say a state $s \in S$ is reachable by a policy set $\Pi$ if there exists a policy $\pi \in \Pi$ such that $q^{P,\pi}(s) > 0$. We further define $S^\Pi = \{s \in S \mid \max_{\pi\in\Pi} q^{P,\pi}(s) > 0\}$ to represent all the states in $S$ reachable by $\Pi$. The formal definition of a state-free algorithm is given below.

Definition 3.1 (State-free algorithm): We say an RL algorithm is state-free if, given any policy set $\Pi$, the regret bound of the algorithm is adaptive to $|S^\Pi|$ and independent of $|S|$, without any knowledge of the state space a priori.

At first glance, designing state-free algorithms appears straightforward: if the learner had access to the transition $P$, it could compute $\max_{\pi\in\Pi} q^{P,\pi}(s)$ for every state $s \in S$ and then remove all the unreachable states, thereby reducing the state space $S$ to $S^\Pi$. Through this reduction, any existing MDP algorithm could be made state-free. However, such a method is infeasible since $P$ is unknown in practice. Without knowledge of $P$, it becomes challenging or even impossible to determine whether a state is reachable or not. In the following section, we elaborate on the technical challenges of the problem for both stochastic and adversarial loss settings.
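To make the occupancy-measure view concrete, the following Python sketch (ours, not code from the paper) computes $q^{P,\pi}$ by a forward recursion over the layers of a tabular layered MDP and evaluates $\ell_t(\pi) = \langle q^{P,\pi}, \ell_t\rangle$. The dictionary-based MDP representation and the function names are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of the occupancy-measure view in Section 3: q^{P,pi}(s,a) is
# obtained by a forward pass over the layers, and the expected loss of a policy
# is the inner product <q^{P,pi}, ell_t>. All data structures are illustrative.

def occupancy_measure(P, pi, layers):
    """P[(s, a)][s_next]: transition probability; pi[s][a]: action probability;
    layers: list of state lists, layers[0] = [start state], layers[-1] = [end state]."""
    d = {layers[0][0]: 1.0}   # state distribution at the current layer
    q = {}                    # occupancy measure q^{P,pi}(s, a)
    for h in range(len(layers) - 1):          # all non-terminal layers
        d_next = {s: 0.0 for s in layers[h + 1]}
        for s, ps in d.items():
            for a, pa in pi[s].items():
                q[(s, a)] = q.get((s, a), 0.0) + ps * pa
                for s_next, p in P[(s, a)].items():
                    d_next[s_next] += ps * pa * p
        d = d_next
    return q

def expected_loss(q, loss):
    """<q^{P,pi}, ell_t>: the expected cumulative loss of the policy."""
    return sum(prob * loss.get(sa, 0.0) for sa, prob in q.items())
```

Losses on the root layer can simply be set to zero if one wants to match the paper's convention of summing over layers $1,\ldots,H$.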
4 Technical challenges

In this section, we explain the technical challenges of state-free learning. Specifically, we consider a weakened setup: we assume for the moment that the algorithm has access to the state space $S$ but not to the reachable space $S^\Pi$. This requirement is clearly weaker than the state-free definition, since in the state-free setting the information about $S$ is also unknown.

We start with the stochastic setting, where the loss function $\ell_t$ is sampled from a time-independent distribution for all $t \in [T]$. As the most prominent setting in RL research, numerous works have been devoted to improving the regret guarantees and the analysis framework [Brafman and Tennenholtz, 2003, Kakade, 2003, Jaksch et al., 2010, Azar et al., 2017, Jin et al., 2018, Dann et al., 2017, Zanette and Brunskill, 2019, Bai et al., 2019, Zhang et al., 2020, 2021, Ménard et al., 2021, Li et al., 2021, Domingues et al., 2021]. Surprisingly, although existing works do not mention the state-free concept explicitly, we find that some algorithms can almost achieve state-free learning without algorithmic modifications. In particular, we have the following proposition.

Proposition 4.1. For stochastic MDPs, UCBVI [Azar et al., 2017] is a weakly state-free algorithm, that is, with only the knowledge of $S$, the regret guarantee of UCBVI is adaptive to $|S^\Pi|$ and independent of $|S|$, except in the logarithmic terms.

Proposition 4.1 offers some positive insights into existing algorithms. The source of the log-dependence on $|S|$ is straightforward: the analysis of RL algorithms needs to ensure that concentration inequalities hold for all states with probability at least $1-\delta$. Since the concentration events of different states are separate, the algorithm has to take a union bound across the state space to make concentration hold simultaneously in all states, which implies that the confidence level $\delta$ needs to be divided by $|S|$. This leads to a regret guarantee that scales with $\log(|S|)$.

Remark 4.2. In Appendix A, we propose a simple technique to get rid of the log-dependence on $|S|$ under the UCBVI framework. The key is to allocate the confidence to each visited $(s,a)$ pair sequentially, instead of applying a uniform confidence allocation across all states in $S$. We further show that such a method removes the need for any information about $S$ in the algorithm design. Based on this method, we conclude that (a modified version of) UCBVI is a state-free algorithm.

We now turn our attention to the adversarial setting. In adversarial MDPs, the loss is determined by the adversary and can depend on previous actions. Adversarial MDPs have been studied extensively in recent years [Jin et al., 2019, Dai et al., 2022, Lee et al., 2020, Luo et al., 2021]. Given the positive results for stochastic MDPs, one might hope that existing adversarial MDP algorithms naturally achieve a state-free regret guarantee. Unfortunately, this is not the case. In particular, we have the following observation.

Observation 4.3. (Informal) In adversarial MDPs, using the existing algorithms and analysis framework, the regret guarantee cannot escape a polynomial dependence on $|S|$.

Here we briefly explain Observation 4.3. In all prior works on adversarial MDPs, the analysis relies on bounding the gap between the approximate transition function $\hat{P}$ and the true one $P$, i.e., $\sum_{(s,a)\in S^\Pi\times A}\|\hat{P}(\cdot|s,a) - P(\cdot|s,a)\|_1$. In this case, for any state $s' \in S$, regardless of whether $s'$ is reachable or not, the estimation error $|\hat{P}(s'|s,a) - P(s'|s,a)|$ may remain non-zero for all $(s,a)$ pairs. Consequently, the larger $|S|$ is, the greater the inaccuracy of the transition estimate. At this point, one may wonder whether the learner can directly set $\hat{P}(s'|s,a) = 0$ for all unvisited $s' \in S$, so that $|\hat{P}(s'|s,a) - P(s'|s,a)|$ is always zero when $s'$ is unreachable. However, since the learner does not know $P$, it is impossible to determine whether a state is unreachable, even if the learner has never visited that state before. In this regard, if $s'$ is actually reachable, the transition estimator becomes invalid. This dilemma constitutes the main challenge of the problem. The above offers some high-level intuition into the difficulty of designing state-free algorithms. In the next section, we introduce our new algorithms for state-free RL that operate without prior knowledge of $S$.
Figure 1: An illustration of the mapping between the state space $S$ and the pruned space $S'$. The left side represents the original state space $S$, where grey nodes denote the states in $S'$ and red nodes denote the others. The right side is the corresponding pruned space $S'$, where blue nodes denote the auxiliary states $\{s_h^\dagger\}_{h\in[H]}$. Given this structure, for any trajectory in the space $S$ (purple arrows), we can find a dual trajectory (yellow and green arrows) in the pruned space.

5 Black-box reduction for State-free RL

In this section, we outline the main contribution of the paper. To generalize our results further, we denote the $\epsilon$-reachable state space by $S^{\Pi,\epsilon} = \{s \in S \mid \max_{\pi\in\Pi} q^{P,\pi}(s) > \epsilon\}$. By definition $S^\Pi = S^{\Pi,0}$. Our algorithm SF-RL is illustrated in Algorithm 1. The algorithm maintains a pruned state space, denoted by $S'$, which includes all the identified $\epsilon$-reachable states and $H$ additional auxiliary states. Throughout $t = 1, \ldots, T$, SF-RL first obtains the policy $\pi'_t : S' \to \Delta(A)$ from a black-box adversarial MDP algorithm, namely ALG, which operates on $S'$. Then, by playing on states not in $S'$ an arbitrary action compatible with $\Pi$, it extends the pruned policy $\pi'_t$ to a policy $\pi_t \in \Pi$ defined over the whole domain $S$, and receives the trajectory $o_t$ after playing $\pi_t$. Given the trajectory, if there exists a new state $s \in S$ that can be confirmed to be $\epsilon$-reachable, the algorithm updates $S'$ and restarts the ALG subroutine. Otherwise, SF-RL pretends that the trajectory was produced only by interacting with $S'$, and sends the pruned trajectory $o'_t$ back to ALG.

The key novelty of the algorithm lies in the design of the pruned space $S'$ and the pruned trajectory $o'_t$. For every $h \in [H]$, the auxiliary state $s_h^\dagger$ represents the collection of states $S_h \setminus S'_h$. These behave as absorbing states, coalescing all the transitions to states not in $S'$. Given the pruned space $S'$ and $o_t$, we build the pruned trajectory $o'_t = \{s'_h, a'_h, \ell'_t(s'_h, a'_h)\}_{h\in[H]}$. Specifically, $o'_t$ is split into two parts based on the first horizon at which a state outside $S'$ is encountered, i.e., based on $h^\star$, the largest $h$ such that $s_{1:h} \in S'$. For the state-action-loss triples up to the split horizon, $o'_t$ coincides with $o_t$; afterwards, the states are replaced by the corresponding auxiliary states, the actions by an arbitrary dummy action $a^\dagger$, and the losses by zero. An illustration of the design is provided in Figure 1, and a minimal code sketch of this pruning map follows.
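The sketch below is our illustration (not the paper's implementation) of the pruning map: everything after the first exit from $S'$ is redirected to auxiliary absorbing states with a dummy action and zero loss. The symbols `("aux", h)` and `"DUMMY_ACTION"` are illustrative placeholders for $s_h^\dagger$ and $a^\dagger$.

```python
def prune_trajectory(trajectory, pruned_states):
    """trajectory: list of (s_h, a_h, loss_h) for h = 1..H;
    pruned_states: the set S' of currently certified eps-reachable states."""
    pruned, inside = [], True
    for h, (s, a, loss) in enumerate(trajectory, start=1):
        inside = inside and (s in pruned_states)
        if inside:
            pruned.append((s, a, loss))                        # keep the real triple
        else:
            pruned.append((("aux", h), "DUMMY_ACTION", 0.0))   # auxiliary state s_h^dagger
    return pruned
```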
In order to analyze the performance of our black-box SF-RL algorithm, we assume the input ALG comes equipped with a regret bound.

Assumption 5.1 (Regret guarantee for the black-box algorithm ALG): With probability $1-\delta$, for all $K > 0$, the regret of ALG after $K$ epochs of interaction with an MDP $M = (S, A, H, P)$ is bounded by
$$R_{\mathrm{ALG}}(K) \le \mathrm{reg}\left(|S|, |A|, H, \log(H|S||A|K/\delta)\right)\sqrt{K},$$
where $\mathrm{reg}(\cdot)$ is a coefficient that depends polynomially on $|S|$, $|A|$, $H$ and $\log(H|S||A|K/\delta)$. Moreover, we assume that the coefficient $\mathrm{reg}$ is non-decreasing as $|S|$, $|A|$, $H$, $K$, $1/\delta$ increase.

This assumption covers most algorithms in both stochastic and adversarial environments. For example, for stochastic MDPs, setting ALG to UCBVI [Azar et al., 2017] gives a coefficient of $O(H\sqrt{|S||A|\log(|S||A|K/\delta)})$; for adversarial MDPs, setting ALG to UOB-REPS [Jin et al., 2019] gives a coefficient of $O(H|S|\sqrt{|A|\log(|S||A|K/\delta)})$. If $\mathrm{reg}(\cdot)$ is the regret coefficient function of the input algorithm ALG, the regret of Algorithm 1 satisfies the following.

Theorem 5.2. With probability $1-\delta$, the state-free algorithm SF-RL achieves the regret bound [1]
$$R(T) \le O\left(\mathrm{reg}\left(|S^{\Pi,\epsilon}| + H,\ |A|,\ H,\ \log(H|S^{\Pi,\epsilon}||A|T/\delta)\right)\sqrt{|S^{\Pi,\epsilon}|\,T} + \epsilon H|S^{\Pi}|T\right).$$

As shown in Theorem 5.2, the regret bound consists of two terms. The first term is $\sqrt{|S^{\Pi,\epsilon}|}$ times the regret of the black-box algorithm ALG over $T$ iterations, while the second term can be viewed as the regret incurred by the barely reachable states we have disregarded. The trade-off between these two terms is reasonable, because it is impossible to discard states that are not $\epsilon$-reachable without incurring any cost. By setting $\epsilon = 0$, Theorem 5.2 immediately provides a regret bound adaptive to the unknown state size $|S^\Pi|$ [2]. Additionally, we remark that SF-RL does not require any prior knowledge about the state space in the algorithm design, which means that SF-RL is state-free by design. Below we discuss the main steps in establishing the above result.

[1] For brevity we assume $|S^\Pi| \le T$. The detailed regret bound is provided in the appendix.
[2] Note that the optimal choice of $\epsilon$ need not be $0$ in general. For example, by setting ALG to UCBVI and $\epsilon = 1/T$, SF-RL achieves regret $O(H|S^{\Pi,1/T}|\sqrt{|A|T} + H|S^\Pi|)$, which is much smaller than the regret under $\epsilon = 0$ when $|S^{\Pi,1/T}| \ll |S^\Pi|$.

Proof Highlight: We first define $P' : S' \times A \times S' \to [0,1]$, the underlying transition function on the pruned space $S'$. Specifically, for every $h \in \{0, \ldots, H\}$, we set
$$P'(s'|s,a) = P(s'|s,a), \quad (s,a,s') \in (S'_h\setminus\{s_h^\dagger\}) \times A \times (S'_{h+1}\setminus\{s_{h+1}^\dagger\}),$$
$$P'(s'|s,a) = 1 - \textstyle\sum_{\tilde{s}\in S'_{h+1}\setminus\{s_{h+1}^\dagger\}} P(\tilde{s}|s,a), \quad (s,a,s') \in (S'_h\setminus\{s_h^\dagger\}) \times A \times \{s_{h+1}^\dagger\},$$
$$P'(s'|s,a) = \mathbb{1}\{s' = s_{h+1}^\dagger\}, \quad (s,a,s') \in \{s_h^\dagger\} \times A \times S'_{h+1}.$$
Similarly, we define $\ell'_t : S' \times A \to [0,1]$ to be the loss function on the pruned space $S'$, which satisfies
$$\ell'_t(s,a) = \begin{cases}\ell_t(s,a), & s \notin \{s_h^\dagger\}_{h\in[H]}\\ 0, & \text{otherwise}\end{cases}, \qquad (s,a) \in S' \times A.$$
Note that the tuple $M' = (S', A, H, P')$ is a well-defined MDP. In what follows we use the subscript $t$ to denote the estimators of the objects above at the beginning of epoch $t$, e.g., $S'_t$, $P'_t$. The key lemma of the proof is the following.

Lemma 5.3. The pruned trajectory $o'_t$ corresponding to $o_t$ can be treated as a trajectory generated by executing policy $\pi'_t$ on the pruned space $S'_t$ with transition function $P'_t$ and loss $\ell'_t$.

Lemma 5.3 reveals how the black-box algorithm ALG can work. By Assumption 5.1, in order to make the regret independent of $|S|$, we let ALG operate on the pruned MDP $M'$ instead of $M$. However, since $M'$ is a virtual MDP, ALG cannot actually interact with it. To ensure that ALG is updated correctly, Lemma 5.3 shows that the pruned trajectory $o'_t$ can be viewed as a trajectory obtained by executing policy $\pi'_t$ on $P'_t$ and $\ell'_t$. Denote the optimal in-hindsight policy by $\pi^* = \arg\min_{\pi\in\Pi}\sum_{t=1}^{T}\ell_t(\pi)$ and let $\pi'^*$ be the corresponding policy on the pruned space. We start from the regret decomposition
$$R(T) = \underbrace{\sum_{t=1}^{T}\langle q^{P'_t,\pi'_t} - q^{P'_t,\pi'^*}, \ell'_t\rangle}_{\text{(I)}} + \underbrace{\sum_{t=1}^{T}\langle q^{P,\pi_t} - q^{P,\pi^*}, \ell_t\rangle - \sum_{t=1}^{T}\langle q^{P'_t,\pi'_t} - q^{P'_t,\pi'^*}, \ell'_t\rangle}_{\text{(II)}}.$$
Here, term (I) represents ALG's regret and term (II) corresponds to the cumulative error incurred by the difference between $S$ and $S'$.

Bounding (I): Let intervals $I_1, \ldots, I_M$ be a partition of $[T]$ such that $P'_t = P'^{(m)}$ for all $t \in I_m$. We can rewrite the regret as
$$\text{(I)} = \sum_{m=1}^{M}\sum_{t\in I_m}\langle q^{P'^{(m)},\pi'_t} - q^{P'^{(m)},\pi'^*}, \ell'_t\rangle.$$
Since $\sum_{m=1}^{\infty}\delta/(2m^2) \le \delta$, using Lemma 5.3 and Assumption 5.1, term (I) can be bounded, with probability at least $1-\delta$, by
$$\text{(I)} \le \sum_{m=1}^{M}\mathrm{reg}\left(|S'^{(m)}|, |A|, H, \log\left(2m^2H|S'^{(m)}||A||I_m|/\delta\right)\right)\sqrt{|I_m|}.$$
We continue the proof by bounding $M$ and $|S'^{(m)}|$. In SF-RL, a state $s \in S$ is added to the pruned space once it satisfies $\sum_{j=1}^{t}\mathbb{1}_j\{s\}/2 - \log(2H^2t^2/\delta)/2 - 1/2 > \epsilon t$. Such a design ensures that all states added to $S'$ are at least $\epsilon$-reachable, which is formalized in the following lemma.

Lemma 5.4. With probability $1-\delta$, every state $s \in S$ is added to $S'$ only if it is $\epsilon$-reachable, i.e., $\max_{\pi\in\Pi}q^{P,\pi}(s) > \epsilon$.
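For concreteness, the certification rule analyzed in Lemma 5.4 (lines 6-7 of Algorithm 1) can be sketched as below. This is our illustration, and the constants mirror our reading of the threshold above; they should be treated as indicative rather than authoritative.

```python
import math

def certified_reachable(num_visits, t, H, eps, delta):
    """num_visits: sum_{j<=t} 1_j{s}; t: current epoch.
    Returns True if state s can be added to the pruned space S'."""
    margin = math.log(2 * H**2 * t**2 / delta) / 2
    return num_visits / 2 - margin - 0.5 > eps * t
```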
By Lemma 5.4, we have $|S'^{(m)}| \le |S^{\Pi,\epsilon}| + H$ for all $m \in [M]$, since states outside $S^{\Pi,\epsilon}$ cannot be added to $S'$. This also implies that ALG is restarted at most $|S^{\Pi,\epsilon}|$ times, and thus $M \le |S^{\Pi,\epsilon}| + 1$. Given the above, we can finally bound (I) by
$$\text{(I)} \le \mathrm{reg}\left(|S^{\Pi,\epsilon}| + H, |A|, H, \log\left(2H(|S^{\Pi,\epsilon}|+H)^3|A|T/\delta\right)\right)\sum_{m=1}^{M}\sqrt{|I_m|} \le \mathrm{reg}\left(|S^{\Pi,\epsilon}| + H, |A|, H, \log\left(2H(|S^{\Pi,\epsilon}|+H)^3|A|T/\delta\right)\right)\sqrt{(|S^{\Pi,\epsilon}|+1)\,T},$$
where the last step uses the Cauchy-Schwarz inequality.

Bounding (II): The proof relies on the following lemma.

Lemma 5.5. Given the pruned space $S'$ and the corresponding transition $P'$, for any policy $\pi$ we have
$$0 \le \langle q^{P,\pi}, \ell_t\rangle - \langle q^{P',\pi'}, \ell'_t\rangle \le H\sum_{s\in S^\Pi} q^{P,\pi}(s)\,\mathbb{1}\{s \notin S'\}.$$
Using Lemma 5.5 (the lower bound applied to $\pi^*$ and the upper bound applied to $\pi_t$), we immediately have
$$\text{(II)} \le \sum_{t=1}^{T}\langle q^{P,\pi_t}, \ell_t\rangle - \sum_{t=1}^{T}\langle q^{P'_t,\pi'_t}, \ell'_t\rangle \le H\sum_{t=1}^{T}\sum_{s\in S^\Pi}q^{P,\pi_t}(s)\,\mathbb{1}_t\{s \notin S'_t\}.$$
It then suffices to bound the right-hand side. Denote $X_t = \sum_{s\in S}\mathbb{1}_t\{s\}\mathbb{1}_t\{s \notin S'_t\}$. By definition, $X_t \in [0, H]$ and $\mathbb{E}[X_t \mid \mathcal{F}_{t-1}] = \sum_{s\in S} q^{P,\pi_t}(s)\mathbb{1}_t\{s \notin S'_t\}$. Using Lemma B.1 in the appendix, with probability $1-\delta$ we have
$$\sum_{t=1}^{T}\sum_{s\in S} q^{P,\pi_t}(s)\,\mathbb{1}_t\{s \notin S'_t\} \le 2\sum_{t=1}^{T}\sum_{s\in S}\mathbb{1}_t\{s\}\mathbb{1}_t\{s \notin S'_t\} + 2H^2\log\frac{1}{\delta}.$$
In SF-RL, once a state has been visited $2\epsilon t + \log(2H^2t^2/\delta) + 2$ times it is added to $S'$, so $\mathbb{1}_j\{s\}\mathbb{1}_j\{s \notin S'_j\}$ is zero for all subsequent epochs $j$. This implies that $\sum_{t=1}^{T}\mathbb{1}_t\{s\}\mathbb{1}_t\{s \notin S'_t\}$ is at most $2\epsilon T + \log(2H^2T^2/\delta) + 2$ for every $s \in S$. Moreover, if a state is not reachable by any policy in $\Pi$, we always have $\sum_{t=1}^{T}q^{P,\pi_t}(s)\mathbb{1}_t\{s \notin S'_t\} = 0$. Therefore, we can conclude that
$$\text{(II)} \le 2\epsilon H|S^\Pi|T + 2H^2|S^\Pi|\log(2H^2T^2/\delta) + 2|S^\Pi| + 2H^2\log(1/\delta).$$
Finally, note that we have conditioned on the events stated in Assumption 5.1, Lemma 5.4 and Lemma B.1, which hold jointly with probability at least $1-3\delta$. Combining (I) and (II) and rescaling $\delta$ completes the proof.

Remark 5.6. Interestingly, the SF-RL framework can be extended to build horizon-free and action-free algorithms, that is, algorithms that do not require the horizon length (when the horizon length is variable) or the action space as input parameters. Specifically, given $S'$, let $H'$ denote the maximum layer index among the states in $S' \setminus \{s_h^\dagger\}_{h=1}^{H}$, i.e., the maximum horizon index among identified $\epsilon$-reachable states. We further let $A'$ denote the actions corresponding to $S' \setminus \{s_h^\dagger\}_{h=1}^{H}$. When $S'$ is updated, we restart ALG with hyper-parameters $(S'_{1:H'}, A', H', P'_{1:H'})$, where $S'_{1:H'}$ and $P'_{1:H'}$ denote the states and transitions within the first $H'$ layers of $S'$ and $P'$. By using the sub-trajectory of $o'_t$ within the first $H'$ layers as the trajectory input to ALG, Lemma 5.3 still holds, and thus the black-box reduction also works. With this extension, SF-RL requires no hyper-parameter from the environment and can be regarded as completely parameter-free (a rough sketch of this bookkeeping is given below).
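The following rough sketch (ours, not the paper's code; all names are hypothetical) illustrates the extra bookkeeping behind Remark 5.6: besides the pruned state set $S'$, one can grow an effective action set $A'$ and horizon $H'$ from what has been observed, and restart ALG with these growing hyper-parameters.

```python
class ParameterFreeBookkeeping:
    def __init__(self):
        self.pruned_states = set()   # S': certified eps-reachable states
        self.seen_actions = set()    # A': actions observed at states in S'
        self.effective_horizon = 0   # H': deepest layer of a certified state

    def certify_state(self, state, layer):
        """Called when a state at the given layer passes the reachability test."""
        self.pruned_states.add(state)
        self.effective_horizon = max(self.effective_horizon, layer)
        return True   # signals that ALG should be restarted with (S', A', H')

    def record_action(self, state, action):
        if state in self.pruned_states:
            self.seen_actions.add(action)
```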
Algorithm 1 Black-box Reduction for State-free RL (SF-RL)
1: Input: action space $A$, horizon $H$, black-box algorithm ALG, confidence $\delta$, pessimism level $\epsilon$
2: for $t = 1$ to $T$ do
3:   Receive policy $\pi'_t : S' \to \Delta(A)$ from ALG
4:   Derive $\pi_t : S \to \Delta(A)$ such that $\pi_t(\cdot|s) = \pi'_t(\cdot|s)$ if $s \in S'$, and $\pi_t(\cdot|s) = \pi^\dagger(\cdot|s)$ otherwise
5:   Play policy $\pi_t$, receive trajectory $o_t = \{s_h, a_h, \ell_t(s_h,a_h)\}_{h\in[H]}$
6:   if $\exists s \in o_t$ with $s \notin S'$ such that $\sum_{j=1}^{t}\mathbb{1}_j\{s\}/2 - \log(2H^2t^2/\delta)/2 - 1/2 > \epsilon t$ then
7:     Update $S' = S' \cup \{s \in S : \sum_{j=1}^{t}\mathbb{1}_j\{s\}/2 - \log(2H^2t^2/\delta)/2 - 1/2 > \epsilon t\}$
8:     Update the policy set $\Pi'$ such that $\pi'(\cdot|s) \in \Pi(\cdot|s)$ for $s \in S' \setminus \{s_h^\dagger\}_{h\in[H]}$, and $\pi'(\cdot|s)$ puts all mass on $a^\dagger$ otherwise
9:     Restart ALG with state space $S'$, action space $A$, policy set $\Pi'$ and confidence $\delta/(2|S'|^2)$
10:  else
11:    Derive the pruned trajectory $o'_t = \{s'_h, a'_h, \ell'_t(s'_h,a'_h)\}_{h\in[H]}$ with $s'_h = s_h$, $a'_h = a_h$, $\ell'_t(s'_h,a'_h) = \ell_t(s_h,a_h)$ if $s_{1:h} \in S'$, and $s'_h = s_h^\dagger$, $a'_h = a^\dagger$, $\ell'_t(s'_h,a'_h) = 0$ otherwise
12:    Send the pruned trajectory $o'_t$ to ALG
13:  end if
14: end for

6 Improved regret bound for State-free RL

In the previous section, we introduced a black-box framework, SF-RL, that transforms any existing RL algorithm into a state-free RL algorithm. However, the regret guarantee of SF-RL is suboptimal: compared to ALG itself, SF-RL incurs a multiplicative $\sqrt{|S^{\Pi,\epsilon}|}$ factor in the regret bound. This is mainly because SF-RL needs to restart the black-box algorithm ALG whenever $S'$ is updated. Such a restarting strategy inevitably discards the learned MDP model. For this reason, and in order to achieve optimal regret rates, we need to design state-free algorithms that do not lose model information.

In this section, we introduce a novel approach that enables SF-RL to retain previous transition information after restarting ALG. We show that this method improves the regret guarantee of SF-RL by a $\sqrt{|S^{\Pi,\epsilon}|}$ factor for adversarial MDPs when combined with a specific choice of ALG. The resulting bound matches the best known regret bound for adversarial MDPs with a known state space.

In existing adversarial MDP algorithms, the model information is captured by the confidence set of transition functions. Take Jin et al. [2019] as an example. For epoch $t \ge 1$, let $N_t(s,a)$ and $M_t(s'|s,a)$ be the total numbers of visits of the pair $(s,a)$ and of the triple $(s,a,s')$ before epoch $t$. The confidence set of Jin et al. [2019] is defined as
$$\mathcal{P}_t = \left\{\hat{P} : \left|\hat{P}(s'|s,a) - \bar{P}_t(s'|s,a)\right| \le \epsilon_t(s'|s,a),\ \forall(s,a,s') \in S_h\times A\times S_{h+1},\ \forall h\right\},$$
where $\bar{P}_t(s'|s,a) = M_t(s'|s,a)/\max\{1, N_t(s,a)\}$ is the empirical transition function for epoch $t$ and $\epsilon_t(s'|s,a)$ is the confidence width defined as
$$\epsilon_t(s'|s,a) = 2\sqrt{\frac{\bar{P}_t(s'|s,a)\ln\frac{4T|S||A|}{\delta}}{\max\{1, N_t(s,a)-1\}}} + \frac{14\ln\frac{4T|S||A|}{\delta}}{3\max\{1, N_t(s,a)-1\}}.$$
As in Lemma 2 of Jin et al. [2019], by the empirical Bernstein inequality and a union bound, one can establish that $P \in \mathcal{P}_t$ for all $t > 0$ with probability at least $1-\delta$.

Intuitively, such a construction of the confidence set is overly conservative for our state-free setup. On the one hand, it requires taking a union bound over all $(s,a,s') \in S\times A\times S$ triples, resulting in an inevitable log-dependence on $|S|$. On the other hand, even if state $s'$ is unreachable, the confidence width $\epsilon_t(s'|s,a)$ is non-zero for all $(s,a)$. Furthermore, since SF-RL operates within the pruned space $S'$, it is necessary to construct the confidence set on $S'$ instead of $S$. Given these observations, we propose a new construction of the confidence sets. For every $s \in S'$, we denote by $t(s)$ the epoch at which the algorithm first visits state $s$. If a state $s$ has not been visited, we define $t(s) = \infty$.
Without loss of generality, we denote by $t(s,s') = \max\{t(s), t(s')\}$ the epoch at which both $s$ and $s'$ have been reached. Let $\bar{P}_t^{t_0}(s'|s,a) = (M_t(s'|s,a) - M_{t_0}(s'|s,a))/\max\{1, N_t(s,a) - N_{t_0}(s,a)\}$ be the partial empirical transition function corresponding to epochs $[t_0+1, t]$. We further define $S^\Pi_t$ as the set of states visited before epoch $t$. For every $t \in [T]$, we build
$$\mathcal{P}'_t = \left\{\hat{P}' : \hat{P}'(s'|s,a) \in I_t(s'|s,a),\ \forall(s,a,s')\in(S'_h\setminus\{s_h^\dagger\})\times A\times(S'_{h+1}\setminus\{s_{h+1}^\dagger\}),\ \forall h;\quad \hat{P}'(s'|s,a) = \mathbb{1}\{s' = s_{h+1}^\dagger\},\ \forall(s,a,s')\in\{s_h^\dagger\}\times A\times S'_{h+1},\ \forall h\right\},$$
where $I_t(s'|s,a) = I^1_t(s'|s,a) \cap I^2_t(s'|s,a)$, and $I^1_t(s'|s,a)$ and $I^2_t(s'|s,a)$ are two confidence intervals defined by
$$I^1_t(s'|s,a) = \left[\bar{P}_t^{t(s,s')}(s'|s,a) - \epsilon^1_t(s'|s,a),\ \bar{P}_t^{t(s,s')}(s'|s,a) + \epsilon^1_t(s'|s,a)\right],\qquad I^2_t(s'|s,a) = \begin{cases}\left[0,\ \epsilon^2_t(s'|s,a)\right], & t(s') \ge t(s)+1\\ [0,1], & \text{else}\end{cases},$$
$$\epsilon^1_t(s'|s,a) = 4\sqrt{\frac{\bar{P}_t^{t(s,s')}(s'|s,a)\log(t/\delta(s,a,s'))}{\max\{N_t(s,a) - N_{t(s,s')}(s,a) - 1, 1\}}} + \frac{20\log(t/\delta(s,a,s'))}{\max\{N_t(s,a) - N_{t(s,s')}(s,a) - 1, 1\}},\qquad \epsilon^2_t(s'|s,a) = \frac{2|S^\Pi_{t(s')}| + 24\log(t/\delta(s,a))}{\max\{N_{t(s')}(s,a) - 1, 1\}}.$$
Here, $I^1_t(s'|s,a)$ and $I^2_t(s'|s,a)$ are two Bernstein-type confidence intervals. Let us explain the high-level ideas of the design. First, to avoid wasting confidence on unreachable states, we initialize the confidence level of $I^1_t(s'|s,a)$ only once both $s$ and $s'$ are visited. The probability parameter $\delta(s,a,s')$ is $\mathcal{F}_{t(s,s')}$-measurable, because it only depends on the data before epoch $t(s,s')$. Thus, in order to avoid correlation, we can only use the data collected after epoch $t(s,s')+1$ to construct the confidence interval $I^1_t(s'|s,a)$. This creates a problem: when $t(s')$ is much greater than $t(s)$, we drop too much data that could be used to estimate $P(s'|s,a)$, making $I^1_t(s'|s,a)$ loose compared to the existing confidence interval designed in Jin et al. [2019]. To address this issue, we introduce the second confidence interval $I^2_t(s'|s,a)$. The logic behind $I^2_t(s'|s,a)$ is that $t(s') \gg t(s)$ only when the probability $P(s'|s,a)$ is very small, and therefore $N_{t(s')}(s,a) - 1$ can be used to certify an upper bound on $P(s'|s,a)$. The confidence level of $I^1_t(s'|s,a)$ can only be determined after epoch $t(s')$, whereas the confidence interval $I^2_t(s'|s,a)$ is constructed from the data between $t(s)$ and $t(s')$. By combining $I^1_t(s'|s,a)$ and $I^2_t(s'|s,a)$, the construction makes use of all the data after $t(s)$ to ensure a tight confidence interval. Considering that $t(s)$ is the first time $s$ is reached, we essentially lose only one data point, which is acceptable. By carefully designing the confidence levels $\delta(s,a,s')$ and $\delta(s,a)$, one can show the following.

Lemma 6.1. Let $i(s)$ be the index of state $s$ when states are sorted by arrival time. By setting $\delta(s,a) = \frac{\delta}{4i(s)^2|A|}$ and $\delta(s,a,s') = \frac{\delta}{4(i(s)^4 + i(s')^4)|A|}$, with probability at least $1-\delta$ we have $P' \in \mathcal{P}'_t$ for all $t \in [T]$.

Lemma 6.1 shows that this construction of the confidence set is valid. Based on the new confidence set, we show that the regret bound of SF-RL can be improved by taking $\mathcal{P}'_t$ as an additional input to the black-box algorithm ALG. We summarize the result as follows.

Theorem 6.2. (Informal) By initializing ALG as UOB-REPS [Jin et al., 2019] and taking $\mathcal{P}'_t$ as an additional input to ALG at every epoch, with probability $1-\delta$, the state-free algorithm SF-RL achieves a regret bound of order
$$O\left(H\left(|S^{\Pi,\epsilon}| + H\right)\sqrt{|A|T\log(|S^{\Pi}||A|T/\delta)} + \epsilon H|S^\Pi|T\right),$$
which matches the best existing result for non-state-free algorithms in adversarial MDPs.
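The structural sketch below is ours; the constants mirror our reconstruction of $\epsilon^1$ and $\epsilon^2$ above and are not authoritative. It shows the two-interval idea of Section 6: $I^1$ only uses data collected after both $s$ and $s'$ have been seen, $I^2$ turns a late first-visit time $t(s')$ into an upper bound on $P(s'|s,a)$, and the final interval is their intersection.

```python
import math

def interval_I1(p_hat_partial, n_after, t, delta_sas):
    """p_hat_partial: partial empirical transition using epochs > t(s, s');
    n_after: N_t(s,a) - N_{t(s,s')}(s,a)."""
    n = max(n_after - 1, 1)
    eps1 = 4 * math.sqrt(p_hat_partial * math.log(t / delta_sas) / n) \
           + 20 * math.log(t / delta_sas) / n
    return (max(p_hat_partial - eps1, 0.0), min(p_hat_partial + eps1, 1.0))

def interval_I2(n_visits_sa_before_ts2, n_states_seen_before_ts2, t, t_s, t_s2, delta_sa):
    """Upper bound on P(s'|s,a) from the fact that s' was first reached only at
    epoch t(s') despite N_{t(s')}(s,a) earlier visits of (s,a)."""
    if t_s2 < t_s + 1:
        return (0.0, 1.0)
    n = max(n_visits_sa_before_ts2 - 1, 1)
    eps2 = (2 * n_states_seen_before_ts2 + 24 * math.log(t / delta_sa)) / n
    return (0.0, min(eps2, 1.0))

def intersect(interval_a, interval_b):
    """I_t(s'|s,a) = I^1_t(s'|s,a) intersected with I^2_t(s'|s,a)."""
    return (max(interval_a[0], interval_b[0]), min(interval_a[1], interval_b[1]))
```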
7 Conclusion

This paper initiates the study of state-free RL, where the algorithm does not require information about the state space as a hyper-parameter input. Our framework SF-RL allows us to transform any existing RL algorithm into a state-free RL algorithm through a black-box reduction. Future work includes extending the SF-RL framework from the tabular setting to the setting with function approximation.

References

Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, pages 263-272, 2017.
Yu Bai, Tengyang Xie, Nan Jiang, and Yu-Xiang Wang. Provably efficient Q-learning with low switching cost. In Advances in Neural Information Processing Systems, pages 8004-8013, 2019.
Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade: Second Edition, pages 421-436. Springer, 2012.
Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2003.
Yair Carmon and Oliver Hinder. Making SGD parameter-free. In Conference on Learning Theory, pages 2360-2389. PMLR, 2022.
Mingyu Chen and Xuezhou Zhang. Improved algorithms for adversarial bandits with unbounded losses. arXiv preprint arXiv:2310.01756, 2023.
Mingyu Chen and Xuezhou Zhang. Scale-free adversarial reinforcement learning. arXiv preprint arXiv:2403.00930, 2024.
Ashok Cutkosky and Francesco Orabona. Black-box reductions for parameter-free online learning in Banach spaces. In Conference On Learning Theory, pages 1493-1529. PMLR, 2018.
Ashok Cutkosky, Aaron Defazio, and Harsh Mehta. Mechanic: A learning rate tuner. Advances in Neural Information Processing Systems, 36, 2024.
Yan Dai, Haipeng Luo, and Liyu Chen. Follow-the-perturbed-leader for adversarial Markov decision processes with bandit feedback. Advances in Neural Information Processing Systems, 35:11437-11449, 2022.
Christoph Dann, Tor Lattimore, and Emma Brunskill. Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.
Christoph Dann, Teodor Vanislavov Marinov, Mehryar Mohri, and Julian Zimmert. Beyond value-function gaps: Improved instance-dependent regret bounds for episodic reinforcement learning. Advances in Neural Information Processing Systems, 34:1-12, 2021.
Steven De Rooij, Tim Van Erven, Peter D. Grünwald, and Wouter M. Koolen. Follow the leader if you can, hedge if you must. The Journal of Machine Learning Research, 15(1):1281-1316, 2014.
Aaron Defazio and Konstantin Mishchenko. Learning-rate-free learning by D-adaptation. In International Conference on Machine Learning, pages 7449-7479. PMLR, 2023.
Omar Darwiche Domingues, Pierre Ménard, Emilie Kaufmann, and Michal Valko. Episodic reinforcement learning in finite MDPs: Minimax lower bounds revisited. In Algorithmic Learning Theory, pages 578-598, 2021.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.
Dylan J. Foster, Satyen Kale, Mehryar Mohri, and Karthik Sridharan. Parameter-free online learning via model selection. Advances in Neural Information Processing Systems, 30, 2017.
Dylan J. Foster, Akshay Krishnamurthy, and Haipeng Luo. Model selection for contextual bandits. Advances in Neural Information Processing Systems, 32, 2019.
Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, and Ronald Ortner. Efficient bias-span-constrained exploration-exploitation in reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, pages 1578-1586, 2018.
Maor Ivgi, Oliver Hinder, and Yair Carmon. DoG is SGD's best friend: A parameter-free dynamic step size schedule. In International Conference on Machine Learning, pages 14465-14499. PMLR, 2023.
Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I. Jordan. Is Q-learning provably efficient? In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 4868-4878, 2018.
Chi Jin, Tiancheng Jin, Haipeng Luo, Suvrit Sra, and Tiancheng Yu. Learning adversarial MDPs with bandit feedback and unknown transition. arXiv preprint arXiv:1912.01192, 2019.
Anders Jonsson, Emilie Kaufmann, Pierre Ménard, Omar Darwiche Domingues, Edouard Leurent, and Michal Valko. Planning in Markov decision processes with gap-dependent sample complexity. Advances in Neural Information Processing Systems, 33:1253-1263, 2020.
Sham M. Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, London, England, 2003.
Ahmed Khaled and Chi Jin. Tuning-free stochastic optimization. arXiv preprint arXiv:2402.07793, 2024.
Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, and Mengxiao Zhang. Bias no more: High-probability data-dependent regret bounds for adversarial bandits and MDPs. Advances in Neural Information Processing Systems, 33:15522-15533, 2020.
Gen Li, Laixi Shi, Yuxin Chen, Yuantao Gu, and Yuejie Chi. Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning. Advances in Neural Information Processing Systems, 34, 2021.
Haipeng Luo, Chen-Yu Wei, and Chung-Wei Lee. Policy optimization in adversarial MDPs: Improved exploration via dilated bonuses. Advances in Neural Information Processing Systems, 34:22931-22942, 2021.
Pierre Ménard, Omar Darwiche Domingues, Xuedong Shang, and Michal Valko. UCB momentum Q-learning: Correcting the bias without forgetting. In International Conference on Machine Learning, pages 7609-7618, 2021.
Francesco Orabona and Dávid Pál. Coin betting and parameter-free online learning. Advances in Neural Information Processing Systems, 29, 2016.
Francesco Orabona and Dávid Pál. Scale-free online learning. Theoretical Computer Science, 716:50-69, 2018.
Aldo Pacchiano, My Phan, Yasin Abbasi Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, and Csaba Szepesvari. Model selection in contextual stochastic bandit problems. Advances in Neural Information Processing Systems, 33:10328-10337, 2020.
Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In International Conference on Machine Learning, pages 343-351. PMLR, 2013.
Max Simchowitz and Kevin G. Jamieson. Non-asymptotic gap-dependent regret bounds for tabular MDPs. Advances in Neural Information Processing Systems, 32, 2019.
Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT Press, 2018.
Andrea Tirinzoni, Matteo Pirotta, and Alessandro Lazaric. A fully problem-dependent regret lower bound for finite-horizon MDPs. arXiv preprint arXiv:2106.13013, 2021.
Dirk van der Hoeven, Ashok Cutkosky, and Haipeng Luo. Comparator-adaptive convex bandits. Advances in Neural Information Processing Systems, 33:19795-19804, 2020.
Andrew Wagenmaker, Max Simchowitz, and Kevin Jamieson. Beyond no regret: Instance-dependent PAC reinforcement learning. arXiv preprint arXiv:2108.02717, 2021.
Haike Xu, Tengyu Ma, and Simon Du. Fine-grained gap-dependent bounds for tabular MDPs via adaptive multi-step bootstrap. In Conference on Learning Theory, pages 4438-4472. PMLR, 2021.
Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds. In Proceedings of the 36th International Conference on Machine Learning, 2019.
Zhiyu Zhang, Ashok Cutkosky, and Ioannis Paschalidis. PDE-based optimal strategy for unconstrained online learning. arXiv preprint arXiv:2201.07877, 2022.
Zihan Zhang, Yuan Zhou, and Xiangyang Ji. Almost optimal model-free reinforcement learning via reference-advantage decomposition. In Advances in Neural Information Processing Systems, 2020.
Zihan Zhang, Xiangyang Ji, and Simon Du. Is reinforcement learning more difficult than bandits? A near-optimal algorithm escaping the curse of horizon. In Conference on Learning Theory, pages 4528-4531, 2021.
Zihan Zhang, Yuxin Chen, Jason D. Lee, and Simon S. Du. Settling the sample complexity of online reinforcement learning. arXiv preprint arXiv:2307.13586, 2023.
Heyang Zhao, Jiafan He, Dongruo Zhou, Tong Zhang, and Quanquan Gu. Variance-dependent regret bounds for linear bandits and reinforcement learning: Adaptivity and computational efficiency. arXiv preprint arXiv:2302.10371, 2023.
Runlong Zhou, Zihan Zhang, and Simon Shaolei Du. Sharp variance-dependent bounds in reinforcement learning: Best of both worlds in stochastic and deterministic environments. In International Conference on Machine Learning, pages 42878-42914, 2023.

A Omitted details for Section 4

A.1 Details for Proposition 4.1

In this subsection, we illustrate that UCBVI is in fact a weakly state-free algorithm. As in Azar et al. [2017], the UCBVI algorithm consists of two parts: value iteration and policy execution. In every epoch, policy execution runs a policy that is greedy with respect to the current Q values and adds the newly encountered trajectory to the dataset. Then, value iteration uses this dataset to update the Q values of the state-action pairs. Specifically, value iteration proceeds backwards over horizons $H, \ldots, 1$. For every $(s,a) \in S_h \times A$, the update can be expressed as
$$Q(s,a) = \min\left\{H,\ r(s,a) + \langle\bar{P}(\cdot|s,a), V(\cdot)\rangle + b(s,a)\right\},$$
where $\bar{P}(\cdot|s,a)$ is the empirical transition estimate, $V(\cdot) = \max_{a\in A}Q(\cdot,a)$ is the corresponding V value, and $b(s,a)$ is the exploration bonus, defined by
$$b(s,a) = cHL\sqrt{\frac{1}{N_t(s,a)}},\qquad\text{where } L = \log\frac{|S||A|T}{\delta}.$$
Throughout the algorithm, the knowledge of $S$ is used only in the design of the exploration bonus. Specifically, the exploration bonus is designed to ensure that the following event holds for all epochs with probability at least $1-\delta$, by Hoeffding's inequality and a union bound:
$$\xi = \left\{\left|\langle\bar{P}(\cdot|s,a) - P(\cdot|s,a), V(\cdot)\rangle\right| \le b(s,a),\ \forall(s,a)\in S\times A\right\}.\qquad (1)$$
When the reachable space $S^\Pi$ is known, substituting $S^\Pi$ for $S$ in advance, as in Theorem 2 of Azar et al. [2017], yields a regret for UCBVI bounded by $O(H\sqrt{|S^\Pi||A|T\log(|S^\Pi||A|T/\delta)})$. When $S^\Pi$ is unknown, we have to use $S$ to design the bonus, so the exploration bonus is amplified by a factor of $\sqrt{\log(|S|)/\log(|S^\Pi|)}$. In this case, by the optimism lemma, the regret guarantee increases by at most the same factor. Such a result suggests that UCBVI is weakly state-free.
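For contrast with the sequential allocation described in Appendix A.2 just below, the following minimal sketch (ours, not the paper's code; the constant $c$ is illustrative) compares the two bonus constructions: the standard UCBVI bonus splits the confidence uniformly over $|S||A|T$, while the sequential allocation splits it according to the arrival index $i(s)$ of each visited state and therefore never needs $|S|$.

```python
import math

def ucbvi_bonus_uniform(H, n_visits, num_states, num_actions, T, delta, c=1.0):
    """Standard bonus: union bound over all |S||A|T state-action-epoch triples."""
    L = math.log(num_states * num_actions * T / delta)
    return c * H * L * math.sqrt(1.0 / max(n_visits, 1))

def ucbvi_bonus_sequential(H, n_visits, arrival_index, num_actions, T, delta, c=1.0):
    """State-free bonus: state s receives confidence delta / (2 i(s)^2 |A| T),
    so the total confidence is still at most delta but |S| never appears."""
    L = math.log(2 * arrival_index**2 * num_actions * T / delta)
    return c * H * L * math.sqrt(1.0 / max(n_visits - 1, 1))
```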
A.2 Details for Remark 4.2

Based on Proposition 4.1, here we show how to escape the log-dependence on $|S|$ under the UCBVI framework. The idea is simple: when constructing the exploration bonus, instead of allocating confidence $\delta/(|S||A|T)$ to every state-action-epoch triple uniformly, we initialize the confidence level of each state based on its arrival time. Let $i(s)$ be the index of state $s$ when states are sorted by arrival time. The exploration bonus is set to
$$b(s,a) = cHL\sqrt{\frac{1}{\max\{N_t(s,a)-1, 1\}}},\qquad\text{where } L = \log\frac{2\,i(s)^2|A|T}{\delta}.$$
Broadly speaking, for every visited state $s \in S$, we allocate confidence $\delta/(2i(s)^2|A|T)$ to each of its state-action-epoch triples. In this way, the confidence allocated to state $s$ is bounded by $\delta/(2i(s)^2)$, and the total confidence is bounded by $\sum_{i=1}^{\infty}\delta/(2i^2) \le \delta$. Specifically, to avoid correlation between the confidence level and the subsequent confidence sequence, we initialize the confidence sequence of $\langle\bar{P}(\cdot|s,a) - P(\cdot|s,a), V(\cdot)\rangle$ at epoch $t(s)+1$, where $t(s)$ is the epoch at which state $s$ is visited for the first time. This is possible because the confidence level $\delta/(2i(s)^2|A|T)$ is determined as soon as the algorithm first reaches state $s$. In this way, we lose one data point when constructing the confidence sequence, which is why the visit count is $\max\{N_t(s,a)-1, 1\}$ rather than $N_t(s,a)$. Additionally, by the definition of $b(s,a)$, we have $b(s,a) \ge H$ for the epochs before $t(s)$, which implies that the bonus ensures optimism for all epochs before $t(s)$ with probability $1$. Combining the above, with probability $1 - \delta/(2i(s)^2|A|T)$ the bonus $b(s,a)$ ensures optimism for all epochs. By a union bound, we conclude that the proposed exploration bonuses also make the event in (1) hold with probability $1-\delta$. Given that $i(s) \le |S^\Pi|$ for all visited states, the new exploration bonus is of the same order as the bonus constructed with known $S^\Pi$. In this way, we make the regret bound completely independent of $|S|$ and turn UCBVI into a fully state-free algorithm.
A.3 Details for Observation 4.3

In this subsection, we demonstrate that, within the current analysis framework for adversarial MDPs, eliminating the polynomial dependence on $|S|$ is impossible. To the best of our knowledge, in order to handle the unknown transitions, all existing works share the same idea of maintaining a confidence set $\mathcal{P}_t$ of transition functions for $t \in [T]$. The confidence set can be written as
$$\mathcal{P}_t = \left\{\hat{P} : \left|\hat{P}(s'|s,a) - \bar{P}(s'|s,a)\right| \le \epsilon_t(s'|s,a),\ \forall(s,a,s') \in S\times A\times S\right\},$$
where $\epsilon_t(s'|s,a)$ is the confidence width and $\bar{P}(s'|s,a)$ is the empirical estimate of the true transition $P(s'|s,a)$. Specifically, $\epsilon_t(s'|s,a)$ is set based on the empirical Bernstein inequality, i.e.,
$$\epsilon_t(s'|s,a) = O\left(\sqrt{\frac{\bar{P}(s'|s,a)\log(SAT/\delta)}{N_t(s,a)}} + \frac{\log(SAT/\delta)}{N_t(s,a)}\right),\qquad\forall(s,a,s')\in S\times A\times S,$$
where $N_t(s,a)$ denotes the visit counter of the state-action pair $(s,a)$. This setting ensures that $P \in \mathcal{P}_t$ for all $t \in [T]$ with probability at least $1-\delta$. Given the confidence set, the algorithm selects some $\hat{P}_t \in \mathcal{P}_t$ as the approximation of $P$ and chooses the policy $\pi_t$ based on this approximate transition. For a clearer understanding, we decompose the regret into two terms:
$$\sum_{t=1}^{T}\langle q^{P,\pi_t} - q^{P,\pi^*}, \ell_t\rangle = \underbrace{\sum_{t=1}^{T}\langle q^{\hat{P}_t,\pi_t} - q^{P,\pi^*}, \ell_t\rangle}_{\text{REGRET}} + \underbrace{\sum_{t=1}^{T}\langle q^{P,\pi_t} - q^{\hat{P}_t,\pi_t}, \ell_t\rangle}_{\text{ERROR}}.$$
Here, the first term REGRET represents the regret of the algorithm under the approximate transition. In some sense, bounding REGRET reduces to a bandit problem: in every round $t$, the algorithm chooses an occupancy measure $\hat{q}_t$ induced by some pair $(\hat{P}_t, \pi_t) \in \mathcal{P}_t \times \Pi$, and then obtains a partial observation of the loss $\ell_t$. The second term ERROR corresponds to the error from using $\hat{P}_t$ to approximate $P$. Considering the adversarial environment, there exists a sufficiently bad loss sequence $\ell_1, \ldots, \ell_T$ such that
$$\text{ERROR} \ge \Omega\left(\sum_{t=1}^{T}\sum_{(s,a)\in S^\Pi\times A}\left|q^{P,\pi_t}(s,a) - q^{\hat{P}_t,\pi_t}(s,a)\right|\right).$$
Thus, bounding ERROR essentially equates to bounding the right-hand side of the above. At this point, one needs to show that the confidence set shrinks at the correct rate over time, so that the sum of the gaps between $q^{P,\pi_t}$ and $q^{\hat{P}_t,\pi_t}$ can be well bounded.

Having provided sufficient background, we now explain why existing methods fail to achieve state-free regret bounds. First, since the confidence set must work for all $(s,a,s')$ triples, we have to take a union bound over the good events for all $(s,a,s') \in S\times A\times S$. This already causes $\epsilon_t(s'|s,a)$ to depend logarithmically on $|S|$. More importantly, to bound $|q^{P,\pi_t}(s,a) - q^{\hat{P}_t,\pi_t}(s,a)|$, existing methods mainly follow the proof of Lemma 4 in Jin et al. [2019], that is, for every $(s,a)\in S^\Pi\times A$ and $\pi_t\in\Pi$, there exists $\hat{P}_t \in \mathcal{P}_t$ for which the bound
$$\left|q^{\hat{P}_t,\pi_t}(s,a) - q^{P,\pi_t}(s,a)\right| \le \sum_{s_h,a_h,s_{h+1}}\epsilon_t(s_{h+1}|s_h,a_h)\,q^{P,\pi_t}(s_h,a_h)\,q^{\hat{P}_t,\pi_t}(s,a|s_{h+1})$$
is essentially tight. By the definition of the confidence width, we have $\epsilon_t(s_{h+1}|s_h,a_h) \ge \Omega(1/N_t(s_h,a_h))$ for all $(s_h,a_h,s_{h+1})$ triples. Furthermore, if the state $s_{h+1}$ is unvisited, the algorithm has no information about the transitions after this state. Assuming that $S_h \neq S^\Pi_h$ for some $h\in[H]$, there always exists a bad enough $\hat{P}_t \in \mathcal{P}_t$ such that the probability of reaching $s$ via $s_{h+1}$ under policy $\pi_t$ is $1$, i.e., $q^{\hat{P}_t,\pi_t}(s|s_{h+1}) = 1$. In this case, we have
$$\sum_{s,a}\left|q^{\hat{P}_t,\pi_t}(s,a) - q^{P,\pi_t}(s,a)\right| \ge \Omega\left(\sum_{s_h,a_h}\frac{q^{P,\pi_t}(s_h,a_h)}{N_t(s_h,a_h)}\sum_{s_{h+1}}\mathbb{1}\{s_{h+1}\text{ is unreachable}\}\right) = \Omega\left(\sum_{s_h,a_h}\frac{q^{P,\pi_t}(s_h,a_h)}{N_t(s_h,a_h)}\left(|S_{h+1}| - |S^\Pi_{h+1}|\right)\right).$$
Based on this analysis, summed over $t$, ERROR is at least of order $\sum_{h\in[H]}(|S_{h+1}| - |S^\Pi_{h+1}|)$ up to problem-dependent factors, which depends polynomially on $|S|$ when $|S^\Pi| \ll |S|$. Such a result suggests that the current analysis cannot yield a state-free regret bound for adversarial MDPs.

B Omitted proofs for Section 5

B.1 Proof of Lemma 5.3

Fix $S'$, and consider a pruned trajectory $o'_t$ derived by SF-RL of the form
$$o'_t = \left\{s_1, a_1, \ell_t(s_1,a_1), \ldots, s_h, a_h, \ell_t(s_h,a_h), s^\dagger_{h+1}, a^\dagger, 0, \ldots, s^\dagger_H, a^\dagger, 0\right\}.$$
By definition, $s_{1:h} \in S'$ and $s_{h+1} \notin S'$. In the following, we complete the proof by demonstrating that the likelihood of obtaining $o'_t$ under algorithm SF-RL equals the likelihood of obtaining $o'_t$ by executing $\pi'_t$ on $P'$ and $\ell'_t$.

We first analyze the likelihood of obtaining $o'_t$ under SF-RL. Let us review the algorithm: SF-RL executes the policy $\pi_t$ and obtains the trajectory $o_t$, generated by the underlying transition $P$ and loss $\ell_t$; then, using $S'$, SF-RL degrades $o_t$ into the pruned trajectory $o'_t$. In order to generate the $o'_t$ defined above, $o_t$ needs to satisfy: (1) at horizons $1$ to $h$, the state-action pairs are $(s_1,a_1)$ to $(s_h,a_h)$ respectively; (2) at horizon $h+1$, the visited state does not belong to $S'$. Therefore, the likelihood can be written as
$$\Pr\left(o'_t \mid \text{SF-RL}\right) = \underbrace{\prod_{k=1}^{h}\pi_t(a_k|s_k)\prod_{k=1}^{h-1}P(s_{k+1}|s_k,a_k)}_{\text{likelihood of visiting }\{s_k,a_k\}_{k=1}^{h}}\cdot\underbrace{\sum_{s'\in S_{h+1}:\ s'\notin S'_{h+1}}P(s'|s_h,a_h)}_{\text{likelihood of leaving }S'\text{ at horizon }h+1}.$$
Now we study the likelihood of obtaining $o'_t$ by executing $\pi'_t$ on $P'$ and $\ell'_t$. By definition, for every $s \in S'$, $\ell'_t(s,a) = \ell_t(s,a)$ if $s \notin \{s^\dagger_h\}_{h\in[H]}$, and $\ell'_t(s,a) = 0$ otherwise. Using this observation, we can rewrite the losses $\ell_t(s_k,a_k)$ as $\ell'_t(s_k,a_k)$ for $k = 1, \ldots, h$, and rewrite the remaining zero losses as $\ell'_t(s^\dagger_k, a^\dagger)$.
Therefore, it suffices to focus on the likelihood of obtaining the state-action pairs of $o'_t$. In this regard, we have
$$\Pr\left(o'_t \mid P', \ell'_t, \pi'_t\right) = \prod_{k=1}^{h-1}\pi'_t(a_k|s_k)P'(s_{k+1}|s_k,a_k)\cdot\pi'_t(a_h|s_h)P'(s^\dagger_{h+1}|s_h,a_h)\cdot\prod_{k=h+1}^{H}\pi'_t(a^\dagger|s^\dagger_k)P'(s^\dagger_{k+1}|s^\dagger_k,a^\dagger)$$
$$= \prod_{k=1}^{h-1}\pi'_t(a_k|s_k)P'(s_{k+1}|s_k,a_k)\cdot\pi'_t(a_h|s_h)P'(s^\dagger_{h+1}|s_h,a_h) = \prod_{k=1}^{h-1}\pi_t(a_k|s_k)P(s_{k+1}|s_k,a_k)\cdot\pi_t(a_h|s_h)\sum_{s'\in S_{h+1}:\ s'\notin S'_{h+1}}P(s'|s_h,a_h),$$
where the first equality holds because $\prod_{k=h+1}^{H}\pi'_t(a^\dagger|s^\dagger_k)P'(s^\dagger_{k+1}|s^\dagger_k,a^\dagger) = 1$ by definition, the intermediate equalities follow from the definitions of $P'$ and $\pi'_t$, i.e., $\pi'_t(a_k|s_k) = \pi_t(a_k|s_k)$ and $P'(s_{k+1}|s_k,a_k) = P(s_{k+1}|s_k,a_k)$ for $k = 1, \ldots, h$, and the last equality follows from the definition of $P'(s^\dagger_{h+1}|s_h,a_h)$. Using the above, we obtain $\Pr(o'_t \mid \text{SF-RL}) = \Pr(o'_t \mid P', \ell'_t, \pi'_t)$, which completes the proof.

B.2 Proof of Lemma 5.4

The proof is mainly based on the following lemma.

Lemma B.1. Let $\{\mathcal{F}_t\}_{t>0}$ be a filtration and $(X_t \ge 0)_{t\in\mathbb{N}^+}$ be a sequence of non-negative random variables with $\mathbb{E}[X_t \mid \mathcal{F}_{t-1}] = P_t$, where $P_t$ is $\mathcal{F}_{t-1}$-measurable. Given a confidence level $\delta$ that is $\mathcal{F}_t$-measurable, with probability at least $1-2\delta$, for all $n > t$,
$$\sum_{j=t+1}^{n}X_j \le 2\sum_{j=t+1}^{n}P_j + \log\frac{1}{\delta}\qquad\text{and}\qquad\sum_{j=t+1}^{n}P_j \le 2\sum_{j=t+1}^{n}X_j + \log\frac{1}{\delta}.$$

Let $i(s)$ be the index of state $s$ when states are sorted by arrival time, and recall that $t(s)$ is the epoch at which the algorithm first visits state $s$. Clearly, $i(s)$ is $\mathcal{F}_{t(s)}$-measurable. Using Lemma B.1 and a union bound, we immediately have that, with probability at least $1-\delta$,
$$\forall s\in S,\ \forall n > t(s):\qquad \sum_{j=t(s)+1}^{n} q^{P,\pi_j}(s) \ge \frac{1}{2}\sum_{j=t(s)+1}^{n}\mathbb{1}_j\{s\} - \frac{1}{2}\log\frac{2i(s)^2}{\delta}.$$
Assume the above holds. By SF-RL, a state $s$ is added to $S'$ at epoch $n$ only if $\sum_{j=1}^{n}\mathbb{1}_j\{s\}/2 - \log(2H^2n^2/\delta)/2 - 1/2 > \epsilon n$. Since at most $Hn$ states can be visited before epoch $n$, we have $i(s) \le Ht(s) \le Hn$. Moreover, $\sum_{j=1}^{n}\mathbb{1}_j\{s\} = \sum_{j=t(s)+1}^{n}\mathbb{1}_j\{s\} + 1$ by the definition of $t(s)$. Thus, we have
$$n\max_{\pi\in\Pi}q^{P,\pi}(s) \ge \sum_{j=t(s)+1}^{n}q^{P,\pi_j}(s) \ge \sum_{j=1}^{n}\frac{\mathbb{1}_j\{s\}}{2} - \frac{1}{2} - \frac{1}{2}\log\frac{2i(s)^2}{\delta} \ge \sum_{j=1}^{n}\frac{\mathbb{1}_j\{s\}}{2} - \frac{1}{2} - \frac{1}{2}\log\frac{2H^2n^2}{\delta} > \epsilon n,$$
which implies that state $s$ is $\epsilon$-reachable. This completes the proof.

B.3 Proof of Lemma 5.5

To prove Lemma 5.5, the key observation is that
$$\langle q^{P',\pi'}, \ell'_t\rangle = \mathbb{E}\left[\sum_{h=1}^{H}\ell'_t(s_h,a_h)\,\Big|\,P',\pi'\right] = \mathbb{E}\left[\sum_{h=1}^{H}\ell_t(s_h,a_h)\mathbb{1}\{s_{1:h}\in S'\}\,\Big|\,P,\pi\right],$$
where the first equality is by the definition of the occupancy measure and the second equality follows from Lemma 5.3. Using this observation, we have
$$\langle q^{P,\pi}, \ell_t\rangle - \langle q^{P',\pi'}, \ell'_t\rangle = \mathbb{E}\left[\sum_{h=1}^{H}\ell_t(s_h,a_h)\,\Big|\,P,\pi\right] - \mathbb{E}\left[\sum_{h=1}^{H}\ell_t(s_h,a_h)\mathbb{1}\{s_{1:h}\in S'\}\,\Big|\,P,\pi\right].$$
Hence $\langle q^{P,\pi}, \ell_t\rangle - \langle q^{P',\pi'}, \ell'_t\rangle \ge 0$, and
$$\langle q^{P,\pi}, \ell_t\rangle - \langle q^{P',\pi'}, \ell'_t\rangle \le H\,\mathbb{E}\left[\mathbb{1}\{\exists h,\ s_h\notin S'\}\mid P,\pi\right] \le H\sum_{s\in S^\Pi}q^{P,\pi}(s)\mathbb{1}\{s\notin S'\},$$
which completes the proof.

C Omitted proofs for Section 6

C.1 Proof of Lemma 6.1

We first fix a horizon $h$. For every $(s,a,s')\in S_h\times A\times S_{h+1}$, considering that $\delta(s,a,s')$ is $\mathcal{F}_{t(s,s')}$-measurable, by the empirical Bernstein inequality and a union bound we immediately have $P(s'|s,a) \in I^1_t(s'|s,a)$ for all $t \ge t(s,s')+1$ with probability at least $1-\delta(s,a,s')$. By the definition of $P'$, we have $P'(s'|s,a) = P(s'|s,a)$ once $s,s'\in S'$. Furthermore, the confidence sequence of $P'(s'|s,a)$ is initialized once both states $s$ and $s'$ have been visited, which can happen as early as epoch $t(s,s')+1$. Given $\sum_{s\in S,a\in A,s'\in S}\delta(s,a,s') \le \delta/2$, the following event
$$\xi_1 = \left\{P'(s'|s,a)\in I^1_t(s'|s,a)\ \ \forall t\in[T],\ (s,a,s')\in(S'_h\setminus\{s^\dagger_h\})\times A\times(S'_{h+1}\setminus\{s^\dagger_{h+1}\}),\ \forall h\right\}$$
holds with probability at least $1-\delta/2$.

It now suffices to focus on the second confidence interval $I^2_t(s'|s,a)$. For every $(s,a)\in S_h\times A$, we define $\mathcal{S}^{s,a}_t = \{s' : \sum_{\tau=t(s)}^{t-1}\mathbb{1}_\tau\{s',s,a\} = 0\}$ as the set of states $s'$ such that the state-action-state triple $(s,a,s')$ has not been visited before epoch $t$. Notice that $\mathcal{S}^{s,a}_t$ is $\mathcal{F}_{t-1}$-measurable and $\mathbb{E}[\mathbb{1}_t\{\mathcal{S}^{s,a}_t|s,a\}\mid\mathcal{F}_{t-1}] = P(\mathcal{S}^{s,a}_t|s,a)$.
Given an $\mathcal{F}_{t(s)}$-measurable confidence $\delta(s,a)$, by the empirical Bernstein inequality and a union bound, with probability at least $1-\delta(s,a)$ we have, for all $t \ge t(s)+1$,
$$\sum_{\tau=t(s)+1}^{t}P(\mathcal{S}^{s,a}_\tau|s,a)\mathbb{1}_\tau\{s,a\} \le \sum_{\tau=t(s)+1}^{t}\mathbb{1}_\tau\{\mathcal{S}^{s,a}_\tau|s,a\}\mathbb{1}_\tau\{s,a\} + 2\sqrt{\sum_{\tau=t(s)+1}^{t}\mathbb{1}_\tau\{\mathcal{S}^{s,a}_\tau|s,a\}\mathbb{1}_\tau\{s,a\}\,\log\frac{2t^2}{\delta(s,a)}} + \frac{14\log\frac{2t^2}{\delta(s,a)}}{3}.$$
By the definition of $\mathcal{S}^{s,a}_t$, we have $\sum_{\tau=t(s)+1}^{t}\mathbb{1}_\tau\{\mathcal{S}^{s,a}_\tau|s,a\}\mathbb{1}_\tau\{s,a\} \le |S^\Pi_t|$, since every such visit reveals a previously unvisited triple. Using the above, with probability at least $1-\delta(s,a)$,
$$\sum_{\tau=t(s)+1}^{t}P(\mathcal{S}^{s,a}_\tau|s,a)\mathbb{1}_\tau\{s,a\} \le |S^\Pi_t| + 2\sqrt{|S^\Pi_t|\log\frac{2t^2}{\delta(s,a)}} + \frac{14\log\frac{2t^2}{\delta(s,a)}}{3} \le 2|S^\Pi_t| + 24\log\frac{t}{\delta(s,a)}.$$
Consider a state $s'\in S$ such that $t(s') \ge t(s)+1$. By definition, $s'\in\mathcal{S}^{s,a}_\tau$ for all $t(s)+1 \le \tau \le t(s')$, which implies that $P(s'|s,a) \le P(\mathcal{S}^{s,a}_\tau|s,a)$ for all $t(s)+1 \le \tau \le t(s')$. In this case, with probability at least $1-\delta(s,a)$ we finally have
$$P(s'|s,a) \le \frac{\sum_{\tau=t(s)+1}^{t(s')}P(\mathcal{S}^{s,a}_\tau|s,a)\mathbb{1}_\tau\{s,a\}}{\sum_{\tau=t(s)+1}^{t(s')}\mathbb{1}_\tau\{s,a\}} \le \frac{2|S^\Pi_{t(s')}| + 24\log\frac{t(s')}{\delta(s,a)}}{\max\{N_{t(s')}(s,a)-1, 1\}},\qquad\forall s' : t(s') \ge t(s)+1.$$
Given $\sum_{s\in S,a\in A}\delta(s,a) \le \delta/2$ and $P'(s'|s,a) = P(s'|s,a)$, the following event
$$\xi_2 = \left\{P'(s'|s,a)\in I^2_t(s'|s,a)\ \ \forall t\in[T],\ (s,a,s')\in(S'_h\setminus\{s^\dagger_h\})\times A\times(S'_{h+1}\setminus\{s^\dagger_{h+1}\}),\ \forall h\right\}$$
holds with probability at least $1-\delta/2$. A union bound over the events $\xi_1$ and $\xi_2$ completes the proof.

C.2 Proof of Theorem 6.2

The proof of Theorem 6.2 is technical but mostly follows the ideas of the proof of Lemma 4 in Jin et al. [2019]. First, as in Jin et al. [2019], we show that the confidence set $\mathcal{P}'_t$ is tight enough, i.e., the difference between the true transition function and any transition function from the confidence set can be well bounded.

Lemma C.1. Under the event of Lemma 6.1, for every epoch $t\in[T]$, every $\hat{P}'_t\in\mathcal{P}'_t$, every $h = 0,\ldots,H-1$ and $(s,a,s')\in(S'_h\setminus\{s^\dagger_h\})\times A\times(S'_{h+1}\setminus\{s^\dagger_{h+1}\})$, we have
$$\left|\hat{P}'_t(s'|s,a) - P'(s'|s,a)\right| \le O\left(\sqrt{\frac{P'(s'|s,a)\log\frac{|S^\Pi||A|T}{\delta}}{\max\{N_t(s,a),1\}}} + \frac{|S^\Pi| + \log\frac{|S^\Pi||A|T}{\delta}}{\max\{N_t(s,a),1\}}\right) =: \epsilon'_t(s'|s,a).$$

Lemma C.2 (Refined regret guarantee for Theorem 3 in Jin et al. [2019]). With probability $1-\delta$, for all $K > 0$, with confidence sets $\mathcal{P}_1,\ldots,\mathcal{P}_K$, the regret of UOB-REPS after $K$ epochs of interaction with the MDP $M = (S,A,H,P)$ and loss sequence $\ell_1,\ldots,\ell_K$ is bounded by
$$R_{\text{UOB-REPS}}(K) \le O\left(\sqrt{H|S||A|K\log\frac{|S||A|K}{\delta}} + \sum_{k=1}^{K}\sum_{s\in S,a\in A}\left|q^{\hat{P}^s_k,\pi_k}(s,a) - q^{P,\pi_k}(s,a)\right||\ell_k(s,a)|\right),$$
where $\{\pi_k\}_{k\in[K]}$ is the collection of played policies and $\{\hat{P}^s_k\}_{s\in S,k\in[K]}$ is a collection of transition functions selected pessimistically, i.e., for all $s\in S$ and $k\in[K]$,
$$\hat{P}^s_k = \arg\max_{\hat{P}\in\mathcal{P}_k}\sum_{a\in A}\left|q^{\hat{P},\pi_k}(s,a) - q^{P,\pi_k}(s,a)\right||\ell_k(s,a)|.$$

Recall the proof sketch in Section 5, where we decompose the regret $R(T)$ into (I) and (II), representing ALG's regret and the error incurred by the difference between $S$ and $S'$, respectively. By the proof of Theorem 5.2, we have $\text{(II)} \le O(\epsilon H|S^\Pi|T)$, so it suffices to focus on (I). By Lemma C.2 (and using $|\ell'_t(s,a)| \le 1$), we have
$$\text{(I)} \le O\left(\sum_{m=1}^{M}\sqrt{H|S'^{(m)}||A||I_m|\log\frac{|S'^{(m)}||A||I_m|}{\delta}} + \sum_{t=1}^{T}\sum_{s\in S'_t\setminus\{s^\dagger_h\}_{h\in[H]},\,a\in A}\left|q^{\hat{P}^s_t,\pi'_t}(s,a) - q^{P'_t,\pi'_t}(s,a)\right|\right)$$
$$\le O\left(\left(|S^{\Pi,\epsilon}|+H\right)\sqrt{H|A|T\log\frac{|S^{\Pi,\epsilon}||A|T}{\delta}} + \sum_{t=1}^{T}\sum_{s\in S'_t\setminus\{s^\dagger_h\}_{h\in[H]},\,a\in A}\left|q^{\hat{P}^s_t,\pi'_t}(s,a) - q^{P'_t,\pi'_t}(s,a)\right|\right),$$
where $\hat{P}^s_t\in\mathcal{P}'_t$ for all $t\in[T]$ and $s\in S'_t$. It suffices to focus on the second term. For every $s\in S'_t\setminus\{s^\dagger_h\}_{h\in[H]}$ and $a\in A$, let $h(s)$ be the index of the layer to which $s$ belongs. According to the proof of Lemma 4 in Jin et al. [2019] (specifically their Eq. (15)), we have
$$\left|q^{\hat{P}^s_t,\pi'_t}(s,a) - q^{P'_t,\pi'_t}(s,a)\right| \le \sum_{m<h(s)}\ \sum_{s_m\in S'_{t,m},\,a_m\in A,\,s_{m+1}\in S'_{t,m+1}}\left|\hat{P}^s_t(s_{m+1}|s_m,a_m) - P'_t(s_{m+1}|s_m,a_m)\right|q^{P'_t,\pi'_t}(s_m,a_m)\,q^{\hat{P}^s_t,\pi'_t}(s,a|s_{m+1}),$$
where $S'_{t,m}$ denotes the states of $S'_t$ at layer $m$.
Intuitively, in order to continue the proof, we should apply Lemma C.1 and bound $\big|\hat P^s_t(s_{m+1}|s_m,a_m)-P'_t(s_{m+1}|s_m,a_m)\big|$ by $\epsilon_t(s_{m+1}|s_m,a_m)$. However, when $s_m=s'_m$ or $s_{m+1}=s'_{m+1}$, the confidence width $\epsilon_t(s_{m+1}|s_m,a_m)$ is not well defined. To address this, we show that the terms involving the auxiliary states $s'_m$ or $s'_{m+1}$ can be disregarded. We argue case by case:

1. ($s_m=s'_m$): By the definitions of $P'_t$ and $\mathcal P_t$, we always have $P'_t(s_{m+1}|s_m,a_m)=\hat P^s_t(s_{m+1}|s_m,a_m)=\mathbb 1\{s_{m+1}=s'_{m+1}\}$, which implies $\big|P'_t(s_{m+1}|s_m,a_m)-\hat P^s_t(s_{m+1}|s_m,a_m)\big|=0$.

2. ($s_m\neq s'_m$, $s_{m+1}=s'_{m+1}$): By the definition of $P'_t$, after visiting state $s'_{m+1}$ the probability of visiting the state $s$ at horizon $h(s)$ is zero. This means that $q^{\hat P^s_t,\pi'_t}(s,a\mid s'_{m+1})=0$.

3. ($s_m\neq s'_m$, $s_{m+1}\neq s'_{m+1}$): By Lemma C.1, we can bound $\big|\hat P^s_t(s_{m+1}|s_m,a_m)-P'_t(s_{m+1}|s_m,a_m)\big|$ by $\epsilon_t(s_{m+1}|s_m,a_m)$.

Using the above, it suffices to show that
$$
\big|q^{\hat P^s_t,\pi'_t}(s,a)-q^{P'_t,\pi'_t}(s,a)\big|
\;\le\;\sum_{m<h(s)}\ \sum_{s_m\in\mathcal S'_{t,m}\setminus\{s'_m\},\,a_m\in\mathcal A,\,s_{m+1}\in\mathcal S'_{t,m+1}\setminus\{s'_{m+1}\}}
\epsilon_t(s_{m+1}|s_m,a_m)\,q^{P'_t,\pi'_t}(s_m,a_m)\,q^{\hat P^s_t,\pi'_t}(s,a\mid s_{m+1})
\;\le\;\sum_{m<h(s)}\ \sum_{s_m,\,a_m,\,s_{m+1}}
\epsilon_t(s_{m+1}|s_m,a_m)\,q^{P,\pi_t}(s_m,a_m)\,q^{\hat P^s_t,\pi'_t}(s,a\mid s_{m+1}),
$$
where the second inequality is by Lemma 5.5, i.e., $q^{P'_t,\pi'_t}(s_m,a_m)=\langle q^{P'_t,\pi'_t},\mathbb 1\{s_m,a_m\}\rangle\le\langle q^{P,\pi_t},\mathbb 1\{s_m,a_m\}\rangle=q^{P,\pi_t}(s_m,a_m)$. By the exact same analysis, we also have
$$
q^{\hat P^s_t,\pi'_t}(s,a\mid s_{m+1})-q^{P'_t,\pi'_t}(s,a\mid s_{m+1})
\;\le\;\sum_{h>m}\ \sum_{s_h\in\mathcal S'_{t,h}\setminus\{s'_h\},\,a_h\in\mathcal A,\,s_{h+1}\in\mathcal S'_{t,h+1}\setminus\{s'_{h+1}\}}
\epsilon_t(s_{h+1}|s_h,a_h)\,q^{P'_t,\pi'_t}(s_h,a_h\mid s_{m+1})\,q^{\hat P^s_t,\pi'_t}(s,a\mid s_{h+1})
\;\le\;\sum_{h>m}\ \sum_{s_h,\,a_h,\,s_{h+1}}
\epsilon_t(s_{h+1}|s_h,a_h)\,q^{P'_t,\pi'_t}(s_h,a_h\mid s_{m+1})
\;\le\;\sum_{h>m}\ \sum_{s_h,\,a_h,\,s_{h+1}}
\epsilon_t(s_{h+1}|s_h,a_h)\,q^{P,\pi_t}(s_h,a_h\mid s_{m+1}).
$$
To simplify notation, in what follows we use the shorthands $w_h=(s_h,a_h,s_{h+1})$ and $\epsilon_t(w_h)=\epsilon_t(s_{h+1}|s_h,a_h)$. We further denote by $W'_{t,h}$ the set of state-action-state triples $\mathcal S'_{t,h}\setminus\{s'_h\}\times\mathcal A\times\mathcal S'_{t,h+1}\setminus\{s'_{h+1}\}$ and by $W^{\Pi,\epsilon}_h$ the set of reachable triples $\mathcal S^{\Pi,\epsilon}_h\times\mathcal A\times\mathcal S^{\Pi,\epsilon}_{h+1}$. Using the above two inequalities, we have
$$
\big|q^{\hat P^s_t,\pi'_t}(s,a)-q^{P'_t,\pi'_t}(s,a)\big|
\;\le\;\sum_{m<h(s)}\sum_{w_m\in W'_{t,m}}\epsilon_t(w_m)\,q^{P,\pi_t}(s_m,a_m)\,q^{P'_t,\pi'_t}(s,a\mid s_{m+1})
+\sum_{m<h(s)}\sum_{w_m\in W'_{t,m}}\epsilon_t(w_m)\,q^{P,\pi_t}(s_m,a_m)\sum_{h>m}\sum_{w_h\in W'_{t,h}}\epsilon_t(w_h)\,q^{P,\pi_t}(s_h,a_h\mid s_{m+1})
\;\le\;\sum_{m<h(s)}\sum_{w_m\in W^{\Pi,\epsilon}_m}\epsilon_t(w_m)\,q^{P,\pi_t}(s_m,a_m)\,q^{P,\pi_t}(s,a\mid s_{m+1})
+\sum_{m<h(s)}\sum_{w_m\in W^{\Pi,\epsilon}_m}\epsilon_t(w_m)\,q^{P,\pi_t}(s_m,a_m)\sum_{h>m}\sum_{w_h\in W^{\Pi,\epsilon}_h}\epsilon_t(w_h)\,q^{P,\pi_t}(s_h,a_h\mid s_{m+1}).
$$
The last inequality is based on Lemma 5.4, i.e., $W'_{t,h}\subseteq W^{\Pi,\epsilon}_h$ for all $t\in[T]$ and $h\in[H]$ with probability $1-\delta$. Following the proof in Jin et al. [2019], we can then take the sum over states $s\in\mathcal S'_t\setminus\{s'_h\}_{h\in[H]}$ and actions $a\in\mathcal A$.

D.1 Proof of Lemma B.1

Since $Y_n$ and $Z_n$ are non-negative supermartingales with $Y_0=Z_0=1$, we have $\mathbb P(\exists\,n>0:\ Y_n\ge 1/\delta)\le\delta$ and $\mathbb P(\exists\,n>0:\ Z_n\ge 1/\delta)\le\delta$. After taking the logarithm of both $Y_n$ and $Z_n$, we get
$$
\sum_{t=1}^{n}(X_t-2P_t)\le\log\frac{1}{\delta}
\qquad\text{and}\qquad
\sum_{t=1}^{n}(P_t-2X_t)\le\log\frac{1}{\delta},
$$
which completes the proof.

D.2 Proof of Lemma C.1

Fix $t\in[T]$, $h\in[H]$ and $(s,a,s')\in\mathcal S'_h\setminus\{s'_h\}\times\mathcal A\times\mathcal S'_{h+1}\setminus\{s'_{h+1}\}$. Notice that $t\ge t(s)$ and $t\ge t(s')$ by definition. Under the event of Lemma 6.1, it suffices to bound the width of $I^1_t(s'|s,a)\cap I^2_t(s'|s,a)$ by $\epsilon_t(s'|s,a)$. We discuss case by case.

1. ($t(s')>t(s)$ and $N_t(s,a)\ge 2N_{t(s')}(s,a)$): In this case, we have $N_{t(s,s')}(s,a)=N_{t(s')}(s,a)\le N_t(s,a)/2$.
Thus we have
$$
\begin{aligned}
\hat P^{t(s,s')}_t(s'|s,a)
&\le P'_t(s'|s,a)+4\sqrt{\frac{\hat P^{t(s,s')}_t(s'|s,a)\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-N_{t(s,s')}(s,a)-1,\,1\}}}+\frac{20\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-N_{t(s,s')}(s,a)-1,\,1\}}\\
&\le P'_t(s'|s,a)+4\sqrt{\frac{\hat P^{t(s,s')}_t(s'|s,a)\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-N_t(s,a)/2-1,\,1\}}}+\frac{20\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-N_t(s,a)/2-1,\,1\}}\\
&\le P(s'|s,a)+8\sqrt{\frac{\hat P^{t(s')}_t(s'|s,a)\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-2,\,1\}}}+\frac{40\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-2,\,1\}}.
\end{aligned}
$$
Viewing this as a quadratic inequality in $\sqrt{\hat P^{t(s')}_t(s'|s,a)}$, we obtain
$$
\hat P^{t(s')}_t(s'|s,a)\;\le\;O\bigg(P(s'|s,a)+\frac{\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a)-2,\,1\}}\bigg).
$$
This leads to
$$
\big|I^1_t(s'|s,a)\big|\;\le\;O\Bigg(\sqrt{\frac{P(s'|s,a)\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a),1\}}}+\frac{\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a),1\}}\Bigg).
$$

2. ($t(s')>t(s)$ and $N_t(s,a)<2N_{t(s')}(s,a)$): By the definition of $I^2_t(s'|s,a)$, we have
$$
\big|I^2_t(s'|s,a)\big|\;\le\;\frac{2|\mathcal S^{\Pi}|+26\log\frac{t}{\delta(s,a)}}{\max\{N_{t(s')}(s,a)-1,\,1\}}\;\le\;O\bigg(\frac{|\mathcal S^{\Pi}|+\log\frac{t}{\delta(s,a)}}{\max\{N_t(s,a),1\}}\bigg).
$$

3. ($t(s')\le t(s)$): When $t(s')\le t(s)$, we have $N_{t(s,s')}(s,a)=N_{t(s)}(s,a)\le 1$, so $N_t(s,a)-N_{t(s,s')}(s,a)$ is of the same order as $N_t(s,a)$. Using a similar argument as in the first case, we obtain
$$
\big|I^1_t(s'|s,a)\big|\;\le\;O\Bigg(\sqrt{\frac{P(s'|s,a)\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a),1\}}}+\frac{\log\frac{t}{\delta(s,a,s')}}{\max\{N_t(s,a),1\}}\Bigg).
$$
Combining the three cases completes the proof.

D.3 Details for Lemma C.2

As in Jin et al. [2019], we decompose the regret into four terms:
$$
R_{\mathrm{UOB\text{-}REPS}}(K)=\sum_{k=1}^{K}\big\langle q^{P,\pi_k}-q^{\hat P_k,\pi_k},\ell_k\big\rangle
+\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k},\ell_k-\hat\ell_k\big\rangle
+\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k}-q^{P,\pi^{\ast}},\hat\ell_k\big\rangle
+\sum_{k=1}^{K}\big\langle q^{P,\pi^{\ast}},\hat\ell_k-\ell_k\big\rangle.
$$
Here, $\hat P_k\in\mathcal P_k$ is an approximate transition function selected from the confidence set $\mathcal P_k$, and $\hat\ell_k$ is the loss estimator defined by
$$
\hat\ell_k(s,a)=\frac{\ell_k(s,a)}{u_k(s,a)+\gamma}\,\mathbb 1_k\{(s,a)\text{ is visited}\},
$$
where $\gamma$ is an exploration rate and $u_k(s,a)=\max_{\hat P\in\mathcal P_k}q^{\hat P,\pi_k}(s,a)$ is an upper bound on the probability of visiting the state-action pair $(s,a)$ under the confidence set $\mathcal P_k$. Now we show how to refine the regret bound of Theorem 3 in Jin et al. [2019].

For the first term, we immediately have
$$
\sum_{k=1}^{K}\big\langle q^{P,\pi_k}-q^{\hat P_k,\pi_k},\ell_k\big\rangle
\;\le\;\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}\big|q^{P,\pi_k}(s,a)-q^{\hat P_k,\pi_k}(s,a)\big|\,|\ell_k(s,a)|
\;\le\;\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}\big|q^{P,\pi_k}(s,a)-q^{\hat P^s_k,\pi_k}(s,a)\big|\,|\ell_k(s,a)|
$$
by the definition of $\hat P^s_k$.

For the second term, there is
$$
\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k},\ell_k-\hat\ell_k\big\rangle
=\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k},\ell_k-\mathbb E[\hat\ell_k]\big\rangle
+\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k},\mathbb E[\hat\ell_k]-\hat\ell_k\big\rangle.
$$
Since $\langle q^{\hat P_k,\pi_k},\hat\ell_k\rangle\le H$ almost surely, applying Azuma's inequality we have $\sum_{k=1}^{K}\langle q^{\hat P_k,\pi_k},\mathbb E[\hat\ell_k]-\hat\ell_k\rangle\le H\sqrt{2K\log(1/\delta)}$. It suffices to focus on $\sum_{k=1}^{K}\langle q^{\hat P_k,\pi_k},\ell_k-\mathbb E[\hat\ell_k]\rangle$:
$$
\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k},\ell_k-\mathbb E[\hat\ell_k]\big\rangle
=\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}q^{\hat P_k,\pi_k}(s,a)\,\ell_k(s,a)\bigg(1-\frac{q^{P,\pi_k}(s,a)}{u_k(s,a)+\gamma}\bigg)
=\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}\frac{q^{\hat P_k,\pi_k}(s,a)}{u_k(s,a)+\gamma}\,\ell_k(s,a)\big(u_k(s,a)+\gamma-q^{P,\pi_k}(s,a)\big)
\;\le\;\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}\big|u_k(s,a)-q^{P,\pi_k}(s,a)\big|\,|\ell_k(s,a)|+\gamma|\mathcal S||\mathcal A|K.
$$
Notice that $u_k(s,a)=\max_{\hat P\in\mathcal P_k}q^{\hat P,\pi_k}(s,a)=\pi_k(a|s)\max_{\hat P\in\mathcal P_k}q^{\hat P,\pi_k}(s)$. Therefore, denoting $\hat P^{\star}=\arg\max_{\hat P\in\mathcal P_k}q^{\hat P,\pi_k}(s)$, we have
$$
\sum_{a\in\mathcal A}\big|u_k(s,a)-q^{P,\pi_k}(s,a)\big|\,|\ell_k(s,a)|
\;\le\;\sum_{a\in\mathcal A}\big|q^{\hat P^{\star},\pi_k}(s,a)-q^{P,\pi_k}(s,a)\big|\,|\ell_k(s,a)|
\;\le\;\sum_{a\in\mathcal A}\big|q^{\hat P^s_k,\pi_k}(s,a)-q^{P,\pi_k}(s,a)\big|\,|\ell_k(s,a)|
$$
by the definition of $\hat P^s_k$.

For the third and fourth terms, the proof completely follows the proofs of Lemmas 12 and 14 in Jin et al. [2019], i.e.,
$$
\sum_{k=1}^{K}\big\langle q^{\hat P_k,\pi_k}-q^{P,\pi^{\ast}},\hat\ell_k\big\rangle\le O\bigg(\frac{H\ln(|\mathcal S||\mathcal A|)}{\eta}+\eta|\mathcal S||\mathcal A|K+\eta H\ln(H/\delta)\bigg),
\qquad
\sum_{k=1}^{K}\big\langle q^{P,\pi^{\ast}},\hat\ell_k-\ell_k\big\rangle\le O\bigg(\frac{H\ln(|\mathcal S||\mathcal A|/\delta)}{\gamma}\bigg).
$$
By setting $\eta=\gamma=\sqrt{\frac{H\log(H|\mathcal S||\mathcal A|/\delta)}{|\mathcal S||\mathcal A|K}}$, we finally upper bound $R_{\mathrm{UOB\text{-}REPS}}(K)$ by
$$
O\Bigg(\sqrt{H|\mathcal S||\mathcal A|K\log\frac{|\mathcal S||\mathcal A|K}{\delta}}
+\sum_{k=1}^{K}\sum_{s\in\mathcal S,a\in\mathcal A}\big|q^{\hat P^s_k,\pi_k}(s,a)-q^{P,\pi_k}(s,a)\big|\,|\ell_k(s,a)|\Bigg).
$$
Notice that the proof above requires tuning the learning rate and exploration rate in terms of $K$. To remove this restriction, a standard method is to make both rates adaptive, i.e., $\eta_k=\gamma_k=\sqrt{\frac{H\log(H|\mathcal S||\mathcal A|/\delta)}{|\mathcal S||\mathcal A|k}}$.
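As a small illustration (a sketch under the stated rate formula, with hypothetical names; not the paper's code), the adaptive schedule can be computed per episode as follows.

```python
# Minimal sketch of the adaptive learning/exploration-rate schedule
# eta_k = gamma_k = sqrt( H * log(H*|S|*|A|/delta) / (|S|*|A|*k) ),
# which removes the need to know the number of episodes K in advance.
# Variable names are illustrative assumptions.
import math

def adaptive_rate(k: int, H: int, n_states: int, n_actions: int, delta: float) -> float:
    """Rate used for both eta_k and gamma_k at episode k (k >= 1)."""
    return math.sqrt(H * math.log(H * n_states * n_actions / delta) / (n_states * n_actions * k))

# Example: the rate decays like 1/sqrt(k), matching the fixed tuning eta = gamma at k = K.
for k in (1, 10, 100, 1000):
    print(k, adaptive_rate(k, H=10, n_states=50, n_actions=5, delta=0.05))
```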
Using the adaptive rates and taking a union bound over all $k$, we can obtain the result in Lemma C.2.

NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: The main claims made in the abstract and introduction accurately reflect the paper's contributions and scope.
Guidelines:
- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [NA]
Justification: The paper has no limitations.
Guidelines:
- The answer NA means that the paper has no limitation, while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory Assumptions and Proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: The paper provides the full set of assumptions and complete (and correct) proofs.
Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [NA]
Justification: The paper does not include experiments.
Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [NA]
Justification: The paper does not include experiments requiring code.
Guidelines:
- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [NA]
Justification: The paper does not include experiments.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [NA]
Justification: The paper does not include experiments.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [NA]
Justification: The paper does not require significant compute.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code Of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: The research conducted in the paper conforms, in every respect, with the NeurIPS Code of Ethics.
Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: There is no societal impact of the work performed.
Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: The paper poses no such risks.
Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [NA]
Justification: The paper does not use existing assets.
Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: The paper does not release new assets.
Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: The paper does not involve crowdsourcing nor research with human subjects.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: The paper does not involve crowdsourcing nor research with human subjects.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.