# SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning

Huy Hoang, Singapore Management University, mh.hoang.2024@phdcs.smu.edu.sg
Tien Mai, Singapore Management University, atmai@smu.edu.sg
Pradeep Varakantham, Singapore Management University, pradeepv@smu.edu.sg

We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions, ranging from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency. In this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL.

1 Introduction

Reinforcement learning (RL) has established itself as a strong and reliable framework for sequential decision-making, with applications in diverse domains: robotics [19, 18], healthcare [34, 28, 27], and environment generation [9, 24]. Unfortunately, RL requires an underlying simulator that can provide rewards for different experiences, which is usually not available. Imitation Learning (IL) [16, 29, 13, 17] handles the lack of a reward function by utilizing expert demonstrations to guide the learning scheme towards a good policy. However, IL approaches still require the presence of a simulator that allows for online interactions. Initial works in offline IL [35, 23, 40, 2] tackle the absence of a simulator by considering an offline dataset of expert demonstrations. These approaches extend upon Behavioral Cloning (BC), where we aim to maximize the likelihood of the expert's decisions from the provided dataset. The key advantage of BC is the theoretical justification of convergence to expert behavior given sufficient trajectories. However, when there are not enough expert trajectories, it often suffers from distributional shift issues [30].
Thus, a key drawback of these initial IL approaches is the need for large expert demonstration datasets. To deal with limited expert demonstrations, recent works utilize non-expert demonstration datasets to reduce the reliance on expert demonstrations alone. These additional non-expert demonstrations are referred to as supplementary data. Directly applying BC to these larger supplementary datasets will lead to sub-optimal policies, so most prior work on utilizing supplementary data attempts to extract expert-like demonstrations from the supplementary dataset in order to expand the expert demonstrations [31, 21, 20, 37, 39]. These works assume expert-like demonstrations are present in the supplementary dataset and focus on identifying and utilizing those, while eliminating the non-expert demonstrations. Eliminating non-expert trajectories can result in a loss of key information (e.g., transition dynamics) about the environment. Additionally, these works primarily rely on BC, which is known to overlook the sequential nature of decision-making problems: a small error can quickly accumulate when the learned policy deviates from the states experienced by the expert.

We develop our algorithm based on an inverse Q-learning framework that better captures the sequential nature of decision-making [13] and can operate under the more realistic assumption that the data is collected from people/policies with lower expertise levels¹ (not experts). To illustrate, consider a scenario in robotic manipulation where the goal is to teach a robot to assemble parts. Expert demonstrations might show precise and efficient methods to assemble parts, but are limited in number due to the high cost and time associated with expert involvement. On the other hand, sub-optimal demonstrations from novice users are easier to obtain and more abundant. Our SPRINQL approach effectively integrates these sub-optimal demonstrations, giving appropriate weight to the expert demonstrations to ensure the robot learns the optimal assembly method without overfitting to the limited expert data or to the inaccuracies in the sub-optimal data. We utilize these non-expert trajectories to learn a Q function that contributes to our understanding of the environment and the ground-truth reward function.

¹ While there have been works [5, 6, 7] that have attempted to minimize the required demonstrations using ranked datasets via preference-based RL, their algorithms are only applicable to online settings.

Contributions: Overall, we make the following key contributions in this paper: (i) We propose SPRINQL, a novel algorithm based on Q-learning for offline imitation learning with expert and multiple levels of sub-optimal demonstrations. (ii) We provide key theoretical properties of the SPRINQL objective function, which enable the development of a scalable and efficient approach. In particular, we leverage distribution matching and reward regularization to develop an objective function for SPRINQL that not only helps address the issue of limited expert samples but also utilizes non-expert data to enhance learning. Our objective function is not only convex within the space of Q functions but also guarantees the return of a Q function that lower-bounds its true value. (iii) We provide an extensive empirical evaluation of our approach in comparison to the existing best algorithms for offline IL with sub-optimal demonstrations. Our algorithm provides state-of-the-art (SOTA) performance on all the benchmark problems. Moreover, SPRINQL is able to recover a reward function that shows a high positive correlation with the ground-truth rewards, highlighting a unique advantage of our approach compared to other IL algorithms in this context.
1.1 Related Work

Imitation Learning. Imitation learning is recognized as a significant technique for learning from demonstrations. It begins with BC, which aims to maximize the likelihood of expert demonstrations. However, BC often under-performs in practice due to unforeseen scenarios [30]. To overcome this limitation, Generative Adversarial Imitation Learning (GAIL) [16] and Adversarial Inverse Reinforcement Learning (AIRL) [10] have been developed. These methods align the occupancy distributions of the policy and the expert within the Generative Adversarial Network (GAN) framework [14]. Alternatively, Soft Q Imitation Learning (SQIL) [29] bypasses the complexities of adversarial training by assigning a reward of +1 to expert demonstrations and 0 to the others, subsequently learning a value function based on these rewards. While the aforementioned imitation learning algorithms show promise, they require interaction with the environment to obtain the policy distribution, which is often impractical.

Offline Imitation Learning. ValueDICE [22] introduces a novel approach for off-policy training, suitable for offline training, using stationary distribution corrections [25, 26]. However, ValueDICE necessitates adversarial training between the policy network and the Q network, which can make the training slow and unstable. Recently, algorithms like PWIL [8] and IQ-learn [13] have optimized distribution distances directly, offering an alternative to adversarial training schemes. Since such approaches rely on occupancy distribution matching, a large expert dataset is often required to achieve the desired performance. Our approach, SPRINQL, bypasses this requirement of a large set of expert demonstrations through the use of non-expert demonstrations (which are typically more available) in conjunction with a small set of expert demonstrations.

Imitation Learning with Imperfect Demonstrations. T-REX [5] and D-REX [6] have shown that utilizing noise-ranked demonstrations as a reference-based approach can return a better policy without requiring expert demonstrations in online settings. Moreover, there are also several works [36, 33] that utilize the GAN framework [14] for sub-optimal datasets and have achieved several successes. Meanwhile, in the offline imitation learning context, TRAIL [38] utilizes sub-optimal demonstrations to learn the environment's dynamics. It employs a feature encoder to map the high-dimensional state-action space into a lower dimension, thereby allowing for a scalable way of learning the dynamics. This approach may face challenges in complex environments where predicting dynamics accurately is difficult, as shown in our experimental results. Other works assume that they can extract expert-like state-action pairs from the sub-optimal demonstration set and use them for BC with importance sampling [31, 37, 39, 21, 20]. However, expert-like state-actions might be difficult to accurately identify, as true reward information is not available. In contrast, our approach is more general, as we do not assume that the sub-optimal set contains expert-like demonstrations. We also allow for the inclusion of demonstrations of various qualities.
Moreover, while prior works only recover policies, our approach enables the recovery of both expert policies and rewards, justifying the use of our method for Inverse Reinforcement Learning (IRL) [1].

2 Background

Preliminaries. We consider an MDP defined by the tuple $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, r, P, \gamma, s_0\rangle$, where $\mathcal{S}$ denotes the set of states, $s_0$ represents the initial state set, $\mathcal{A}$ is the set of actions, $r: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ defines the reward function for each state-action pair, $P: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ is the transition function, i.e., $P(s'|s,a)$ is the probability of reaching state $s'\in\mathcal{S}$ when action $a\in\mathcal{A}$ is taken at state $s\in\mathcal{S}$, and $\gamma$ is the discount factor. In reinforcement learning (RL), the aim is to find a policy that maximizes the expected long-term accumulated reward $\max_\pi \mathbb{E}_{(s,a)\sim\rho_\pi}[r(s,a)]$, where $\rho_\pi$ is the occupancy measure of policy $\pi$: $\rho_\pi(s,a) = (1-\gamma)\,\pi(a|s)\sum_{t=0}^{\infty}\gamma^t P(s_t=s\mid\pi)$.

MaxEnt IRL. The objective in MaxEnt IRL is to recover a reward function $r(s,a)$ from a set of expert demonstrations $\mathcal{D}_E$. Let $\rho_E$ be the occupancy measure of the expert policy. The MaxEnt IRL framework [41] proposes to recover the expert reward function by solving
$$\max_r\ \min_\pi\ \mathbb{E}_{\rho_E}[r(s,a)] - \big(\mathbb{E}_{\rho_\pi}[r(s,a)] - \mathbb{E}_{\rho_\pi}[\log\pi(a|s)]\big). \tag{1}$$
Intuitively, the aim is to find a reward function that achieves the highest difference between the expected value of the expert policy and the highest expected value among all other policies (computed through the inner min).

IQ-Learn. Given a reward function $r$ and a policy $\pi$, the soft Bellman operator is defined as $\mathcal{B}^\pi_r[Q](s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}[V^\pi(s')]$, where $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q(s,a) - \log\pi(a|s)]$. The operator $\mathcal{B}^\pi_r$ is contractive, so the Bellman equation $\mathcal{B}^\pi_r[Q] = Q$ always yields a unique solution $Q$ [13]. IQ-learn further defines an inverse soft-Bellman operator $\mathcal{T}^\pi[Q](s,a) = Q(s,a) - \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}[V^\pi(s')]$. [13] show that for any reward function $r(s,a)$ there is a unique $Q$ function such that $\mathcal{B}^\pi_r[Q^*] = Q^*$, and for any $Q$ function in the Q-space there is a unique reward function $r$ such that $r = \mathcal{T}^\pi[Q]$. This result suggests that one can safely transform the MaxEnt IRL objective from the $r$-space to the Q-space as follows:
$$\max_Q\ \min_\pi\ \Phi(\pi, Q) = \mathbb{E}_{\rho_E}\big[\mathcal{T}^\pi[Q](s,a)\big] - \mathbb{E}_{\rho_\pi}\big[\mathcal{T}^\pi[Q](s,a)\big] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)], \tag{2}$$
which has several advantages: [13] show that $\Phi(\pi, Q)$ is convex in $\pi$ and linear in $Q$, implying that (2) always yields a unique saddle-point solution. In particular, (2) can be converted into a maximization over the Q-space, making the training problem no longer adversarial. A small numerical sketch of these Q-space quantities is given below.

We now describe our inverse soft-Q learning approach, referred to as SPRINQL (Sub-oPtimal demonstrations driven Reward regularized INverse soft Q Learning). We first describe the three key components of the SPRINQL formulation: (1) We formulate an objective function that matches the occupancy distribution of not just expert demonstrations, but also sub-optimal demonstrations. (2) To mitigate the effect of limited expert samples (and larger sets of sub-optimal samples), which can bias the distribution matching of the first step towards sub-optimal demonstrations, we introduce a reward regularization term within the objective. This regularization term ensures that the reward function allocates higher values to state-action pairs that appear in higher-expertise demonstrations. (3) We show that, while this new objective does not have the same advantageous properties as the one in inverse soft-Q learning [13], with some minor (yet significant) changes it is possible to restore all the important properties.
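To make the Q-space quantities above concrete, here is a minimal NumPy sketch (our own illustration, not code from the SPRINQL repository) of the soft value function, the soft-max policy, and the inverse soft-Bellman operator on a randomly generated tabular MDP. The sizes, transition kernel, and Q values are arbitrary placeholders.

```python
# Minimal tabular sketch (toy MDP) of the soft value V^pi, the soft-max policy,
# and the inverse soft-Bellman operator T^pi[Q] used by IQ-Learn / SPRINQL.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.99

P = rng.random((n_states, n_actions, n_states))        # transition kernel P(s'|s,a)
P /= P.sum(axis=-1, keepdims=True)
Q = rng.random((n_states, n_actions))                   # an arbitrary soft-Q function

def soft_value(Q, pi):
    """V^pi(s) = E_{a~pi}[Q(s,a) - log pi(a|s)]."""
    return (pi * (Q - np.log(pi))).sum(axis=1)

def soft_value_max(Q):
    """log sum_a exp Q(s,a): the maximum of V^pi(s) over all policies."""
    m = Q.max(axis=1, keepdims=True)                     # stabilized log-sum-exp
    return (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).squeeze(1)

def softmax_policy(Q):
    """pi_Q(a|s) proportional to exp(Q(s,a)); the maximizer of V^pi(s)."""
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def inverse_soft_bellman(Q, pi):
    """T^pi[Q](s,a) = Q(s,a) - gamma * E_{s'~P(.|s,a)}[V^pi(s')]: the reward that Q encodes."""
    return Q - gamma * P @ soft_value(Q, pi)

pi_Q = softmax_policy(Q)
# checks that the soft-max policy attains max_pi V^pi(s) = log-sum-exp of Q(s, .)
assert np.allclose(soft_value(Q, pi_Q), soft_value_max(Q))
r_implied = inverse_soft_bellman(Q, pi_Q)
print(r_implied.shape)                                   # (n_states, n_actions)
```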
3.1 Distribution Matching with Expert and Sub-optimal Demonstrations

We consider a setting where the demonstrations are classified into several sets of different expertise levels $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_N$, where $\mathcal{D}_1$ consists of expert demonstrations and all the other sets contain sub-optimal ones. This setting is more general than existing work on IL with sub-optimal demonstrations, which typically assumes only two quality levels: expert and sub-optimal. Let $\mathcal{D} = \bigcup_{i\in[N]}\mathcal{D}_i$ be the union of all the demonstration sets and $\rho_1,\ldots,\rho_N$ be the occupancy measures of the respective expertise-level policies. The ordering of expected values across the different levels is then given by
$$\mathbb{E}_{\rho_1}[r^*(s,a)] > \mathbb{E}_{\rho_2}[r^*(s,a)] > \ldots > \mathbb{E}_{\rho_N}[r^*(s,a)],$$
where $r^*(\cdot,\cdot)$ is the ground-truth reward. Typically, the number of demonstrations in the first level $\mathcal{D}_1$ is significantly lower than that of the other levels, i.e., $|\mathcal{D}_1| \ll |\mathcal{D}_i|$ for $i = 2,\ldots,N$. The MaxEnt IRL objective from Equation 1 can thus be adapted as follows:
$$\max_r\ \min_\pi\ \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}[r(s,a)] - \mathbb{E}_{\rho_\pi}[r(s,a)] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)], \tag{3}$$
where $w_i \ge 0$ is the weight associated with expertise level $i\in[N]$, and we have $w_1 > w_2 > \ldots > w_N$ and $\sum_{i\in[N]} w_i = 1$. There are two key intuitions in the above optimization: (a) expert level $i$ accumulates higher expected values than expert levels greater than $i$; and (b) the difference between the values accumulated by the expertise-level policies and the maximum over all other policies is maximized. The objective can be rewritten as $\mathbb{E}_{\rho_U}[r(s,a)] - \mathbb{E}_{\rho_\pi}[r(s,a)] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)]$, where $\rho_U = \sum_{i\in[N]} w_i\rho_i$. Here we note that the expected reward $\sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}[r(s,a)]$ is empirically approximated by samples from the demonstration sets $\mathcal{D}_1,\ldots,\mathcal{D}_N$. The number of demonstrations in the best demonstration set $\mathcal{D}_1$ (i.e., the set of expert demonstrations) is significantly lower than in the other demonstration sets, so an empirical approximation of $\mathbb{E}_{\rho_1}[r(s,a)]$ using samples from $\mathcal{D}_1$ alone would be inaccurate.

3.2 Regularization with Reference Reward

We create a reference reward based on the provided expert and sub-optimal demonstrations and utilize this reference function to compute a regularization term that is added to the objective of Equation 3. Concretely, we define a reference reward function $\bar{r}(s,a)$ such that
$$\bar{r}(s,a) > \bar{r}(s',a')\ \ \forall (s,a)\in\mathcal{D}_1,\ (s',a')\notin\mathcal{D}_1; \qquad \bar{r}(s,a) > \bar{r}(s',a')\ \ \forall (s,a)\in\mathcal{D}_2,\ (s',a')\notin\mathcal{D}_1\cup\mathcal{D}_2; \qquad \text{and so on.}$$
The aim here is to assign higher rewards to demonstrations from higher expertise levels, and zero rewards to state-action pairs that do not belong to the provided demonstrations. We discuss how to concretely estimate such reference reward values later. We utilize this reference reward as part of a reward regularization term, which is added to the MaxEnt IRL objective in (3) as follows:
$$\max_r\ \min_\pi\ \Big\{\underbrace{\mathbb{E}_{\rho_U}[r(s,a)] - \mathbb{E}_{\rho_\pi}[r(s,a)] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)]}_{\text{Occupancy matching}}\ -\ \underbrace{\alpha\,\mathbb{E}_{\rho_U}\big[(r(s,a)-\bar{r}(s,a))^2\big]}_{\text{Reward regularizer}}\Big\}, \tag{4}$$
where $\alpha > 0$ is a weight parameter for the reward-regularizer term. With (4), the goal is to find a policy whose occupancy distribution appropriately matches the occupancy distributions of the different expertise levels (characterized by the weights $w_i$). Simultaneously, it ensures that the learned rewards stay close to the pre-assigned reference rewards, guiding the learned policy towards replicating expert demonstrations while also learning from sub-optimal demonstrations.
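The following minimal NumPy sketch illustrates, with synthetic placeholder batches, how the two data-dependent pieces of (4) — the weighted expectation over $\rho_U = \sum_i w_i\rho_i$ and the reference-reward regularizer — can be estimated from per-level sample batches. The reward values, batch sizes, and weights are assumptions for illustration only.

```python
# Sketch (synthetic batches) of estimating E_{rho_U}[r] and the reference-reward regularizer.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.5, 0.3, 0.2])                  # w_1 > w_2 > w_3, summing to 1
alpha = 1.0

# One batch per expertise level: r_hat = current learned reward on sampled (s, a),
# r_bar = fixed reference reward on the same pairs (both placeholders here).
batches = [(rng.normal(1.0 - 0.3 * i, 0.1, size=256),       # r_hat for level i
            rng.normal(1.0 - 0.3 * i, 0.1, size=256))       # r_bar for level i
           for i in range(3)]

# E_{rho_U}[r]  ~  sum_i w_i * (batch mean over D_i)
matching_term = sum(w_i * r_hat.mean() for w_i, (r_hat, _) in zip(w, batches))

# alpha * E_{rho_U}[(r - r_bar)^2], again as a w_i-weighted mixture of batch means
regularizer = alpha * sum(w_i * ((r_hat - r_bar) ** 2).mean()
                          for w_i, (r_hat, r_bar) in zip(w, batches))

# Equivalently, one can sample directly from the mixture rho_U: pick a level with
# probability w_i, then a transition uniformly from that level's dataset.
level = rng.choice(len(w), p=w)
print(matching_term, regularizer, level)
```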
3.3 Concave Lower-bound on Inverse Soft-Q with Reward Regularizer

Even though (4) can be directly solved to recover rewards, prior research suggests that transforming (4) into the Q-space will enhance efficiency. We delve into this transformation in this section. As discussed in Section 2, there is a one-to-one mapping between any reward function $r$ and a function $Q$ in the Q-space. Thus, the maximin problem in (4) can be equivalently transformed as
$$\max_Q\ \min_\pi\ \Big\{ H(Q,\pi) \overset{\text{def}}{=} \mathbb{E}_{\rho_U}\big[\mathcal{T}^\pi[Q](s,a)\big] - \mathbb{E}_{\rho_\pi}\big[\mathcal{T}^\pi[Q](s,a)\big] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)] - \alpha\,\mathbb{E}_{\rho_U}\big[(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2\big]\Big\}, \tag{5}$$
where $r(s,a)$ is replaced by $\mathcal{T}^\pi[Q](s,a)$, with $\mathcal{T}^\pi[Q](s,a) = Q(s,a) - \gamma\,\mathbb{E}_{s'}[V^\pi(s')]$ and $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q(s,a) - \log\pi(a|s)]$. In the single-expert-level setting, [13] demonstrated that the objective function in the Q-space given in Equation 2 is concave in Q and convex in π, implying that the maximin problem always has a unique saddle-point solution. Unfortunately, this property does not hold in our case.

Proposition 3.1. $H(Q,\pi)$ (as defined in Equation 5) is concave in $Q$ but is not convex in $\pi$.

In general, the first and second terms of (5) are linear (hence concave) in $Q$, but the reward-regularizer term, which can be written as $-\alpha\,\mathbb{E}_{\rho_U}\big[\big(Q(s,a) - \bar{r}(s,a) - \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}\mathbb{E}_{a'\sim\pi(\cdot|s')}[Q(s',a') - \log\pi(a'|s')]\big)^2\big]$, is not convex in $\pi$ (details are shown in the Appendix). The property indicated in Proposition 3.1 implies that the maximin problem within the Q-space, $\max_Q\min_\pi H(Q,\pi)$, may not have a unique saddle-point solution and would be more challenging to solve compared to the original inverse soft-Q learning problem. Another key property of Equation 2 concerns the inner minimization problem over $\pi$, which yields a unique closed-form solution, enabling the transformation of the max-min problem into a non-adversarial concave maximization problem within the Q-space. The closed-form solution is given by $\pi_Q = \operatorname{argmax}_\pi V^\pi(s)$ for all $s\in\mathcal{S}$. Unfortunately, this result also does not hold with the new objective function in (5), as formally stated below.

Proposition 3.2. $H(Q,\pi)$ may not necessarily be minimized at $\pi$ such that $\pi = \operatorname{argmax}_\pi V^\pi(s)$ for all $s\in\mathcal{S}$.

To overcome the above challenges, our approach involves constructing a more tractable objective function that is a lower bound on the objective of (5). Let us first define $\Gamma(Q) = \min_\pi H(Q,\pi)$. We then look at the regularization term, which causes all the aforementioned challenges, and write
$$\big(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a)\big)^2 = \big(Q(s,a) - \bar{r}(s,a) - \gamma\,\mathbb{E}_{s'}[V^\pi(s')]\big)^2 = \big(Q(s,a)-\bar{r}(s,a)\big)^2 + \gamma^2\big(\mathbb{E}_{s'}[V^\pi(s')]\big)^2 + 2\gamma\big(\bar{r}(s,a)-Q(s,a)\big)\mathbb{E}_{s'}[V^\pi(s')].$$
We then remove the negative part of $(\bar{r}(s,a)-Q(s,a))$ using ReLU, and consider a slightly modified objective function:
$$\widehat{H}(Q,\pi) \overset{\text{def}}{=} \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[\mathcal{T}^\pi[Q](s,a)\big] - \big(\mathbb{E}_{\rho_\pi}\big[\mathcal{T}^\pi[Q](s,a)\big] - \mathbb{E}_{\rho_\pi}[\log\pi(a|s)]\big) - \alpha\,\mathbb{E}_{\rho_U}\Big[\big(Q(s,a)-\bar{r}(s,a)\big)^2 + \gamma^2\big(\mathbb{E}_{s'}[V^\pi(s')]\big)^2 + 2\gamma\,\mathrm{ReLU}\big(\bar{r}(s,a)-Q(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')]\Big]. \tag{6}$$
Let $\widehat{\Gamma}(Q) = \min_\pi \widehat{H}(Q,\pi)$. The proposition below shows that $\widehat{\Gamma}(Q)$ always lower-bounds $\Gamma(Q)$.

Proposition 3.3. For any $Q\ge 0$, we have $\widehat{\Gamma}(Q) \le \Gamma(Q)$ and $\max_Q \widehat{\Gamma}(Q) \le \max_Q \Gamma(Q)$. Moreover, $\Gamma(Q) = \widehat{\Gamma}(Q)$ if $Q(s,a) \le \bar{r}(s,a)$ for all $(s,a)$.

We note that assuming $Q\ge 0$ is not restrictive: if the expert's rewards $r^*(s,a)$ are non-negative (typically the case), then the true soft-Q function, defined as $Q^*(s,a) = \mathbb{E}\big[\sum_t \gamma^t\big(r^*(s_t,a_t) - \log\pi(a_t|s_t)\big)\,\big|\,(s_0,a_0)=(s,a)\big]$, is also non-negative for any $\pi$. As $\widehat{\Gamma}(Q)$ provides a lower-bound approximation of $\Gamma(Q)$, maximizing $\widehat{\Gamma}(Q)$ over the Q-space drives $\Gamma(Q)$ towards its maximum value.
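The inequality behind Proposition 3.3 can be checked numerically. The sketch below (our own illustration, with random placeholder values satisfying $Q \ge 0$ and $\mathbb{E}_{s'}[V^\pi(s')] \ge 0$) verifies that the bracket used in (6) is never smaller than $(\mathcal{T}^\pi[Q]-\bar{r})^2$, so $-\alpha$ times it is a lower bound, with equality exactly where $Q \le \bar{r}$.

```python
# Numerical sanity check of the lower-bound mechanism behind Proposition 3.3.
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.99
Q = rng.uniform(0.0, 2.0, size=10_000)        # Q >= 0
r_bar = rng.uniform(0.0, 2.0, size=10_000)    # reference rewards
EV = rng.uniform(0.0, 3.0, size=10_000)       # E_{s'}[V^pi(s')] >= 0 when Q >= 0

original = (Q - r_bar - gamma * EV) ** 2                            # (T^pi[Q] - r_bar)^2
modified = ((Q - r_bar) ** 2 + (gamma * EV) ** 2
            + 2 * gamma * np.maximum(r_bar - Q, 0.0) * EV)          # bracket in Eq. (6)

assert np.all(modified >= original - 1e-9)        # so -alpha*modified <= -alpha*original
mask = Q <= r_bar
assert np.allclose(modified[mask], original[mask])                  # equality where Q <= r_bar
print("lower bound holds; equality on", mask.sum(), "of", len(Q), "points")
```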
It is important to note that, given that the inner problem involves minimization, obtaining an upper-bound approximation would be easier. However, since the outer problem is a maximization, an upper bound would not be helpful in guiding the resulting solution towards optimal ones. The following theorem shows that $\widehat{\Gamma}(Q)$ is more tractable to optimize.

Theorem 3.4. For any $Q\ge 0$, the following results hold: (i) the inner minimization problem $\min_\pi\widehat{H}(Q,\pi)$ has a unique optimal solution $\pi_Q$ such that $\pi_Q = \operatorname{argmax}_\pi V^\pi(s)$ for all $s\in\mathcal{S}$, with $\pi_Q(a|s) = \frac{\exp(Q(s,a))}{\sum_{a'}\exp(Q(s,a'))}$; (ii) $\max_\pi V^\pi(s) = \log\big(\sum_a \exp(Q(s,a))\big) \overset{\text{def}}{=} V^Q(s)$; and (iii) $\widehat{\Gamma}(Q)$ is concave for $Q\ge 0$.

The above theorem tells us that the new objective $\widehat{\Gamma}(Q)$ has a closed form in which $V^\pi(s)$ is replaced by $V^Q(s)$. Moreover, $\widehat{\Gamma}(Q)$ is concave for all $Q\ge 0$. The concavity is particularly advantageous, as it guarantees that the optimization objective is well-behaved and has a unique solution $Q^*$ such that $(Q^*, \pi_{Q^*})$ forms the unique saddle point of $\max_Q\min_\pi\widehat{H}(Q,\pi)$. Thus, our tractable objective retains all the favorable properties of the original IQ-Learn objective, while working in the offline setting with multiple levels of expert trajectories and our reward regularizer.

3.4 SPRINQL Algorithm

Algorithm 1 SPRINQL: Inverse soft-Q Learning with Sub-optimal Demonstrations
Require: $(\mathcal{D}_1,\ldots,\mathcal{D}_N)$, $(w_1,\ldots,w_N)$, $\bar{r}_\eta$, $Q_\psi$, $\pi_\theta$
 1: # estimate the reference reward function
 2: for iteration $i = 1,\ldots,N_e$ do
 3:   sample mini-batches $d = (d_1, d_2, \ldots, d_N)$ from $(\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_N)$
 4:   from $d$, compute $L(\bar{r}_\eta)$ by (7)
 5:   $\eta \leftarrow \eta - \nabla_\eta L(\bar{r}_\eta)$
 6: end for
 7: # train SPRINQL
 8: for iteration $i = 1,\ldots,N_{\text{iter}}$ do
 9:   sample mini-batches $d = (d_1, d_2, \ldots, d_N)$ from $(\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_N)$
10:   # update the Q function
11:   from $d$, compute $\widehat{H}_C(Q_\psi, \pi_\theta)$ by (8)
12:   $\psi \leftarrow \psi + \nabla_\psi \widehat{H}_C(Q_\psi, \pi_\theta)$
13:   # update the policy (actor-critic)
14:   $\theta \leftarrow \theta + \nabla_\theta\,\mathbb{E}_{s\sim d,\, a\sim\pi_\theta(\cdot|s)}\big[Q_\psi(s,a) - \ln\pi_\theta(a|s)\big]$
15: end for

Algorithm 1 provides the overall SPRINQL algorithm. We first estimate the reference rewards in lines 2-6 of Algorithm 1; this process is described in Section 3.4.1. Before proceeding to the overall training, we have to estimate the weights $w_i$ (associated with the ranked demonstration sets) employed in $\widehat{H}(Q,\pi)$; this estimation procedure is described in Section 3.4.2. Finally, to enhance stability and mitigate the over-estimation issues commonly encountered in offline Q-learning, we employ a conservative version of $\widehat{H}(Q,\pi)$ in lines 8-15 of the algorithm, described in Section 3.4.3. Some other practical considerations are discussed in the appendix. A compact code sketch of this training loop is given below.
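The sketch below is our own simplified, self-contained rendering of Algorithm 1 for a discrete-action toy task; it is not the authors' released implementation, which targets continuous control with a separate actor network. The synthetic data, network sizes, fixed weights $w_i$, per-level reward averages standing in for trajectory returns $R(\tau)$, the choice $\mu = \pi_Q$ in the conservative term, and the use of dataset states in place of initial states are all assumptions of this sketch.

```python
# Hypothetical, simplified sketch of Algorithm 1 (discrete actions, synthetic data).
import torch
import torch.nn as nn
import torch.nn.functional as F

S_DIM, N_ACT, GAMMA, ALPHA, BETA = 8, 4, 0.99, 1.0, 0.1
torch.manual_seed(0)

def make_level(n):                      # synthetic (s, a, s') transitions for one D_i
    return (torch.randn(n, S_DIM), torch.randint(N_ACT, (n,)), torch.randn(n, S_DIM))

levels = [make_level(n) for n in (100, 1000, 2000)]      # D_1 (expert), D_2, D_3
w = torch.tensor([0.5, 0.3, 0.2])                        # fixed here; Sec. 3.4.2 learns them

r_bar = nn.Sequential(nn.Linear(S_DIM + N_ACT, 64), nn.ReLU(), nn.Linear(64, 1))
Q_net = nn.Sequential(nn.Linear(S_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACT))
opt_r = torch.optim.Adam(r_bar.parameters(), lr=3e-4)
opt_q = torch.optim.Adam(Q_net.parameters(), lr=3e-4)

def ref_reward(s, a):                                    # reference reward r_bar(s, a)
    return r_bar(torch.cat([s, F.one_hot(a, N_ACT).float()], -1)).squeeze(-1)

# Phase 1 (lines 2-6): fit the reference reward with a preference loss in the spirit
# of Eq. (7); per-level reward averages stand in for trajectory returns R(tau).
for _ in range(200):
    loss, r_means = 0.0, []
    for s, a, _ in levels:
        r = ref_reward(s, a)
        loss = loss + r.var()                            # keep rewards within a level close
        r_means.append(r.mean())
    for i in range(len(levels)):                         # level i is better than level j > i
        for j in range(i + 1, len(levels)):
            loss = loss - F.logsigmoid(r_means[i] - r_means[j])
    opt_r.zero_grad(); loss.backward(); opt_r.step()

# Phase 2 (lines 8-15): maximize a conservative objective over Q.  With discrete actions
# the policy is the soft-max of Q (Theorem 3.4), so no separate actor is used here.
for _ in range(500):
    obj = 0.0
    for i, (s, a, s_next) in enumerate(levels):
        q_all = Q_net(s)
        q_sa = q_all.gather(1, a[:, None]).squeeze(1)
        v_next = torch.logsumexp(Q_net(s_next), dim=1)           # V^Q(s')
        with torch.no_grad():
            rb = ref_reward(s, a)
        t_q = q_sa - GAMMA * v_next                              # T^pi[Q](s, a)
        reg = ((q_sa - rb) ** 2 + (GAMMA * v_next) ** 2
               + 2 * GAMMA * F.relu(rb - q_sa) * v_next)         # bracket of Eq. (6)
        cons = (F.softmax(q_all, 1) * q_all).sum(1)              # E_{a~mu}[Q], mu = pi_Q here
        obj = obj + w[i] * (t_q.mean() - ALPHA * reg.mean()) - BETA * cons.mean() / len(levels)
    # -(1-gamma) E_{s0}[V(s0)] term, with expert-dataset states standing in for s0
    obj = obj - (1 - GAMMA) * torch.logsumexp(Q_net(levels[0][0]), 1).mean()
    opt_q.zero_grad(); (-obj).backward(); opt_q.step()

print(F.softmax(Q_net(levels[0][0][:5]), dim=1))                 # pi_Q on a few expert states
```

In the continuous-control experiments of the paper, the closed-form soft-max policy is replaced by a separate actor $\pi_\theta$ updated as in line 14 of Algorithm 1.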
3.4.1 Estimating the Reference Reward

We outline our approach to automatically infer the reference rewards $\bar{r}(s,a)$ from the ranked demonstrations. The general idea is to learn a function that assigns higher values to demonstrations from higher expertise levels. To achieve this, let us define $R(\tau) = \sum_{(s,a)\in\tau}\bar{r}(s,a)$ (i.e., the accumulated reference reward of trajectory $\tau$). For two trajectories $\tau_i, \tau_j$, let $\tau_i \prec \tau_j$ denote that $\tau_i$ is lower in quality than $\tau_j$ (i.e., $\tau_i$ belongs to demonstrations from lower-expertise policies than $\tau_j$). We follow the Bradley-Terry model of preferences [4, 5] to model the probability $P(\tau_i\prec\tau_j)$ as
$$P(\tau_i\prec\tau_j) = \frac{\exp(R(\tau_j))}{\exp(R(\tau_i)) + \exp(R(\tau_j))},$$
and use the following loss function:
$$\min_{\bar{r}}\ \Big\{ L(\bar{r}) = \sum_{i\in[N]}\ \sum_{(s,a),(s',a')\in\mathcal{D}_i} \big(\bar{r}(s,a)-\bar{r}(s',a')\big)^2\ -\ \sum_{\substack{h,k\in[N],\,h>k\\ \tau_i\in\mathcal{D}_h,\ \tau_j\in\mathcal{D}_k}} \ln P(\tau_i\prec\tau_j)\Big\}, \tag{7}$$
where the first term of $L(\bar{r})$ serves to keep the reference reward values of $(s,a)$ pairs within the same demonstration group close to each other, and the second term increases the likelihood that the accumulated rewards of trajectories adhere to the expertise-level order. Importantly, it can be shown that $L(\bar{r})$ is convex in $\bar{r}$ (Proposition 3.5), making the learning well-behaved. In practice, one can model $\bar{r}(s,a)$ by a neural network with parameters $\eta$ and optimize $L(\eta)$ over the $\eta$-space.

Proposition 3.5. $L(\bar{r})$ is strictly convex in $\bar{r}$.

3.4.2 Preference-based Weight Learning for $w_i$

Each weight parameter $w_i$ used in (4) should reflect the quality of the corresponding demonstration set $\mathcal{D}_i$, which can be evaluated by estimating the average expert rewards of these sets. Although this information is not directly available in our setting, the reference reward values discussed earlier provide a convenient and natural way to estimate them. This leads to the following formulation for inferring the weights $w_i$ from the ranked data:
$$w_i = \frac{\mathbb{E}_{(s,a)\sim\mathcal{D}_i}[\bar{r}(s,a)]}{\sum_{j\in[N]}\mathbb{E}_{(s,a)\sim\mathcal{D}_j}[\bar{r}(s,a)]}.$$
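As a small illustration of this weight-estimation rule, the NumPy sketch below computes the $w_i$ from (assumed) reference-reward values evaluated on each demonstration set; the reward arrays and their sizes are synthetic placeholders.

```python
# Sketch of the preference-based weight estimation of Section 3.4.2.
import numpy as np

rng = np.random.default_rng(3)
ref_rewards_per_level = [rng.normal(loc, 0.1, size=n)      # r_bar(s,a) over D_1..D_3
                         for loc, n in ((1.0, 100), (0.6, 1000), (0.3, 2000))]

means = np.array([r.mean() for r in ref_rewards_per_level])
w = means / means.sum()        # w_i = E_{D_i}[r_bar] / sum_j E_{D_j}[r_bar]
print(np.round(w, 3), w.sum()) # higher expertise -> larger weight; weights sum to 1
```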
3.4.3 Conservative Soft-Q Learning

Over-estimation is a common issue in offline Q-learning due to out-of-distribution actions and function approximation errors [11]. We also observe this in our IL context. To overcome this issue and enhance stability, we leverage the approach in [23] to make our inverse soft-Q learning conservative. The aim is to learn a conservative soft-Q function that lower-bounds its true value. We formulate the conservative inverse soft-Q objective as
$$\widehat{H}_C(Q,\pi) = -\beta\,\sum_{s\in\mathcal{D}}\mathbb{E}_{a\sim\mu(\cdot|s)}\big[Q(s,a)\big] + \widehat{H}(Q,\pi), \tag{8}$$
where $\mu(a|s)$ is a particular state-action distribution. We note that in (8) the conservative term is added to the objective function, while in the conservative Q-learning algorithm [23] this term is added to the Bellman-error objective of each Q-function update. This difference makes the theory developed in [23] not directly applicable. In Proposition 3.6 below, we show that solving $\max_Q\widehat{H}_C(Q,\pi)$ always yields a Q-function that lower-bounds (in aggregate) the Q-function obtained by solving $\max_Q\widehat{H}(Q,\pi)$.

Proposition 3.6. Let $\widehat{Q} = \operatorname{argmax}_Q\widehat{H}(Q,\pi)$ and $\widehat{Q}_C = \operatorname{argmax}_Q\widehat{H}_C(Q,\pi)$. Then
$$\sum_{s\in\mathcal{D}}\mathbb{E}_{a\sim\mu(\cdot|s)}\big[\widehat{Q}_C(s,a)\big] \le \sum_{s\in\mathcal{D}}\mathbb{E}_{a\sim\mu(\cdot|s)}\big[\widehat{Q}(s,a)\big].$$
We can adjust the scale parameter $\beta$ in Equation 8 to regulate the conservativeness of the objective. Intuitively, if we optimize $Q$ over a lower-bounded Q-space, increasing $\beta$ will force each $Q(s,a)$ towards its lower bound. Consequently, when $\beta$ is sufficiently large, $\widehat{Q}_C$ will point-wise lower-bound $\widehat{Q}$, i.e., $\widehat{Q}_C(s,a) \le \widehat{Q}(s,a)$ for all $(s,a)$.

4 Experiments

4.1 Experiment Setup

Baselines. We compare our SPRINQL with SOTA algorithms for offline IL with sub-optimal demonstrations: TRAIL [38], DemoDICE [21], and DWBC [37]. Moreover, we also compare with other straightforward baselines: BC with only the sub-optimal datasets (BC-O), BC with only the expert data (BC-E), BC with all datasets (BC-both), BC with a fixed weight for each dataset (W-BC), SQIL [29], and IQ-learn [13] with only expert data. In particular, since TRAIL, DemoDICE, and DWBC were designed to work with only two datasets (one expert and one supplementary), in problems with multiple sub-optimal datasets we combine all the sub-optimal datasets into one single supplementary dataset. Meanwhile, BC-E, BC-O, and BC-both apply standard BC to their respective datasets (with BC-both combining all available data into one large dataset), while W-BC optimizes $\sum_i \mathbb{E}_{(s,a)\sim\mathcal{D}_i}\big[w_i\ln\pi(a|s)\big]$, where the $w_i$ are our weight parameters. T-REX [5] is a PPO-based IL algorithm that can work with ranked demonstrations. It is, however, not suitable for our offline setting, so we do not include it in the main comparisons; nevertheless, we later conduct an ablation study comparing SPRINQL with an adapted version of T-REX. Moreover, [15] developed IPL for learning from offline preference data; this approach requires comparisons between every pair of trajectories and is thus not suitable for our context.

Environments and Data Generation. We test on five Mujoco tasks [32] and four arm-manipulation tasks from Panda-gym [12]. The maximum number of $(s,a)$ transitions per trajectory is 1000 (or 1k for short) for the Mujoco tasks and 50 for the Panda-gym tasks (descriptions in Appendix B.3). The sub-optimal demonstrations have been generated by randomly adding noise to expert actions and interacting with the environments. We generated large datasets of expert and non-expert demonstrations; for each seed, we randomly sample subsets of demonstrations for testing. This approach allows us to test with different datasets across seeds, rather than using fixed datasets for all seeds as in previous works. More details on these generated datasets can be found in Appendix B.4.

Metric. The return is normalized as score = (return − R.return) / (E.return − R.return) × 100, where R.return is the mean return of a random policy and E.return is the mean return of the expert policy. The scores are calculated by taking the last ten evaluation scores of each seed, with five seeds per reported result.
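For concreteness, the normalized score can be computed as follows (the example returns below are made up):

```python
def normalized_score(ret: float, random_return: float, expert_return: float) -> float:
    """score = (return - R.return) / (E.return - R.return) * 100."""
    return (ret - random_return) / (expert_return - random_return) * 100.0

print(normalized_score(ret=3500.0, random_return=-280.0, expert_return=5000.0))  # ~71.6
```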
Experimental Concerns. Throughout the experiments, we aim to answer the following questions:
(Q1) How does SPRINQL perform compared to other baselines?
(Q2) How do the distribution matching and reward regularization terms impact the performance of SPRINQL?
(Q3) What happens if we augment (or reduce) the expert data while maintaining the sub-optimal datasets?
(Q4) What happens if we augment (or reduce) the sub-optimal data while maintaining the expert dataset?
(Q5) How does the conservative term help in our approach?
(Q6) How does increasing N (the number of expertise levels) affect the performance of SPRINQL?
(Q7) Does the preference-based weight learning approach provide good values for the weights $w_i$?
(Q8) How does SPRINQL perform in recovering the ground-truth reward function?

4.2 Main Comparison Results

In this section, we provide comparison results to answer (Q1) with three datasets (i.e., N = 3). Additional comparison results for N = 2 can be found in the appendix. From lowest to highest expertise levels, we randomly sample (25k-10k-1k) transitions for Mujoco tasks and (10k-5k-100) transitions for Panda-gym tasks for every seed (details of these three-level datasets are provided in Appendix B.4). Table 1 shows comparison results across three Mujoco tasks and three Panda-gym tasks (the full results for all nine environments are provided in Appendix C.1). In general, SPRINQL significantly outperforms the other baselines on all the tasks.

| Method | Cheetah | Ant | Humanoid | Push | PnP | Slide | Avg |
|---|---|---|---|---|---|---|---|
| BC-E | -3.2 ± 0.9 | 6.4 ± 19.1 | 1.3 ± 0.2 | 8.2 ± 3.8 | 3.7 ± 2.7 | 0.0 ± 0.0 | 2.7 |
| BC-O | 14.2 ± 2.9 | 35.2 ± 20.1 | 10.6 ± 6.3 | 8.8 ± 4.5 | 3.9 ± 2.7 | 0.1 ± 0.3 | 12.1 |
| BC-both | 13.2 ± 3.6 | 47.0 ± 5.9 | 9.0 ± 3.5 | 9.0 ± 4.3 | 4.4 ± 3.0 | 0.1 ± 0.4 | 13.8 |
| W-BC | 12.9 ± 2.8 | 47.3 ± 6.4 | 19.6 ± 19.0 | 8.8 ± 4.3 | 3.7 ± 2.8 | 0.0 ± 0.0 | 15.4 |
| TRAIL | -4.1 ± 0.3 | -4.7 ± 1.9 | 2.6 ± 0.6 | 11.7 ± 4.0 | 7.8 ± 3.7 | 1.7 ± 1.8 | 3.9 |
| IQ-E | -3.4 ± 0.6 | -3.4 ± 1.3 | 2.4 ± 0.6 | 26.3 ± 10.9 | 18.1 ± 12.5 | 0.1 ± 0.4 | 6.7 |
| IQ-both | -6.1 ± 1.4 | -58.2 ± 0.0 | 0.8 ± 0.0 | 8.3 ± 3.9 | 3.8 ± 3.3 | 0.0 ± 0.2 | -8.6 |
| SQIL-E | -5.0 ± 0.7 | -33.8 ± 7.4 | 0.9 ± 0.1 | 9.6 ± 3.3 | 3.2 ± 2.9 | 0.1 ± 0.3 | -4.2 |
| SQIL-both | -5.6 ± 0.5 | -58.0 ± 0.4 | 0.8 ± 0.0 | 8.2 ± 3.8 | 3.3 ± 2.3 | 0.1 ± 0.3 | -12.6 |
| DemoDICE | 0.4 ± 2.0 | 31.7 ± 8.9 | 2.6 ± 0.8 | 8.1 ± 3.7 | 4.3 ± 2.4 | 0.1 ± 0.5 | 7.9 |
| DWBC | -0.2 ± 2.5 | 10.4 ± 5.0 | 3.7 ± 0.3 | 36.9 ± 7.4 | 25.0 ± 6.3 | 11.6 ± 4.4 | 14.6 |
| SPRINQL (ours) | 73.6 ± 4.3 | 77.0 ± 5.6 | 82.9 ± 11.2 | 72.0 ± 5.3 | 63.2 ± 6.4 | 37.7 ± 6.6 | 67.7 |

Table 1: Comparison results (normalized scores, mean ± std) for three Mujoco tasks (Cheetah, Ant, Humanoid) and three Panda-gym tasks (Push, PnP, Slide).

4.3 Ablation Study - No Distribution Matching and No Reward Regularizer

We aim to assess the importance of the distribution matching and reward regularizer terms in our objective (Q2). To this end, we conduct an ablation study comparing SPRINQL with two variants: (i) noReg-SPRINQL, derived by removing the reward regularizer term from (6), and (ii) noDM-SPRINQL, obtained by removing the distribution matching term from (6). Here, we note that noDM-SPRINQL performs Q-learning using the reference reward function; this can be viewed as an adaptation of the T-REX algorithm [5] to our offline setting. The conservative Q-learning term is employed in SPRINQL and in both variants to enhance stability. The comparisons for N = 2 and N = 3 on five Mujoco tasks are shown in Figure 1 (the full comparison results for all tasks are provided in the appendix). These results clearly show that SPRINQL outperforms the other variants, indicating the value of both terms in our objective function.

Figure 1: Comparison of the three variants (noReg-SPRINQL, noDM-SPRINQL, SPRINQL) against the expert across five Mujoco environments, for two and three expertise levels.

4.4 Other Experiments

Experiments addressing the other questions are provided in the appendix. Specifically, Sections C.1 and C.2 provide full comparison results for all the Mujoco and Panda-gym tasks with three and two datasets (i.e., N = 3 and N = 2), complementing the answer to Q1. Section C.3 provides the learning curves of the three variants considered in Section 4.3 above (answering Q2). Section C.4 provides experiments to answer Q3 (what happens if we augment the expert dataset?) and Section C.5 addresses Q4 (what happens if we augment the sub-optimal dataset?). Section C.6 experimentally shows the impact of the conservative term in our approach (Q5). Section C.7 reports the performance of SPRINQL with varying numbers of expertise levels N (Q6). Section C.8 addresses Q7, and Section C.10 shows how SPRINQL performs in terms of reward recovery (Q8).
In addition, Section C.11 reports the distributions of the reference rewards, and Section C.12 provides the range for choosing α. Concretely, our extensive experiments reveal the following: (i) SPRINQL outperforms other baselines with two, three, or even larger numbers of datasets; (ii) the conservative term, the distribution matching term, and the reward regularizer term are all essential to our objective: all three significantly contribute to the success of SPRINQL; (iii) the preference-based weight learning provides good estimates for the weights $w_i$; and (iv) SPRINQL performs well in recovering rewards, showing a high positive correlation with the ground-truth rewards, justifying the use of our method for IRL.

5 Conclusion and Limitations

(Conclusion) We have developed SPRINQL, a novel non-adversarial inverse soft-Q learning algorithm for offline imitation learning from expert and sub-optimal demonstrations. We have demonstrated that our algorithm possesses several favorable properties, contributing to its well-behaved, stable, and scalable nature. Additionally, we have devised a preference-based loss function to automate the estimation of the reference reward values. We have provided extensive experiments on several benchmark tasks, demonstrating the ability of our SPRINQL algorithm to leverage both expert and non-expert data to achieve superior performance compared to state-of-the-art algorithms.

(Limitations) Some limitations of this work include: (i) SPRINQL (and the other baselines) still requires a large amount of sub-optimal data with well-identified expertise levels to learn effectively; (ii) there is a lack of theoretical investigation of how the sizes of the expert and non-expert datasets affect the performance of Q-learning, which we find challenging to address; and (iii) it lacks a theoretical exploration of how the reward regularizer term enhances the distribution matching term when expert samples are scarce; this question is relevant and interesting but also challenging to address. These limitations pave the way for our future work.

Acknowledgement

This research is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-016) and the Lee Kuan Yew Fellowship awarded to Pradeep Varakantham.

References

[1] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, page 1, 2004.
[2] Anurag Ajay, Yilun Du, Abhi Gupta, Joshua B Tenenbaum, Tommi S Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision making? In The Eleventh International Conference on Learning Representations, 2022.
[3] Stephen P Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.
[5] Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In International Conference on Machine Learning, pages 783-792. PMLR, 2019.
[6] Daniel S Brown, Wonjoon Goo, and Scott Niekum. Better-than-demonstrator imitation learning via automatically-ranked demonstrations. In Conference on Robot Learning, pages 330-359. PMLR, 2020.
[7] Letian Chen, Rohan Paleja, and Matthew Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. In Conference on Robot Learning, pages 1262-1277. PMLR, 2021.
[8] Robert Dadashi, Léonard Hussenot, Matthieu Geist, and Olivier Pietquin. Primal Wasserstein imitation learning. In ICLR, 2021.
[9] Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, and Sergey Levine. Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in Neural Information Processing Systems, 33:13049-13061, 2020.
[10] Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
[11] Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pages 1587-1596. PMLR, 2018.
[12] Quentin Gallouédec, Nicolas Cazin, Emmanuel Dellandréa, and Liming Chen. panda-gym: Open-source goal-conditioned environments for robotic learning. arXiv preprint arXiv:2106.13687, 2021.
[13] Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. IQ-Learn: Inverse soft-Q learning for imitation. Advances in Neural Information Processing Systems, 34:4028-4039, 2021.
[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[15] Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based RL without a reward function. Advances in Neural Information Processing Systems, 36, 2023.
[16] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in Neural Information Processing Systems, 29, 2016.
[17] Huy Hoang, Tien Mai, and Pradeep Varakantham. Imitate the good and avoid the bad: An incremental approach to safe reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 12439-12447, 2024.
[18] Minh-Huy Hoang, Long Dinh, and Hai Nguyen. Learning from pixels with expert observations. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1200-1206, 2023.
[19] Ozsel Kilinc and Giovanni Montana. Reinforcement learning for robotic manipulation using simulated locomotion demonstrations. Machine Learning, pages 1-22, 2022.
[20] Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, and Kee-Eung Kim. LobsDICE: Offline learning from observation via stationary distribution correction estimation. Advances in Neural Information Processing Systems, 35:8252-8264, 2022.
[21] Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, Hyeong Joo Hwang, Hongseok Yang, and Kee-Eung Kim. DemoDICE: Offline imitation learning with supplementary imperfect demonstrations. In International Conference on Learning Representations, 2021.
[22] Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. Imitation learning via off-policy distribution matching. In International Conference on Learning Representations, 2019.
[23] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.
[24] Wenjun Li and Pradeep Varakantham. Generalization through diversity: Improving unsupervised environment design. In International Joint Conference on Artificial Intelligence, 2023.
[25] Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections. Advances in Neural Information Processing Systems, 32, 2019.
[26] Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
[27] Mila Nambiar, Supriyo Ghosh, Priscilla Ong, Yu En Chan, Yong Mong Bee, and Pavitra Krishnaswamy. Deep offline reinforcement learning for real-world treatment optimization applications. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4673-4684, 2023.
[28] Aniruddh Raghu, Matthieu Komorowski, Imran Ahmed, Leo Celi, Peter Szolovits, and Marzyeh Ghassemi. Deep reinforcement learning for sepsis treatment. arXiv preprint arXiv:1711.09602, 2017.
[29] Siddharth Reddy, Anca D Dragan, and Sergey Levine. SQIL: Imitation learning via reinforcement learning with sparse rewards. arXiv preprint arXiv:1905.11108, 2019.
[30] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627-635. JMLR Workshop and Conference Proceedings, 2011.
[31] Fumihiro Sasaki and Ryota Yamashina. Behavioral cloning from noisy demonstrations. In International Conference on Learning Representations, 2020.
[32] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012.
[33] Yunke Wang, Chang Xu, Bo Du, and Honglak Lee. Learning to weight imperfect demonstrations. In International Conference on Machine Learning, pages 10961-10970. PMLR, 2021.
[34] Wei-Hung Weng, Mingwu Gao, Ze He, Susu Yan, and Peter Szolovits. Representation and reinforcement learning for personalized glycemic control in septic patients. arXiv preprint arXiv:1712.00654, 2017.
[35] Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[36] Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, and Masashi Sugiyama. Imitation learning from imperfect demonstration. In International Conference on Machine Learning, pages 6818-6827. PMLR, 2019.
[37] Haoran Xu, Xianyuan Zhan, Honglei Yin, and Huiling Qin. Discriminator-weighted offline imitation learning from suboptimal demonstrations. In Proceedings of the 39th International Conference on Machine Learning, pages 24725-24742, 2022.
[38] Mengjiao Yang, Sergey Levine, and Ofir Nachum. TRAIL: Near-optimal imitation learning with suboptimal data. In International Conference on Learning Representations, 2021.
[39] Lantao Yu, Tianhe Yu, Jiaming Song, Willie Neiswanger, and Stefano Ermon. Offline imitation learning with suboptimal demonstrations via relaxed distribution matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 11016-11024, 2023.
[40] Chi Zhang, Sanmukh Rao Kuppannagari, and Viktor K. Prasanna. BRAC+: Improved behavior regularized actor critic for offline reinforcement learning. arXiv preprint arXiv:2110.00894, 2021.
[41] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433-1438, 2008.
A Missing Proofs

We provide proofs for the theoretical results claimed in the main paper.

Proposition 3.1. $H(Q,\pi)$ is concave in $Q$ but is not convex in $\pi$.

Proof. Recall that
$$H(Q,\pi) = \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[\mathcal{T}^\pi[Q](s,a)\big] - \mathbb{E}_{\rho_\pi}\big[\mathcal{T}^\pi[Q](s,a)\big] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)] - \alpha\,\mathbb{E}_{\rho_U}\big[(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2\big], \tag{9}$$
where $\mathcal{T}^\pi[Q](s,a) = Q(s,a) - \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}[V^\pi(s')]$ and $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q(s,a)-\log\pi(a|s)]$. We see that $\mathcal{T}^\pi[Q](s,a)$ is linear in $Q$ for any $(s,a)$. Thus, the first and second terms of (9) are linear in $Q$. The last term of (9) involves a sum of squares of linear functions of $Q$, which are convex; with the leading minus sign, this term is therefore concave. So $H(Q,\pi)$ is concave in $Q$.

To see that $H(Q,\pi)$ is generally not convex in $\pi$, we consider a quadratic component $(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2$ of the reward-regularization term and exhibit $Q$ and $\bar{r}$ values for which this component is convex in $\pi$. We first write
$$\mathcal{T}^\pi[Q](s,a) - \bar{r}(s,a) = Q(s,a) - \gamma\sum_{s'} P(s'|s,a)\sum_{a'} \pi(a'|s')\big(Q(s',a') - \log\pi(a'|s')\big) - \bar{r}(s,a).$$
For simplification, let us choose $Q(s',a') = 0$ for all $s'$ such that $P(s'|s,a) > 0$. This simplifies the above to
$$\mathcal{T}^\pi[Q](s,a) - \bar{r}(s,a) = \gamma\sum_{s'} P(s'|s,a)\sum_{a'} \pi(a'|s')\log\pi(a'|s') + Q(s,a) - \bar{r}(s,a).$$
We further see that, for any $s\in\mathcal{S}$, $\sum_{a\in\mathcal{A}}\pi(a|s)\log\pi(a|s)$ achieves its minimum value at $\pi(a|s)=1/|\mathcal{A}|$ for all $a\in\mathcal{A}$, so $\sum_{a}\pi(a|s)\log\pi(a|s) \ge \log(1/|\mathcal{A}|)$ for any policy $\pi$. As a result,
$$\gamma\sum_{s'} P(s'|s,a)\sum_{a'} \pi(a'|s')\log\pi(a'|s') \ \ge\ \gamma\log\frac{1}{|\mathcal{A}|}.$$
So if we select $Q(s,a)$ such that $Q(s,a)-\bar{r}(s,a)-\gamma\log|\mathcal{A}| \ge 0$, then $\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a)\ge 0$ for any $\pi$. Now consider the quadratic function $\Gamma(\pi) = (\lambda(\pi))^2$, where $\lambda(\pi)=\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a)$. Since each term $\pi(a'|s')\log\pi(a'|s')$ is convex in $\pi$, $\lambda(\pi)$ is convex in $\pi$. To show that $\Gamma(\pi)$ is convex in $\pi$, take any two policies $\pi_1,\pi_2$ and $\eta\in[0,1]$ and write
$$\eta\Gamma(\pi_1) + (1-\eta)\Gamma(\pi_2) \overset{(a)}{\ge} \big(\eta\lambda(\pi_1)+(1-\eta)\lambda(\pi_2)\big)^2 \overset{(b)}{\ge} \big(\lambda(\eta\pi_1+(1-\eta)\pi_2)\big)^2 = \Gamma(\eta\pi_1+(1-\eta)\pi_2),$$
where (a) holds because $h(t)=t^2$ is convex in $t$, and (b) holds because (i) $\eta\lambda(\pi_1)+(1-\eta)\lambda(\pi_2) \ge \lambda(\eta\pi_1+(1-\eta)\pi_2)$ (as $\lambda(\pi)$ is convex in $\pi$), and (ii) both sides of (i) are non-negative and $h(t)=t^2$ is increasing for $t\ge 0$. So, with the $Q$ values chosen above, $(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2$ is convex in $\pi$ and $-\alpha(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2$ is concave. Intuitively, when $\alpha$ is sufficiently large, $H(Q,\pi)$ is then almost concave in $\pi$ (so not convex), which is the desired result.

Proposition 3.2. $H(Q,\pi)$ may not necessarily be minimized at $\pi_Q$ such that $\pi_Q = \operatorname{argmax}_\pi V^\pi(s)$ for all $s\in\mathcal{S}$.

Proof. We first write $H(Q,\pi)$ as
$$H(Q,\pi) = \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[Q(s,a)-\gamma\,\mathbb{E}_{s'}[V^\pi(s')]\big] - (1-\gamma)\,\mathbb{E}_{s_0}[V^\pi(s_0)] - \alpha\,\mathbb{E}_{\rho_U}\big[(Q(s,a)-\bar{r}(s,a)-\gamma\,\mathbb{E}_{s'}[V^\pi(s')])^2\big]. \tag{10}$$
We then see that the terms $\mathbb{E}_{\rho_i}[Q(s,a)-\gamma\,\mathbb{E}_{s'}[V^\pi(s')]]$ and $-(1-\gamma)\,\mathbb{E}_{s_0}[V^\pi(s_0)]$ are minimized (over $\pi$) when $V^\pi(s)$, for all $s$, is maximized. We will show that this is not the case for the last term. Let us choose $Q$ and $\bar{r}$ such that $Q(s,a)-\bar{r}(s,a) > \gamma\,\mathbb{E}_{s'}[V^{\pi_Q}(s')]$. Then, for any policy $\pi\neq\pi_Q$, we have
$$Q(s,a)-\bar{r}(s,a) > \gamma\,\mathbb{E}_{s'}[V^{\pi_Q}(s')] \ge \gamma\,\mathbb{E}_{s'}[V^{\pi}(s')].$$
Thus,
$$Q(s,a)-\bar{r}(s,a) - \gamma\,\mathbb{E}_{s'}[V^{\pi}(s')] \ \ge\ Q(s,a)-\bar{r}(s,a) - \gamma\,\mathbb{E}_{s'}[V^{\pi_Q}(s')] \ >\ 0,$$
which implies that
$$-\alpha\,\mathbb{E}_{\rho_U}\big[(Q(s,a)-\bar{r}(s,a)-\gamma\,\mathbb{E}_{s'}[V^{\pi}(s')])^2\big] \ \le\ -\alpha\,\mathbb{E}_{\rho_U}\big[(Q(s,a)-\bar{r}(s,a)-\gamma\,\mathbb{E}_{s'}[V^{\pi_Q}(s')])^2\big].$$
So the last term of (10) would not be minimized at $\pi = \pi_Q$; in fact, in the above scenario, this last term is maximized at $\pi = \pi_Q$. As a result, there is always an $\alpha$ sufficiently large such that the last term significantly dominates the other terms and $H(Q,\pi)$ is not minimized at $\pi = \pi_Q$.

Proposition 3.3. For any $Q\ge 0$, we have $\widehat{\Gamma}(Q) \le \Gamma(Q)$ and $\max_{Q\ge 0}\widehat{\Gamma}(Q) \le \max_{Q\ge 0}\Gamma(Q)$. Moreover, $\Gamma(Q) = \widehat{\Gamma}(Q)$ if $Q(s,a) \le \bar{r}(s,a)$ for all $(s,a)$.

Proof. We first write $\widehat{H}(Q,\pi)$ as
$$\widehat{H}(Q,\pi) = \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[\mathcal{T}^\pi[Q](s,a)\big] - \mathbb{E}_{\rho_\pi}\big[\mathcal{T}^\pi[Q](s,a)\big] + \mathbb{E}_{\rho_\pi}[\log\pi(a|s)] - \alpha\,\mathbb{E}_{\rho_U}\Big[(Q(s,a)-\bar{r}(s,a))^2 + \gamma^2\big(\mathbb{E}_{s'}[V^\pi(s')]\big)^2 + 2\gamma\,\mathrm{ReLU}\big(\bar{r}(s,a)-Q(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')]\Big].$$
Since $Q \ge 0$, we have $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q(s,a)-\log\pi(a|s)] \ge 0$. Thus
$$2\gamma\,\mathrm{ReLU}\big(\bar{r}(s,a)-Q(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')] \ \ge\ 2\gamma\big(\bar{r}(s,a)-Q(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')].$$
As a result, the bracket in the last term of $\widehat{H}(Q,\pi)$ is bounded from below by
$$(Q(s,a)-\bar{r}(s,a))^2 + \gamma^2\big(\mathbb{E}_{s'}[V^\pi(s')]\big)^2 - 2\gamma\big(Q(s,a)-\bar{r}(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')] = \big(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a)\big)^2,$$
so the last term of $\widehat{H}(Q,\pi)$ is bounded from above by $-\alpha\,\mathbb{E}_{\rho_U}\big[(\mathcal{T}^\pi[Q](s,a)-\bar{r}(s,a))^2\big]$. It then follows that $\widehat{H}(Q,\pi) \le H(Q,\pi)$ for every $\pi$. Thus, $\min_\pi\widehat{H}(Q,\pi) \le \min_\pi H(Q,\pi)$, i.e., $\widehat{\Gamma}(Q) \le \Gamma(Q)$, and consequently $\max_Q\widehat{\Gamma}(Q) \le \max_Q\Gamma(Q)$. Moreover, if $\bar{r}(s,a) \ge Q(s,a)$ for all $(s,a)$, then $\mathrm{ReLU}(\bar{r}(s,a)-Q(s,a)) = \bar{r}(s,a)-Q(s,a)$, implying that $\widehat{H}(Q,\pi) = H(Q,\pi)$ and $\widehat{\Gamma}(Q) = \Gamma(Q)$. This completes the proof.

Theorem 3.4. For any $Q\ge 0$, the following results hold: (i) the inner minimization problem $\min_\pi\widehat{H}(Q,\pi)$ has a unique optimal solution $\pi_Q$ such that $\pi_Q = \operatorname{argmax}_\pi V^\pi(s)$ for all $s\in\mathcal{S}$, with
$$\pi_Q(a|s) = \frac{\exp(Q(s,a))}{\sum_{a'}\exp(Q(s,a'))};$$
(ii) $\max_\pi V^\pi(s) = \log\big(\sum_a \exp(Q(s,a))\big) \overset{\text{def}}{=} V^Q(s)$; (iii) $\widehat{\Gamma}(Q)$ is concave for $Q\ge 0$.

Proof. We first rewrite the formulation of $\widehat{H}(Q,\pi)$ as
$$\widehat{H}(Q,\pi) = \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[Q(s,a)-\gamma\,\mathbb{E}_{s'}[V^\pi(s')]\big] - (1-\gamma)\,\mathbb{E}_{s_0}[V^\pi(s_0)] - \alpha\,\mathbb{E}_{\rho_U}\Big[(Q(s,a)-\bar{r}(s,a))^2 + \gamma^2\big(\mathbb{E}_{s'}[V^\pi(s')]\big)^2 + 2\gamma\,\mathrm{ReLU}\big(\bar{r}(s,a)-Q(s,a)\big)\,\mathbb{E}_{s'}[V^\pi(s')]\Big].$$
The first and second terms of the above expression are minimized over $\pi$ when $V^\pi(s)$ is maximized for every $s$, i.e., at $\pi = \pi_Q$. For the last term, since $V^\pi(s)\ge 0$ (because $Q\ge 0$), both $\gamma^2(\mathbb{E}_{s'}[V^\pi(s')])^2$ and $2\gamma\,\mathrm{ReLU}(\bar{r}(s,a)-Q(s,a))\,\mathbb{E}_{s'}[V^\pi(s')]$ are non-decreasing in $V^\pi$, so the last term (with its leading minus sign) is also minimized at $\pi = \pi_Q$. Hence $\widehat{H}(Q,\pi)$ is minimized at $\pi = \pi_Q$, as desired. (ii) is already proved in [13]. For (iii), we rewrite the bracket of the last term as
$$(Q-\bar{r})^2 + \gamma^2\big(\mathbb{E}_{s'}V^\pi\big)^2 + 2\gamma\,\mathrm{ReLU}(\bar{r}-Q)\,\mathbb{E}_{s'}V^\pi = \big(\mathrm{ReLU}(\bar{r}-Q) + \gamma\,\mathbb{E}_{s'}V^\pi\big)^2 + \big(\min\{0,\bar{r}-Q\}\big)^2,$$
which can be checked by considering the two cases $\bar{r}\ge Q$ and $\bar{r}<Q$. Therefore
$$\widehat{H}(Q,\pi) = \sum_{i\in[N]} w_i\,\mathbb{E}_{\rho_i}\big[\mathcal{T}^\pi[Q](s,a)\big] - (1-\gamma)\,\mathbb{E}_{s_0,a\sim\pi(\cdot|s_0)}\big[Q(s_0,a)-\log\pi(a|s_0)\big] - \alpha\,\mathbb{E}_{\rho_U}\Big[\big(\mathrm{ReLU}(\bar{r}(s,a)-Q(s,a)) + \gamma\,\mathbb{E}_{s'}[V^\pi(s')]\big)^2 + \big(\min\{0,\bar{r}(s,a)-Q(s,a)\}\big)^2\Big]. \tag{12}$$
For any fixed $\pi$, the first and second terms of (12) are linear in $Q$ (recall that $V^\pi(s)$ is linear in $Q$). The fourth term is concave in $Q$, because $(\min\{0,\bar{r}-Q\})^2 = (\mathrm{ReLU}(Q-\bar{r}))^2$ is the square of a non-negative convex function of $Q$. For the third term, let $\Phi(Q) = \mathrm{ReLU}(\bar{r}(s,a)-Q(s,a)) + \gamma\,\mathbb{E}_{s'}[V^\pi(s')]$. We see that $\Phi(Q)\ge 0$ for any $Q\ge 0$ and $\Phi(Q)$ is convex in $Q$ (because $V^\pi(s)$ is linear in $Q$). It then follows that, for any $\eta\in[0,1]$ and $Q_1,Q_2\ge 0$,
$$\eta(\Phi(Q_1))^2 + (1-\eta)(\Phi(Q_2))^2 \overset{(a)}{\ge} \big(\eta\Phi(Q_1)+(1-\eta)\Phi(Q_2)\big)^2 \overset{(b)}{\ge} \big(\Phi(\eta Q_1+(1-\eta)Q_2)\big)^2, \tag{13}$$
where (a) is due to the fact that $h(t)=t^2$ is convex, and (b) holds because $h(t)=t^2$ is non-decreasing for all $t\ge 0$, and $\Phi(Q)$ is convex and always takes non-negative values. The inequality (13) implies that $(\Phi(Q))^2$ is convex in $Q$, so the third term of (12) is concave in $Q$. Putting everything together, we conclude that $\widehat{H}(Q,\pi)$ is concave in $Q$ for every fixed $\pi$, and hence $\widehat{\Gamma}(Q) = \min_\pi\widehat{H}(Q,\pi)$, as a pointwise minimum of concave functions, is concave for $Q\ge 0$, as desired.

Proposition 3.5. $L(\bar{r})$ is strictly convex in $\bar{r}$.

Proof. We first write $L(\bar{r})$ as
$$L(\bar{r}) = \sum_{i\in[N]}\ \sum_{(s,a),(s',a')\in\mathcal{D}_i}\big(\bar{r}(s,a)-\bar{r}(s',a')\big)^2\ -\ \sum_{h,k\in[N],\,h>k}\ \sum_{\tau_i\in\mathcal{D}_h,\ \tau_j\in\mathcal{D}_k}\ln P(\tau_i\prec\tau_j).$$

$w_1 > w_2 > \ldots > w_N$ and $\sum_{i\in[N]} w_i = 1$. The comparison results are shown in Figure 17.

Figure 17: Experiment results for SPRINQL with different manual selections of the weights $W$ (uniform, reduced, and increased weights, compared against the automatically learned weights and the expert).

C.9 D4RL Mujoco dataset

In this section, we present additional experiments using the official D4RL dataset.
Unfortunately, since our algorithm requires meaningful demonstrations, we exclude the Random dataset and are only able to test with N = 2 (Medium and Expert datasets). The detailed results are shown in Figure 18.

Figure 18: Performance on the D4RL dataset (DemoDICE, DWBC, BC-all, SPRINQL, and the expert; x-axis: training steps, up to 1e6) with Medium (25,000 transitions) and Expert (1,000 transitions) data. The results are reported from 5 seeds per method.

C.10 Reward Recovering

In this experiment, we want to answer (Q8): how does SPRINQL perform in recovering the ground-truth reward function? Compared to BC-based algorithms, one notable advantage of Q-learning-based algorithms is their ability to recover the reward function. Here, we present experiments demonstrating reward-function recovery across five Mujoco tasks, comparing recovered rewards to the actual reward function. To achieve this, we introduce increasing levels of random noise to the actions of a trained agent and observe its interactions with the environment. We collect the state, action, and next state for each trajectory, then predict the recovered reward and compare it to the true reward from the environment (a code sketch of this protocol is given below, after Figure 20). For the sake of comparison, we include noReg-SPRINQL, which can be considered an adaptation of IQ-learn [13] to our setting, and noDM-SPRINQL, which is in fact an adaptation of T-REX to our offline setting. Comparison results are presented in Figure 19. We observe a linear relationship between the true and predicted rewards for SPRINQL across all testing tasks, whereas the other approaches fail to return correct relationships for some tasks.

Figure 19: Recovered (predicted) return versus true return for noReg-SPRINQL, noDM-SPRINQL, and SPRINQL across five Mujoco environments.

C.11 Reference Reward Distribution

From the experiment reported in Table 1, we plot the distributions of the reference reward values in Figure 20, where the x-axis shows the level indexes and the y-axis shows reward values. The rewards follow the desired distributions, with larger rewards assigned to higher expertise levels. Moreover, the rewards learned for expert demonstrations are consistently and significantly higher, and exhibit smaller variances, compared to those learned for sub-optimal transitions.

Figure 20: Whisker plots illustrating the reference reward distributions of the three datasets for each environment, for one seed.
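The following sketch illustrates the noisy-rollout protocol described in Section C.10. It is our own illustration: `actor` and `recovered_reward` are hypothetical stand-ins for the trained SPRINQL networks, and the gymnasium API and environment id (which requires the MuJoCo extras) are assumptions of this sketch.

```python
# Sketch of the reward-recovery evaluation: roll out increasingly noisy versions of a
# policy, record true returns, score each trajectory with the recovered reward, and
# measure the correlation between true and predicted returns.
import gymnasium as gym
import numpy as np

def rollout_return(env, actor, recovered_reward, noise_scale, rng, max_steps=1000):
    obs, _ = env.reset(seed=int(rng.integers(0, 10**6)))
    true_ret, pred_ret = 0.0, 0.0
    for _ in range(max_steps):
        act = actor(obs) + noise_scale * rng.standard_normal(env.action_space.shape)
        act = np.clip(act, env.action_space.low, env.action_space.high)
        next_obs, reward, terminated, truncated, _ = env.step(act)
        true_ret += reward
        pred_ret += recovered_reward(obs, act)      # learned reward model, e.g. T^pi[Q]
        obs = next_obs
        if terminated or truncated:
            break
    return true_ret, pred_ret

rng = np.random.default_rng(0)
env = gym.make("HalfCheetah-v4")                                   # assumed environment id
actor = lambda obs: np.zeros(env.action_space.shape)               # placeholder policy
recovered_reward = lambda obs, act: float(obs[0] - 0.1 * np.sum(act ** 2))  # placeholder model

pairs = [rollout_return(env, actor, recovered_reward, ns, rng)
         for ns in np.linspace(0.0, 1.0, 11) for _ in range(3)]
true_r, pred_r = map(np.array, zip(*pairs))
print("Pearson correlation:", np.corrcoef(true_r, pred_r)[0, 1])
```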
C.12 Different α ablation study

As our objective is a combination of two terms with a balancing parameter α, we conduct additional experiments to evaluate the performance of our method across a range of α values. The detailed results are presented in Figure 21. Overall, the results indicate that α can be selected within the range of 0.1 to 10 to achieve good performance.

Figure 21: Performance of SPRINQL with different α (panels include Cheetah for N = 2 and N = 3; x-axis: α ∈ {0, 0.01, 0.1, 1, 10, 100, +∞}). The results are reported from 5 seeds per α value.

NeurIPS Paper Checklist

1. Claims. Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes]. Justification: Our abstract includes our main claims, reflecting our main contributions and findings.

2. Limitations. Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes]. Justification: We have a discussion of the limitations of our work in the conclusion section.
- The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory Assumptions and Proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: All the proofs of the theorems and propositions stated in the main paper are provided in the appendix with clear references.
Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in the appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: We provide details on the environments and hyper-parameter settings in the appendix. We also uploaded our source code for reproducibility purposes.
Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
  (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
  (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
  (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
  (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: We have described how to generate our data and provide it, along with our submitted source code, with sufficient instructions for their use.
Guidelines:
- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: We have detailed this information in the main paper and the appendix.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in the appendix, or as supplemental material.

7. Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [Yes]
Justification: We have reported the mean scores and standard deviations for the result tables. We have also shown training curves constructed from mean scores and shaded by standard error.
All the experiments are reported with multiple training seeds as well as different datasets.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: We have provided this information in the Hyperparameter and Experimental Implementations section of our appendix.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?
Answer: [Yes]
Justification:
Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: The paper presents a general offline imitation learning approach with multiple expertise levels and tests it only on simulated environments. As such, we do not foresee any direct societal impact.
Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: Our training data are generated from open-source simulated environments, which pose no risk of misuse.
Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models) used in the paper properly credited, and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: We have provided clear citations to the source code and data we used in the paper.
Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [Yes]
Justification: Our source code is submitted alongside the paper, accompanied by sufficient instructions. We will share the code publicly for reproducibility or benchmarking purposes.
Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: We have no crowdsourcing experiments.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: We do not have study participants.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.