Published in Transactions on Machine Learning Research (2/2025)

# Robust Preference Optimization through Reward Model Distillation

Adam Fisch (fisch@google.com), Google DeepMind
Jacob Eisenstein (jeisenstein@google.com), Google DeepMind
Vicky Zayats (vzayats@google.com), Google DeepMind
Alekh Agarwal (alekhagarwal@google.com), Google Research
Ahmad Beirami (beirami@google.com), Google DeepMind
Chirag Nagpal (chiragnagpal@google.com), Google Research
Peter Shaw (petershaw@google.com), Google DeepMind
Jonathan Berant (joberant@google.com), Google DeepMind

Reviewed on OpenReview: https://openreview.net/forum?id=E2zKNuwNDc

**Abstract.** Language model (LM) post-training (or alignment) involves maximizing a reward function that is derived from preference annotations. Direct Preference Optimization (DPO) is a popular offline alignment method that trains a policy directly on preference data without the need to train a reward model or apply reinforcement learning. However, empirical evidence suggests that DPO typically assigns implicit rewards that overfit and trend towards infinite magnitude. This frequently leads to degenerate policies, sometimes causing even the probabilities of the preferred generations to go to zero. In this work, we analyze this phenomenon and use distillation to get a better proxy for the true preference distribution over generation pairs: we train the LM such that its induced implicit reward, i.e., the scaled log-likelihood ratio of the model to the reference model, matches an explicit reward model trained on the preference data. Moreover, to account for uncertainty in the reward model we are distilling from, we optimize against a family of reward models that, as a whole, is likely to include at least one reasonable proxy for the preference distribution.
Our results show that distilling from such a family of reward models leads to improved robustness to distribution shift in preference annotations, while preserving the simple supervised nature of DPO.

## 1 Introduction

Language model (LM) post-training (or alignment) aims to steer language model policies towards responses that agree with human preferences. Early state-of-the-art approaches have focused on reward learning from human feedback. In this paradigm, preference annotations are used to train reward models, which then guide the optimization of the language model policy through online reinforcement learning (an approach broadly referred to as RLHF). Recent research on offline Direct Preference Optimization (DPO; Rafailov et al., 2023) and extensions thereof (Azar et al., 2024; Tang et al., 2024b; Meng et al., 2024), however, has demonstrated that it is possible to directly optimize policies on the preference data, which (a) bypasses the need for a separate reward model, and (b) uses standard supervised techniques rather than online reinforcement learning, which can be more difficult to optimize. These advantages have led to the adoption of offline alignment, and in particular offline DPO, as the post-training algorithm of choice in both smaller-scale academic settings as well as larger-scale projects such as Llama 3 (AI@Meta, 2024) and OLMo (Groeneveld et al., 2024). While this direct approach to preference optimization is attractive in its simplicity and efficiency, it also raises questions about the effectiveness and robustness of the resulting policies, as well as the broader utility of an explicit reward model beyond online reinforcement learning. In this paper, we argue that explicit reward modeling can, in fact, offer substantial practical and theoretical benefits.
In particular, we theoretically show that relying solely on the preference data can be a precarious strategy, with few natural brakes in place to prevent policies trained under the DPO objective from careening off towards degenerate policies when the preference data exhibits certain idiosyncratic properties. On the other hand, explicit reward models can easily be regularized and understood, regardless of whether they are Bradley-Terry models (Bradley and Terry, 1952), margin-based ranking models (Zhao et al., 2023), or any other function that correlates well with human preferences (Lee et al., 2023; Tang et al., 2024b; Swamy et al., 2024). Taking a step back from pure direct preference optimization, we first explore a method that merges the best of both worlds for the offline setting: an efficient reward model distillation algorithm that (i) operates effectively in the offline setting, (ii) makes minimal assumptions about the true, optimal reward we aim to maximize, and (iii) demonstrates greater robustness to the specific distribution of prompt/response data used for policy alignment. Drawing inspiration from prior knowledge distillation techniques (Hinton et al., 2015; Romero et al., 2015; Yang et al., 2019; Furlanello et al., 2018), we use the same change of variables trick employed in DPO to express the language model policy in terms of its implicit reward model (Rafailov et al., 2023). We then train the policy's implicit reward model to match our desired, explicit reward via an L2 loss that directly regresses the pairwise differences in target rewards for any two generation pairs (x, y1) and (x, y2).¹ Here we theoretically establish the equivalence between optimizing this simple distillation loss over a sufficiently diverse offline dataset of unlabeled examples, and optimizing the traditional online RLHF objective with reinforcement learning.
While this approach adds reward modeling and reward inference back into the pipeline, it still maintains much of the simplicity and efficiency of reward-model-free DPO. Specifically, in our setting, rewards for the training data can be computed offline, once, ahead of time. This computation is completely parallelizable, and reward inference is significantly faster than the autoregressive generation (also known as model rollout) that is done in online settings. Consequently, this allows the policy training framework to be nearly identical to standard DPO, modulo the structure of the data that is fed in. This is true regardless of the size of the policy that is trained (as the policy's implicit reward model does not need to be the same size as the explicit reward model) or its hyper-parameters (as the policy's hyper-parameters do not need to match those of the reward model). Reward model distillation, however, is still subject to some of the same challenges facing DPO-style training. In particular, distillation requires having a reliable reward model, but having a reliable reward still requires having a reliable method for extracting a reward model from a potentially noisy preference dataset. To address the uncertainty surrounding which reward model is the right one to optimize against, we also introduce a pessimistic extension to our approach. This extension aims to maximize the worst-case improvement of our model across a plausible family of reward models (e.g., those sufficiently consistent with annotated preference data). This strategy aligns with that of existing work in conservative offline reinforcement learning (Cheng et al., 2022; Kumar et al., 2020).

¹ We also note that a similar L2 reward alignment loss was recently explored in independent work by Mao et al. (2024) and Gao et al. (2024b), giving further support for its effectiveness. See a discussion of related work in §2.
We show that this pessimistic objective can be equivalently expressed and optimized by adding a simple additional KL-divergence regularization to the original distillation objective. Empirically, we find that reward model distillation, particularly pessimistic reward model distillation, leads to similar performance to prior direct preference optimization methods when the preference datasets used are unbiased. When the preference datasets are biased, however, it leads to significantly better performance when compared to DPO and the Identity Preference Optimization (IPO) framework of Azar et al. (2024), which was introduced as a more robust alternative to DPO. To further support these empirical observations, we provide an extensive theoretical analysis that both (i) sheds more light on the degenerative tendencies of DPO and issues inherent to its objective, and (ii) highlights relative advantages of our explicitly regularized approaches.

## 2 Related work

Recent work in offline alignment has focused on DPO (Rafailov et al., 2023) as a simpler alternative for aligning language models from preference data. Subsequent work, however, has identified issues with DPO, including weak regularization (Azar et al., 2024) and a tendency to decrease the probability of winning generations during training (Pal et al., 2024). Similar to some of the findings in this work, Rafailov et al. (2024a) theoretically showed the existence of minima to the empirical DPO loss that assign non-zero probability to outputs not present in the training data, which can lead to unexpected model behaviour. Here we further show that not only can such minima place non-zero probability on outputs not present in the training data, but they can also assign near-zero probability to preferred responses that do appear in the training data. A number of methods have since explored various avenues for combating these issues.
These include analyzing the impact of noise on DPO alignment (Gao et al., 2024a), proposing to update the reference policy during training (Gorbatovski et al., 2024), and suggesting a variant of IPO with a per-context margin (Amini et al., 2024). Additional research has focused on token-level alignment methods (Zeng et al., 2024; Rafailov et al., 2024b; Mudgal et al., 2024; Chakraborty et al., 2024) and on developing a unified view of various offline alignment methods (Tang et al., 2024b). Similar to our contributions towards using pessimism with respect to the correct choice of reward model for robust reward model distillation, existing work on offline RLHF has also focused on encompassing various forms of conservative reward penalties (Zhu et al., 2023; Liu et al., 2024; Zhan et al., 2024). Most relevant to our work on a technical level, concurrent work by Mao et al. (2024) also uses an offline loss that leverages targets from a learned reward model for supervision (as opposed to binary preference labels), which has the same form as the simple distillation loss analyzed in §5.1 of this paper, but does not explore pessimism. Similarly, the REBEL algorithm of Gao et al. (2024b) also studies the same reward regression loss, but in the online setting. This work builds upon these findings, and provides further analysis, as well as a solution based on pessimism together with reward distillation. As discussed in §1, offline settings are attractive since they allow for simple, efficient, and scalable training frameworks. At the same time, while offline alignment methods are popular, recent evidence suggests that online alignment methods such as RLHF (Christiano et al., 2017; Stiennon et al., 2020) can still lead to more favorable outcomes (Guo et al., 2024; Tajwar et al., 2024; Dong et al., 2024; Xu et al., 2024; Xiong et al., 2024; Calandriello et al., 2024), especially when high reward outputs have low probability under the base policy.
One of the advantages of online settings over offline settings is that many strategies for mitigating over-optimization that are simple to apply in online settings, such as reward shaping, are not as straightforward to apply in offline settings. For example, reward ensembles have been widely investigated recently as a mechanism for tackling reward hacking in online RLHF (Eisenstein et al., 2023; Coste et al., 2023; Zhai et al., 2023; Ramé et al., 2024), and in the context of multi-objective optimization (Moskovitz et al., 2023; Rame et al., 2024). Other notable examples for combating reward over-fitting and over-optimization by training reward models with regularized training objectives include iterative data smoothing (Zhu et al., 2024), which uses a trained model to softly label data during RLHF, and reward calibration from demonstrations (Rita et al., 2024). This work addresses some of the methodological and experimental gaps that exist between online and offline methods for RLHF, by allowing for explicitly designed, trained, and regularized reward models (or pessimistic reward model ensembles) to be added back into the offline alignment setting without losing the practical benefits of the offline setting. Also relevant to our work, Moskovitz et al. (2023) focus on composite rewards in the online setting, with the goal of achieving high task reward while ensuring that every individual component is above some threshold, also by applying a Lagrangian relaxation. In this work, we also consider multiple reward models, but we only focus on cases where there is no known, obvious reward decomposition. Finally, the question of using a small amount of offline data to learn high-quality policies, instead of online access to reward feedback, has also been widely studied in the offline reinforcement learning (RL) literature.
The predominant approach here is to use pessimism, that is, to learn a policy with the highest reward under all plausible environment models consistent with the data, with an extensive theoretical (Liu et al., 2020; Zanette et al., 2021; Xie et al., 2021) and empirical (Kumar et al., 2020; Cheng et al., 2022; Yu et al., 2021) body of supporting work. The key insight in this literature is that without pessimism, the RL algorithm learns undesirable behaviors which are not explicitly ruled out in the training data, and pessimism provides a robust way of preventing such undesirable extrapolations, while still preserving generalization within the support of the data.

## 3 Preliminaries

We begin with a brief review of Direct Preference Optimization (DPO) (Rafailov et al., 2023).

### 3.1 The preference alignment problem

Let x be an input prompt, y ∼ πθ(· | x) be the language model policy πθ's response to x, and πref(y | x) a reference policy (such as a pretrained or finetuned language model that is high-performing, but not yet aligned, and often used as the starting point for optimization). Given some reward function r⋆(x, y), the goal of alignment is to solve for the aligned policy πθ⋆(y | x) that maximizes the following RLHF objective, i.e.,

$$\pi_{\theta^\star}(y \mid x) = \arg\max_{\pi_\theta} \; \mathbb{E}_{\mu(x)}\Big[\mathbb{E}_{\pi_\theta(y \mid x)}[r^\star(x, y)] - \beta D_{\mathrm{KL}}[\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)]\Big], \quad (1)$$

where µ(x) is a fixed distribution over prompts, and the KL-divergence term keeps the aligned policy close to the anchoring reference policy, πref(y | x). Here, the reward function r⋆ is typically not known in advance, but rather inferred from collected human preference data in the form of (x, yw, yℓ), where x is the prompt, yw is the winning, or preferred, response, and yℓ is the losing, or dispreferred, response.
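As a concrete illustration of the KL-regularized objective in Eq. (1), the following minimal sketch (a hypothetical tabular setting of our own with a handful of enumerable responses and a single fixed prompt, not the paper's implementation) evaluates the objective directly and checks that the Gibbs policy π(y) ∝ πref(y)·exp(r(y)/β), whose closed form is recalled in §3.2, scores at least as well as other candidates:

```python
import math

def rlhf_objective(pi, pi_ref, r, beta):
    """E_pi[r] - beta * KL(pi || pi_ref), for a tabular policy over one prompt."""
    exp_reward = sum(pi[y] * r[y] for y in pi)
    kl = sum(pi[y] * math.log(pi[y] / pi_ref[y]) for y in pi if pi[y] > 0)
    return exp_reward - beta * kl

def gibbs_policy(pi_ref, r, beta):
    """Closed-form maximizer of Eq. (1): pi*(y) ∝ pi_ref(y) * exp(r(y) / beta)."""
    w = {y: pi_ref[y] * math.exp(r[y] / beta) for y in pi_ref}
    z = sum(w.values())
    return {y: w[y] / z for y in w}

# Toy example (ours): three candidate responses, uniform reference policy.
pi_ref = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
r = {"a": 1.0, "b": 0.0, "c": -1.0}
beta = 0.5

pi_star = gibbs_policy(pi_ref, r, beta)
pi_greedy = {"a": 1.0, "b": 0.0, "c": 0.0}  # all mass on the highest-reward response

# The Gibbs policy beats both the reference and the greedy (KL-expensive) policy.
assert rlhf_objective(pi_star, pi_ref, r, beta) >= rlhf_objective(pi_ref, pi_ref, r, beta)
assert rlhf_objective(pi_star, pi_ref, r, beta) >= rlhf_objective(pi_greedy, pi_ref, r, beta)
```

The β knob interpolates between these extremes: as β → 0 the Gibbs policy approaches the greedy one, and as β → ∞ it collapses back onto πref.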
A common approach is to assume that (y1, y2) follow a Bradley-Terry model (Bradley and Terry, 1952), under which the probability that y1 is preferred to y2 given the reward function r⋆ and prompt x is p⋆(y1 ≻ y2 | x) = σ(r⋆(x, y1) − r⋆(x, y2)), where σ(·) is the sigmoid function and ≻ denotes preference. Under this model, we can use the preference data (x, yw, yℓ) ∈ Dpref to estimate r⋆ via maximum likelihood estimation, i.e.,

$$\hat{r} \in \arg\min_{r} \; \mathbb{E}_{(y_w, y_\ell, x) \sim \mathcal{D}_{\mathrm{pref}}}\big[-\log \sigma(r(x, y_w) - r(x, y_\ell))\big]. \quad (2)$$

With r̂ in hand, Eq. (1) can be optimized using standard reinforcement learning algorithms (Schulman et al., 2017; Stiennon et al., 2020; Christiano et al., 2017).

### 3.2 Direct preference optimization

DPO is a simple approach for offline policy optimization that uses preferences to directly align the language model policy, without training an intermediate reward model. Specifically, DPO leverages the fact that the optimal solution to the KL-constrained objective in (1) takes the form (Korbak et al., 2022)

$$\pi_{\theta^\star}(y \mid x) = \frac{1}{Z(x)} \pi_{\mathrm{ref}}(y \mid x) \exp\Big(\frac{1}{\beta} r^\star(x, y)\Big), \quad (3)$$

where $Z(x) = \sum_y \pi_{\mathrm{ref}}(y \mid x) \exp\big(\frac{1}{\beta} r^\star(x, y)\big)$ is the partition function. DPO reparameterizes the true reward function r⋆ in terms of the optimal policy πθ⋆ that it induces, i.e.,

$$r^\star(x, y) = \beta \log \frac{\pi_{\theta^\star}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x). \quad (4)$$

Under the Bradley-Terry model, the likelihood that y1 ≻ y2 can then be written as

$$p^\star(y_1 \succ y_2 \mid x) = \sigma\Big(\beta \log \frac{\pi_{\theta^\star}(y_1 \mid x)\, \pi_{\mathrm{ref}}(y_2 \mid x)}{\pi_{\theta^\star}(y_2 \mid x)\, \pi_{\mathrm{ref}}(y_1 \mid x)}\Big), \quad (5)$$

where now πθ can be directly estimated on Dpref following the objective in (2), in place of the intermediate reward model r̂, i.e., πθ̂(y | x) ∈ argmin_{πθ} Ldpo(πθ; Dpref), where

$$\mathcal{L}_{\mathrm{dpo}}(\pi_\theta; \mathcal{D}_{\mathrm{pref}}) = \mathbb{E}_{(y_w, y_\ell, x) \sim \mathcal{D}_{\mathrm{pref}}}\Big[-\log \sigma\Big(\beta \log \frac{\pi_\theta(y_w \mid x)\, \pi_{\mathrm{ref}}(y_\ell \mid x)}{\pi_\theta(y_\ell \mid x)\, \pi_{\mathrm{ref}}(y_w \mid x)}\Big)\Big]. \quad (6)$$

As described in §1, optimizing Ldpo offers two main advantages over using online RL for Eq.
(1): (a) there is no need for a separate reward model, and (b) Ldpo is a supervised objective that can be trained offline, which allows for a simpler training setup than online learning. Still, Ldpo also has certain pitfalls, as we analyze next.

## 4 Pitfalls of direct preference optimization

As argued by Azar et al. (2024), DPO strongly relies on the Bradley-Terry assumption, which leads to surprising and undesirable consequences when trained on finite preference data. The root issue is that if we have any two responses y1 and y2 where p⋆(y1 ≻ y2 | x) = 1, then the Bradley-Terry model dictates that r⋆(y1) − r⋆(y2) = +∞, and therefore πθ⋆(y2 | x) = 0 for any finite KL-regularization strength β. We can illustrate this phenomenon on a broader level with the following example:

Assumption 1. Suppose we are given a preference dataset of (context-free) pairs Dpref = {(yw_i, yℓ_i)}, i = 1, . . . , n, where the pairs (yw_i, yℓ_i) are mutually disjoint in both elements. Further suppose that we optimize the DPO objective on Dpref with a single parameter θy for each y.

Proposition 1. Under Assumption 1, for any (y, y′) such that y = yw_i and y′ = yℓ_i for some i, we have πθ⋆(y)πref(y′) / (πθ⋆(y′)πref(y)) → ∞ for all global minimizers πθ⋆ of the DPO objective in (6), for any β > 0.

Corollary 1. Under Assumption 1, further assume that 0 < πref(y) < 1 for all y. Then πθ⋆ is a global minimizer of the DPO objective in (6) iff πθ⋆(C(yℓ)^c) = 1 with πθ⋆(yw_i) > 0 for all i ∈ [n], where C(yℓ)^c is the complement of the set of all responses y that appear as a dispreferred yℓ_i for any i ∈ [n].

Additional analysis of the training dynamics of DPO is provided in §7.
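The degeneracy behind Proposition 1 and Corollary 1 is visible in a few lines of code. In this toy example of ours (β = 1, a single context, three responses), the DPO loss depends only on the likelihood *ratio* between yw and yℓ, so a policy that all but abandons yw can score exactly as well as one that promotes it:

```python
import math

def dpo_loss(pi, pi_ref, yw, yl, beta=1.0):
    """Per-example DPO loss: -log sigmoid(beta * log [pi(yw) pi_ref(yl) / (pi(yl) pi_ref(yw))])."""
    margin = beta * math.log(pi[yw] * pi_ref[yl] / (pi[yl] * pi_ref[yw]))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

pi_ref = {"yw": 0.4, "yl": 0.4, "other": 0.2}

# A policy that boosts the preferred response...
pi_good = {"yw": 0.90, "yl": 0.009, "other": 0.091}
# ...and one that nearly abandons it, dumping mass on an unseen response,
# while keeping the same yw/yl likelihood ratio (100x in both cases).
pi_degen = {"yw": 0.009, "yl": 0.00009, "other": 0.99091}

# The DPO loss only sees the ratio, so both policies score identically.
assert abs(dpo_loss(pi_good, pi_ref, "yw", "yl") - dpo_loss(pi_degen, pi_ref, "yw", "yl")) < 1e-9
# And driving pi(yl) -> 0 pushes the loss toward its global minimum of 0.
assert dpo_loss(pi_degen, pi_ref, "yw", "yl") < 0.02
```

Nothing in the objective penalizes the second policy for placing 99% of its mass on a response that never appeared in the data.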
A significant implication of this result is that the set of global optima of the DPO loss includes policies that can shift nearly all probability mass to responses that never appear in the training set, and even assign near-zero probability to all of the training data responses that do in fact correspond to winning generations, yw, a phenomenon that has been observed empirically and analyzed theoretically (Pal et al., 2024; Rafailov et al., 2024a;b; Tajwar et al., 2024).² Stated differently, Corollary 1 implies that any θ merely satisfying πθ(yℓ_i) = 0 with πθ(yw_i) > 0 for all i ∈ [n] is a global minimizer of the DPO objective in this setting. Though simplistic, the scenario in Assumption 1 is closer to reality than might first be appreciated: in many practical situations we can expect the finite-sample preference data to contain one (or at most a few) preference annotations per example (x, y1, y2), while the policies πθ can have billions of parameters (≫ n). It is important to note that the supposed regularization term β does not help: it can limit the speed at which the optimizer reaches the degenerate solution, but it cannot alter the final destination. This issue could be viewed as a classic instance of overfitting, but as opposed to overpredicting responses within the training set, we might overfit to almost never producing anything like the good responses that do appear within the training set. Furthermore, without additional regularization (beyond β), we can expect this degeneration to occur in typical preference datasets.

**Takeaways: Pitfalls of direct preference optimization.** The DPO objective only requires that the likelihood of the preferred response is relatively higher than that of the dispreferred response.
A peculiarity of this objective is that when preference data is disjoint (i.e., preferred responses never appear as dispreferred responses, and vice versa), certain types of policies (e.g., over-parameterized ones) will learn to assign 0 probability to all dispreferred responses, while requiring merely non-zero probability for all preferred responses. This includes policies that assign near-zero probability to preferred responses, and place all mass on (often degenerate) generations outside the training set.

² In fact, the same conclusions can also be inferred from Proposition 1 in the concurrent work of Rafailov et al. (2024a). While their assumptions are slightly different, similarly to us, they also target scenarios in which the training data exhibits certain (realistic) deficiencies.

### 4.1 Case study: degenerate DPO optima in a BoW model

For intuition on why the DPO global optima can include policies where π(yw) may be nearly 0 for all yw in the training set, consider the simplified case where the policy is a bag-of-words model, πθ(y) ∝ exp(c(y)·θ), where c(y) is a vector of token counts in y and θi is the unnormalized log-probability of token i. Then we can formally show that DPO optimization monotonically decreases an upper bound π̄θ on the probability of the preferred completion: πθ(t)(yw) ≤ π̄θ(t)(yw) ≤ π̄θ(t−1)(yw).

Proposition 2. Let yw, yℓ ∈ V^n be preferred versus dispreferred outputs of length n, respectively, with πref(yw), πref(yℓ) > 0 and corresponding count vectors c(yw), c(yℓ). Let log πθ(y) = c(y)·θ − nZ(θ) for Z(θ) = log Σ_{i=1}^{|V|} e^{θi}, with upper bound log π̄θ(y) = c(y)·θ − n max_j θj. Let θ(t) represent the parameters of π after t steps of gradient descent on Ldpo({yℓ, yw, x}), with θ(0) = 0. Then, we have that πθ(t)(yw) ≤ π̄θ(t)(yw) ≤ π̄θ(t−1)(yw) for all t, with strict inequality when ‖c(yw) − c(yℓ)‖₀ > 1.

If πθ(t)(yw) decreases in t, what other strings become more probable?
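Numerically, the answer is easy to see. The following toy simulation (our own sketch, with a 3-token vocabulary, a uniform reference policy, and one preference pair that shares a token) runs gradient descent on the DPO loss for a bag-of-words policy: π(yw) eventually collapses, while an out-of-sample string built entirely from the single most "winning" token absorbs the mass.

```python
import math

V = ["a", "b", "c"]

def counts(y):
    """Bag-of-words token counts c(y)."""
    return [y.count(t) for t in V]

def seq_prob(y, theta):
    """pi_theta(y) = exp(c(y)·theta - n*logsumexp(theta)) for a length-n BoW policy."""
    z = math.log(sum(math.exp(t) for t in theta))
    return math.exp(sum(ci * ti for ci, ti in zip(counts(y), theta)) - len(y) * z)

yw, yl = "ab", "ac"  # preferred / dispreferred; they share the token "a"
delta = [cw - cl for cw, cl in zip(counts(yw), counts(yl))]  # c(yw) - c(yl)

beta, lr = 1.0, 1.0
theta = [0.0, 0.0, 0.0]
p_w_init = seq_prob(yw, theta)  # uniform start: 1/9
for _ in range(2000):
    # Gradient step on -log sigmoid(beta * delta·theta); with a uniform reference
    # policy and equal-length responses, the reference and log-Z terms drop out.
    m = beta * sum(d * t for d, t in zip(delta, theta))
    g = beta / (1.0 + math.exp(m))  # = beta * sigmoid(-m)
    theta = [t + lr * g * d for t, d in zip(theta, delta)]

# pi(yw) has fallen below its initial value...
assert seq_prob(yw, theta) < p_w_init
# ...while "bb" (all mass on the token maximizing c(yw) - c(yl)) dominates.
assert seq_prob("bb", theta) > 0.9
```

The loss keeps improving throughout, which is precisely the failure mode: the optimizer is rewarded for widening the yw/yℓ gap even as the absolute probability of yw decays.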
In the following proposition, we show that under the bag-of-words model, DPO optimization moves probability mass away from yw to sequences that contain only the tokens that maximize the difference between yw and yℓ.

Proposition 3. Let yw and yℓ be preferred versus dispreferred outputs of length n. Let Δ = c(yw) − c(yℓ) be the difference in unigram counts. Let ŷ = [i, i, . . . , i], for i ∈ arg max Δ, with ‖c(ŷ)‖₁ = n. Then πθ(t)(yw) / πθ(t)(ŷ) = τ(t)^{−k} for some k ≥ 0 and some non-decreasing τ: Z₊ → R₊. We have k = 0 when c(yw) = c(ŷ), and k > 0 otherwise, with k as large as n (when c(yw) is dense) and as small as 2 (when Δ is sparse). This implies that when yw and yℓ are similar, πθ(yw) will degrade more rapidly. Early stopping will therefore have to trade off between reaching the degenerate solution on such cases, and underfitting other cases in which yw and yℓ are more distinct. Related findings are also reported in Razin et al. (2024).

**Takeaways: BoW model case study.** For a simple BoW model, we show a concrete setting in which DPO causes the probabilities of preferred outputs to catastrophically plummet, and instead puts all probability mass on generations that simply maximize token-level differences between preferred and dispreferred examples.

## 5 Uncertainty-aware reward model distillation

As discussed in the previous section, a core issue in preference optimization is that the true preference distribution p⋆(y1 ≻ y2 | x) is not known. Attempting to infer it from finite-sample preference data (which may further be biased or out-of-distribution with respect to the target domain) can then result in a failure to learn reasonable policies. In this section, we propose a regularized, pessimistic approach to direct preference optimization that brings explicit reward modeling back into the picture through a model distillation objective, while still maintaining the simplicity and efficiency of offline alignment methods.
### 5.1 Reward model distillation

Suppose for the moment that the reward function r⋆ was in fact known, and did not have to be inferred from sampled preference data. Under this setting, we can then construct a straightforward and efficient offline optimization procedure that is similar in spirit to DPO, but no longer relies directly on a preference dataset. Concretely, given unlabeled samples (x, y1, y2) ∼ ρ (where the number of samples can be potentially unlimited), we can define a simple squared distillation loss that matches the pairwise differences of the explicit reward r⋆ with those of the implicit policy reward defined by πθ, i.e.,

$$\mathcal{L}_{\mathrm{distill}}(r^\star, \pi_\theta; \rho) = \mathbb{E}_{\rho(x, y_1, y_2)}\left[\left(r^\star(x, y_1) - r^\star(x, y_2) - \beta \log \frac{\pi_\theta(y_1 \mid x)\, \pi_{\mathrm{ref}}(y_2 \mid x)}{\pi_\theta(y_2 \mid x)\, \pi_{\mathrm{ref}}(y_1 \mid x)}\right)^2\right]. \quad (7)$$

Due to symmetry, here it does not matter if (y1, y2) = (yw, yℓ), i.e., preferred vs. dispreferred, or vice versa. Notably, a similar squared-loss objective has also been recently proposed and shown to be effective in independent work by Mao et al. (2024) and Gao et al. (2024b). We now first provide further motivation for this approach, before extending it to pessimistic variants in the next section. Intuitively, the distillation loss in (7) seeks to exactly match differences in reward model scores across all generation pairs (x, y1, y2). It is easy to see that under the Bradley-Terry model, this is equivalent to matching the strength of the preference relationship y1 ≻ y2. Furthermore, by only matching differences, we can still conveniently ignore the log partition term, log Z(x), in the implicit reward formulation for πθ shown in (4), as it is constant across different y for any given x. Finally, similar to the motivation in DPO, we can show that minimizing Ldistill(r⋆, πθ; ρ) indeed results in an optimally aligned policy πθ⋆, as long as the data distribution ρ has sufficient support over the space of prompts and responses. Theorem 1.
Let Y denote the set of all possible responses for any model πθ. Assume that supp(πref(y | x)) = Y, i.e., the reference policy may generate any outcome with non-zero probability. Further, let supp(ρ(x, y1, y2)) = supp(µ(x)) × Y × Y. Let πθ⋆(y | x) ∈ argmin_{πθ} Ldistill(r⋆, πθ; ρ) be a minimizer, over all possible policies, of the implicit reward distillation loss in (7), for which r⋆(x, y) is assumed to be deterministic and finite everywhere. Then for any β > 0, πθ⋆ also maximizes the alignment objective in (1).

The theorem holds for a broad class of data distributions ρ(x, y1, y2), and makes no assumptions on r⋆. For example, it is no longer necessary for r⋆ to be defined using a Bradley-Terry model. Notably, the theorem applies for reward models that are much larger and potentially better than the policy model, as the reward model is not used at test time (in contrast to DPO, which ties the size of the policy model used for generation to the size of the implicit reward model that is trained on preferences). In fact, this result can also be seen as a strict generalization of the IPO framework of Azar et al. (2024), which corresponds to the special case r⋆(x, y) = 1{y = yw}, if labeled pairs (x, yw, yℓ) are provided instead of the unlabeled pairs (x, y1, y2). Of course, the true reward r⋆ is usually not known in practice. Still, as in standard RLHF, we can construct good proxies by using the preference data to identify plausible target reward models rtgt, further guided by any amount of regularization and inductive bias that we desire. Moreover, prior work has found that such explicitly trained reward models are more accurate and generalize better than the implicit reward model defined in (4) that is learned by DPO (Tang et al., 2024a; Lin et al., 2024).
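As a sanity check of the distillation loss in (7) and the guarantee above, the following tabular sketch (ours, with a single fixed prompt and uniform weighting over all response pairs in place of ρ) verifies that the Gibbs policy induced by the target reward drives the loss to zero, while the reference policy does not:

```python
import math

def distill_loss(r, pi, pi_ref, beta):
    """L2 loss of Eq. (7): match pairwise target-reward differences to
    implicit-reward differences, averaged over all response pairs."""
    ys = list(pi_ref)
    total, count = 0.0, 0
    for y1 in ys:
        for y2 in ys:
            target = r[y1] - r[y2]
            implicit = beta * math.log(pi[y1] * pi_ref[y2] / (pi[y2] * pi_ref[y1]))
            total += (target - implicit) ** 2
            count += 1
    return total / count

pi_ref = {"a": 0.5, "b": 0.3, "c": 0.2}
r = {"a": 2.0, "b": 1.0, "c": 0.0}
beta = 0.5

# The policy pi(y) ∝ pi_ref(y) * exp(r(y)/beta) achieves (numerically) zero loss:
# its implicit reward differences equal the target reward differences exactly.
w = {y: pi_ref[y] * math.exp(r[y] / beta) for y in pi_ref}
z = sum(w.values())
pi_star = {y: w[y] / z for y in w}
assert distill_loss(r, pi_star, pi_ref, beta) < 1e-12

# The reference policy does not: its implicit reward differences are all 0.
assert distill_loss(r, pi_ref, pi_ref, beta) > 0.1
```

Note that the log-partition terms cancel inside each pairwise difference, which is why the loss never needs Z(x).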
A natural choice is to thus first learn rtgt on the preference data Dpref using standard methods, and then reuse Dpref to distill πθ, which is similar to classical settings in teacher-based model distillation (Hinton et al., 2015; Romero et al., 2015). Furthermore, as rtgt is a real-valued model, at a bare minimum it is guaranteed to induce a regularized Bradley-Terry preference distribution ptgt(y1 ≻ y2 | x) > 0 for all (x, y1, y2) ∈ X × Y × Y, and thereby avoid the degeneracies identified in §4 for the maximum likelihood estimate under DPO.

**Takeaways: Reward model distillation.** If the true reward r⋆ is known, Eq. (7) provides an objective that distills r⋆ directly into πθ⋆, where πθ⋆ maximizes r⋆ as in Eq. (1). When the true reward r⋆ is unknown, we can use any form of approximate reward model rtgt. Furthermore, any real-valued rtgt naturally adds regularization, and avoids degenerate optima such as the 0-probability solutions identified in §4.

### 5.2 Pessimistic reward model distillation

Choosing a single reward model rtgt for anchoring the LM policy can naturally still lead to degenerate behavior if rtgt is a poor approximation of the true r⋆ that accurately reflects human preferences. However, we can easily extend our framework to handle uncertainty in the right target reward function by defining a confidence set of k ≥ 1 plausible target reward models, S = {r¹tgt, . . . , r^k tgt}, and training πθ⋆(y | x) to maximize the following pessimistic form of the objective in (1):

$$\max_{\pi_\theta} \min_{r^i_{\mathrm{tgt}} \in S} \; \mathbb{E}_{\mu(x)}\Big[\underbrace{\mathbb{E}_{\pi_\theta(y \mid x)}[r^i_{\mathrm{tgt}}(x, y)] - \mathbb{E}_{\pi_{\mathrm{ref}}(y \mid x)}[r^i_{\mathrm{tgt}}(x, y)]}_{\text{advantage over the baseline policy}} - \beta D_{\mathrm{KL}}(\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x))\Big]. \quad (8)$$
In this pessimistic objective, we are no longer optimizing πθ for a single reward, but optimizing πθ to produce generations that are scored favorably on average, even by the worst-case reward model in the set S, relative to the generations of the baseline policy πref.³

Figure 1: A toy illustration of Theorem 2, which states that the optimal πθ⋆ for (8) is the policy in Pβ(S) with the lowest forward KL from πSFT. The set Pβ(S) contains a (potentially infinite) set of policies π1, π2, . . . corresponding to target reward models. Here, πSFT assigns equal mass to yw and yℓ (πSFT(yw) = πSFT(yℓ) = 0.3), πMLE is the MLE solution for the DPO objective, which puts all probability mass on yw (πMLE(yw) = 1.0, πMLE(yℓ) = 0.0), and π3 is the policy in Pβ(S) with the lowest forward KL (π3(yw) = 0.5, π3(yℓ) = 0.05).

When the set S = {r⋆} consists of only the ground-truth reward, the objective (8) is equivalent to standard RLHF (1), up to a constant offset independent of θ. More generally, whenever S includes a good proxy r̃ for r⋆, the pessimistic advantage evaluation ensures that the policy πθ⋆ that maximizes Eq. (8) still has a large advantage over πref under all r ∈ S, including r̃. This use of pessimism to handle uncertainty in the knowledge of the true reward is related to similar techniques in the offline RL literature (Kumar et al., 2020; Cheng et al., 2022). For the objective to be meaningful, however, the set S has to be chosen carefully. When S is small, it might not include any good proxy for r⋆. Conversely, if S is too rich, it forces πθ to be nearly identical to πref, since any deviations from πref might be penalized by some reward model in S. Consequently, we want to design S to be the smallest possible set which contains a reasonable approximation to r⋆.
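This trade-off in choosing S is visible in a tiny numerical example (our own toy, single prompt, uniform reference): when S contains two reward models that disagree about which response is best, the worst-case objective of (8) penalizes any policy that commits to either, while staying at πref guarantees a worst-case value of zero.

```python
import math

def advantage_minus_kl(pi, pi_ref, r, beta):
    """E_pi[r] - E_piref[r] - beta * KL(pi || pi_ref), for one reward model."""
    adv = sum(pi[y] * r[y] for y in pi) - sum(pi_ref[y] * r[y] for y in pi_ref)
    kl = sum(pi[y] * math.log(pi[y] / pi_ref[y]) for y in pi if pi[y] > 0)
    return adv - beta * kl

def pessimistic_value(pi, pi_ref, S, beta):
    """Worst-case regularized advantage over the reward set S, as in Eq. (8)."""
    return min(advantage_minus_kl(pi, pi_ref, r, beta) for r in S)

pi_ref = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
beta = 0.1
# Two plausible reward models that disagree on which response is best.
S = [{"a": 1.0, "b": 0.0, "c": -1.0}, {"a": -1.0, "b": 0.0, "c": 1.0}]

pi_commit_a = {"a": 0.9, "b": 0.05, "c": 0.05}
# Committing to "a" is punished by the second reward model...
assert pessimistic_value(pi_commit_a, pi_ref, S, beta) < 0
# ...while pi_ref itself has zero advantage and zero KL, hence worst-case value 0.
assert abs(pessimistic_value(pi_ref, pi_ref, S, beta)) < 1e-12
```

With a rich, mutually contradictory S as above, the maximizer of (8) is pinned near πref; shrinking S around a reasonable proxy for r⋆ is what lets the policy move.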
Finally, to solve (8), we can reformulate it as an equivalent constrained offline optimization problem, which conveniently admits a similar loss form as (7), as shown below:

Theorem 2 (Pessimistic distillation). Define the constrained minimizer

$$\pi_{\theta^\star}(y \mid x) \in \arg\min_{\pi_\theta \in P_\beta(S)} \; \beta \mathbb{E}_{\mu(x)} D_{\mathrm{KL}}(\pi_{\mathrm{ref}}(\cdot \mid x) \,\|\, \pi_\theta(\cdot \mid x)), \quad (9)$$

where Pβ(S) is the set of all possible policies with implicit reward models that are consistent with any target reward model r^i_tgt ∈ S, i.e., Pβ(S) ≜ {πθi}, i = 1, . . . , |S|, where πθi ∝ πref(y | x) exp((1/β) r^i_tgt(x, y)). Then for any β > 0, πθ⋆ also maximizes the pessimistic alignment objective in (8).

To unpack this result, Theorem 2 stipulates that the πθ⋆ that maximizes the pessimistic objective in (8) is the policy in Pβ(S) that is closest in forward KL-divergence to πref (see Figure 1).⁴ In addition, this policy also maximizes the expected reward of one of the r^i_tgt ∈ S (minus the additional weighted reverse KL-divergence penalty term). Intuitively, the forward KL-divergence term serves the role of biasing the model towards optimizing for reward models that are similar to the implicit reward that πref already maximizes. Otherwise, there might exist a target reward model r^i_tgt ∈ S for which the advantage of πθ⋆ relative to πref will be low, or even negative (a solution that we would like to avoid).

#### 5.2.1 Optimization

The constraint in (9) can be relaxed and approximately optimized by introducing an objective with a Lagrangian-style penalty with strength α > 0 on the same form of distillation loss as (7), i.e.,

$$\min_{\pi_\theta} \; \beta \mathbb{E}_{\mu(x)} D_{\mathrm{KL}}(\pi_{\mathrm{ref}}(y \mid x) \,\|\, \pi_\theta(y \mid x)) + \alpha \min_{r^i_{\mathrm{tgt}} \in S} \mathcal{L}_{\mathrm{distill}}(r^i_{\mathrm{tgt}}, \pi_\theta; \rho), \quad (10)$$

³ It is useful to optimize the advantage as it cancels the effects of constant differences between reward models in S. We are also free to use any baseline policy πbase; we pick πref for simplicity and ease of analysis in §7.
4Note that the objective in (9) minimizes the forward KL-divergence D_KL(πref(· | x) ∥ πθ(· | x)), even though the pessimistic objective in (8) is regularized with the reverse KL-divergence D_KL(πθ(· | x) ∥ πref(· | x)).

where for convenience we divide by α and instead optimize5

Lpdistill(S, πθ; ρ) = min_{r^i_tgt ∈ S} Ldistill(r^i_tgt, πθ; ρ) + γ E_{μ(x)} D_KL(πref(· | x) ∥ πθ(· | x)),   (11)

where γ = β/α. In reality, minimizing (11) for γ > 0 is equivalent to solving the constrained optimization problem in (9) with an implicitly larger set of possible reward models Sγ ⊇ S indexed by γ. More specifically, Sγ also contains all reward models r that are approximately consistent with the anchoring reward models r^i_tgt contained in S, as the following result states.

Proposition 4 (Soft pessimistic distillation). Assume the same conditions as Theorem 1. Then for any 0 < γ < ∞, there exists a λ ≥ 0 such that πθ*(y | x) ∈ argmin_{πθ} Lpdistill(S, πθ; ρ), where πθ* is a minimizer over all possible policies of the objective (9), for the effective reward model set

Sγ = ∪_{r^i_tgt ∈ S} { r : E_{ρ(x,y1,y2)}[ (r^i_tgt(x, y1) − r^i_tgt(x, y2) − r(x, y1) + r(x, y2))^2 ] ≤ λ }.   (12)

As a result, optimizing (11) even when using the singleton S = {rtgt} yields an implicitly pessimistic objective, in which the pessimism is over all reward models r that are consistent up to λ with rtgt.

Takeaways: Pessimistic reward model distillation. It can often be unclear which reward model should be used as a target for policy distillation. A natural optimization objective in the face of uncertainty is to optimize for the worst-case reward out of a set of plausible reward candidates. We show that this can be formulated in a very similar style to the original distillation loss in Eq. (7) by simply adding an additional forward KL penalty term.

5.3 Pessimistic DPO

Proposition 4 can also be leveraged to obtain an alternative, implicitly pessimistic, objective that uses DPO directly instead of distillation.
Consider the following regularized DPO loss:

Lpdpo(πθ; Dpref) = Ldpo(πθ; Dpref) + γ E_{μ(x)} D_KL(πref(· | x) ∥ πθ(· | x)).   (13)

Following a similar analysis as in Proposition 4, we can derive that this implicitly corresponds to maximizing the pessimistic objective in (8) for the reward model set

Sγ = { rπθ : Ldpo(πθ; Dpref) ≤ min_{πθ′} Ldpo(πθ′; Dpref) + λ },   (14)

where rπθ(x, y) ≜ β log πθ(y | x)/πref(y | x) + β log Z(x) is the implicit reward model defined by πθ. Sγ then corresponds to the set of reward models rπθ that are all approximate minimizers of the DPO loss. This includes not only the MLE, but also all other estimators that obtain nearly the same loss. In principle, this can be expected to help ameliorate some of the issues of §4: since driving the reward to ±∞ only marginally decreases the Ldpo loss past a certain point, the set Sγ will also include finite reward functions, |rπθ(x, y)| < ∞, for any γ > 0. These finite rewards would then be preferred if they induce a policy with a smaller (forward) KL-divergence to πref than the degenerate, infinite rewards.

6 Experimental results

The main motivation for reward distillation and pessimism is to increase alignment robustness in challenging settings where it is difficult to learn good policies directly from the preference data. To demonstrate the effectiveness of our approach, we run experiments on the popular TL;DR summarization task (Stiennon et al., 2020; Völske et al., 2017), in which we simulate a scenario where the preference data has a spurious correlation between the length of a summary and whether or not it is preferred.6 Additionally, we show results for an unbiased setting on TL;DR, as well as for an unbiased setting on Anthropic Helpfulness (Bai et al., 2022).

5In practice, we compute and optimize the min over reward models per mini-batch of examples.

6Length has repeatedly been shown to correlate with reward (Singhal et al., 2023; Park et al., 2024).
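As a concrete toy illustration of the regularized p-DPO loss in Eq. (13) (our own sketch, not the paper's implementation; the policies and preference pair are made up), the forward-KL penalty keeps mass on every response that πref supports, so πθ(yℓ) can no longer be driven to zero for free:

```python
import math

# Sketch of the p-DPO loss of Eq. (13) on a discrete toy vocabulary:
# the standard DPO logistic loss plus a forward-KL penalty
# gamma * KL(pi_ref || pi_theta).

def pdpo_loss(pi_theta, pi_ref, prefs, beta=1.0, gamma=0.5):
    """pi_theta, pi_ref: dicts y -> prob; prefs: list of (y_w, y_l) pairs."""
    dpo = 0.0
    for y_w, y_l in prefs:
        margin = beta * (math.log(pi_theta[y_w] / pi_ref[y_w])
                         - math.log(pi_theta[y_l] / pi_ref[y_l]))
        dpo += math.log(1.0 + math.exp(-margin))  # -log sigmoid(margin)
    forward_kl = sum(p * math.log(p / pi_theta[y]) for y, p in pi_ref.items())
    return dpo / len(prefs) + gamma * forward_kl

pi_ref = {"a": 0.4, "b": 0.4, "c": 0.2}
prefs = [("a", "b")]                             # "a" preferred over "b"
degenerate = {"a": 0.98, "b": 0.01, "c": 0.01}   # near-infinite implicit reward
moderate = {"a": 0.6, "b": 0.25, "c": 0.15}
```

With γ = 0 (plain DPO), the near-degenerate policy attains the lower loss; with γ = 0.5 the forward-KL term flips the ordering in favor of the moderate policy, mirroring the finite optima discussed in §7.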
6.1 Experimental setup

We first train an oracle reward model on the TL;DR preference data training set (Stiennon et al., 2020) and relabel all preference pairs with this oracle. This enables us to use the oracle reward model for evaluation, without worrying about the gap to true human preferences. After relabeling, longer responses (where "longer" is defined as y1 having at least 10% more tokens than y2) are preferred in 61% of the examples. To test the effect of a spurious correlation on preference-based policy optimization, we select a training set of 30K examples from the relabeled data such that the longer output is preferred in a fraction ρ of examples, with ρ ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. Each such training set is denoted Dρ. At each Dρ, we compare our approach to DPO (Rafailov et al., 2023) and IPO (Azar et al., 2024), which are currently the most commonly used offline alignment methods. We test the following variants of distillation and pessimism:

Distilled DPO (d-DPO): Trains a reward model rρ on Dρ, and then optimizes Ldistill(rρ, πθ; ρ).

Pessimistic DPO (p-DPO): A pessimistic version of DPO as described in §5.3, trained on Dρ.

Pessimistic Distilled DPO (dp-DPO): Combines the above two by training a reward model rρ on Dρ and optimizing the pessimistic distillation objective (Eq. (11)) with confidence set S = {rtgt}.

Pessimistic Ensemble DPO (e-DPO): To create ensembles of reward models, we subsample from each Dρ five preference datasets Dρ,b, for b ∈ B = {0.2, 0.4, 0.5, 0.6, 0.8}, such that the fraction of pairs where the longer response is preferred is b, and train reward models rρ,b on those subsets. Consequently, sensitivity to length should vary across ensemble members. We then apply the same procedure as dp-DPO above, with a confidence set Sρ = {rρ,b : b ∈ B}.

All reward models and policies are initialized from PaLM 2-XS (Anil et al., 2023).
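The e-DPO subsampling step above can be sketched as follows. This is an illustrative reconstruction of the data-construction logic only (function names and the synthetic pairs are ours, not the paper's pipeline):

```python
import random

# Sketch of constructing the e-DPO ensemble datasets D_{rho,b}: subsample a
# relabeled preference set so that the longer response wins in exactly a
# fraction b of the pairs; a separate reward model r_{rho,b} would then be
# trained on each subset. Each pair is (y_w, y_l, longer_preferred), where
# longer_preferred marks that y_w has at least 10% more tokens than y_l,
# following the paper's definition of "longer".

def subsample_with_bias(pairs, b, size, seed=0):
    rng = random.Random(seed)
    longer_wins = [p for p in pairs if p[2]]
    shorter_wins = [p for p in pairs if not p[2]]
    n_long = round(b * size)
    subset = (rng.sample(longer_wins, n_long)
              + rng.sample(shorter_wins, size - n_long))
    rng.shuffle(subset)
    return subset

# One subset per target bias in B = {0.2, 0.4, 0.5, 0.6, 0.8}:
B = [0.2, 0.4, 0.5, 0.6, 0.8]
```

Training one reward model per subset yields ensemble members whose length sensitivity varies by design, which is the property the confidence set Sρ relies on.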
Policies also go through a supervised finetuning step on human-written summaries from the original TL;DR training set (Völske et al., 2017) prior to alignment; we term this policy πSFT. We evaluate performance by sampling summaries for test set prompts, evaluating the average reward according to the oracle reward model, and computing the advantage in average reward relative to πSFT (before alignment). We train policies for 10,000 steps with batch size 16 and learning rate 10^{−6}, and reward models for 3K steps with batch size 64 and learning rate 4 × 10^{−6}. We use the validation set for model selection during policy training and to choose the following hyperparameters. For all DPO variants, we sweep over β ∈ {0.01, 0.1, 1, 3, 10, 30, 100}. For IPO, we sweep over τ ∈ {0.01, 0.1, 1, 3, 5, 10, 25}. For all pessimistic methods, we anneal γ = β/α linearly from 10^{−4} to 10^{−2} over the 10K training steps (however, in later experiments performed with e-DPO, we found that annealing does not affect performance and a constant γ leads to similar results; see Figure B.5).

6.2 Results

We present the results of our experiment in Figure 2. As can be seen in the plot, the more challenging setting is ρ < 0.5, which corresponds to a sample of preference annotations in which shorter outputs are generally preferred. This distribution shift is more difficult because, as mentioned, the oracle reward model (trained on human annotations) has a bias in favor of longer outputs (Singhal et al., 2023). Nevertheless, we obtain sizable improvements over the reference policy πSFT for all length bias values. All approaches that invoke distillation (d-DPO, e-DPO, dp-DPO) outperform IPO and DPO (p < .01 by a Wald test) for ρ ≤ 0.5, where shorter responses are preferred. Pessimistic ensemble DPO (e-DPO) performs particularly well in these settings, generally outperforming all methods that use a single reward model.
When longer responses are preferred (ρ > 0.6), single reward distillation (d-DPO) leads to the highest performance, significantly outperforming both DPO and IPO (p < .01 by a Wald test). Interestingly, p-DPO does not provide empirical benefits relative to the distillation-based methods, indicating that the distillation loss itself is quite important. For the effect of hyperparameter selection, see Figure B.4. In DPO-based methods, the optimal value of β is inversely correlated with the bias; in IPO, the same holds for the τ hyperparameter.

To better understand the utility of reward ensembles in e-DPO, in particular when ρ < 0.5, we examine the role of each reward model in the ensemble across different biases. Specifically, for e-DPO, we identify for each example, throughout training, the reward model rρ,b that best matches the implicit reward of the current policy, i.e., the reward model for which Ldistill is minimized on that example (see Eq. (7) and (11)). We find that when the policy is trained on data where shorter responses are preferred (ρ < 0.5), the reward model that best matches the policy often has the opposite bias (b is high), and vice versa.

Figure 2: Main results, showing the oracle reward compared to the initial finetuned policy (the oracle reward of the initial finetuned policy is 1), as a function of the length bias ρ ∈ {0.2, . . . , 0.8}. Error bars correspond to bootstrap 95% confidence intervals for finite sample variance. Ensemble DPO (e-DPO) is significantly better than DPO and IPO in the challenging setup where shorter responses are preferred (ρ ≤ 0.5), and is generally the best-performing method overall in this regime. Distilled DPO (d-DPO) performs best when longer responses are preferred (ρ > 0.6).
Thus, the success of e-DPO may be explained by its ability to distill from reward models that do not suffer from the bias in the policy training data, which is particularly helpful when ρ ≤ 0.5, as this bias is also not shared by the oracle RM. We provide the full distribution over reward models for all ρ and β in Appendix B.3. Overall, these results demonstrate the efficacy of training a policy by distilling from a reward model in the presence of distribution shifts, and show that careful design of an ensemble to mitigate spurious correlations can lead to further performance gains.7

6.3 Additional results in an unbiased setting

To test the ability of our method to perform well on preference tasks where no bias is present, we next run experiments on the Anthropic Helpfulness dataset (Bai et al., 2022). We use a Gemini 1.0 Ultra (Gemini Team, 2024) LLM-as-a-judge model to evaluate win rates of the policies over both the SFT starting point and the best DPO baseline. As shown in Table 1, our distillation objectives can also provide modest gains in this unbiased setting. Concretely, e-DPO's win rate against the SFT policy is 65.8%, while DPO's win rate is 64.2%. Moreover, comparing e-DPO and DPO directly, e-DPO wins in 49.7% of the cases, while DPO wins in 46.9% of the cases (the rest are considered ties, with no preference relation).

Takeaways: Experimental results. Reward model distillation, and specifically pessimistic reward model distillation with an ensemble of reward models, leads to improved robustness on tasks where bias is present in the preference data. In addition, reward model distillation yields small improvements in performance even in unbiased settings, making it a simple but compelling algorithmic modification to offline training.
7 Theoretical analysis

This section characterizes the solutions offered by pessimistic DPO and distillation to the issues identified in §4, focusing on the simplified scenario in which we optimize with respect to a single preference pair (yw, yℓ).

7We note that we also experimented with an ensemble whose members are different checkpoints from the training of a single reward model on the preference data; however, we did not observe any empirical gains from this form of ensemble.

Table 1: Side-by-side win rates on the Helpfulness dataset (with a Gemini 1.0 Ultra evaluator).

Method           %Wins   %Ties   %Losses
e-DPO vs. SFT    65.8    1.3     32.9
d-DPO vs. SFT    65.6    1.0     33.3
DPO vs. SFT      64.2    1.0     34.8

Method           %Wins   %Ties   %Losses
e-DPO vs. DPO    49.7    3.4     46.9
d-DPO vs. DPO    48.2    3.6     48.2

In its Lagrangian formulation, pessimistic DPO adds a forward KL term to the DPO objective (§5.3). Here, we seek to better analyze how this additional term affects the optimal policy. For the sake of analysis, we assume that the preference annotations are sampled from the reference distribution, (x, y1, y2) ∼ μ(x) πref(y1 | x) πref(y2 | x). Then a finite-sample approximation of the forward KL term (up to additive constants) is −Σ_{(yw,yℓ)∈Dpref} (log πθ(yℓ) + log πθ(yw)). By applying this finite-sample approximation, p-DPO has a finite optimum, unlike DPO, as shown in Proposition 1. Note that this analysis is limited in two ways: (1) as mentioned, we compute the KL term over the completions in the preference data; (2) we directly optimize the probability ratios ψw = πθ(yw)/πref(yw) and ψℓ = πθ(yℓ)/πref(yℓ), rather than optimizing them jointly through the parameters. For a sufficiently expressive πθ, however, this approximation captures the behavior of the two algorithms reasonably well.

Proposition 5. Let L̂pdpo represent a finite-sample approximation to Lpdpo with the empirical forward KL term Ω̂(Θ).
For a fixed π̂θ(yw_i) and α > 1, the argmin of L̂pdpo over πθ(yℓ_i) is min(1 − π̂θ(yw_i), π̂θ(yℓ_i)), with

log π̂θ(yℓ_i) = −β^{−1} log(α − 1) + log π̂θ(yw_i) + log [πref(yℓ_i)/πref(yw_i)].

The optimum in Proposition 5 corresponds to log(ψw/ψℓ) = β^{−1} log(α − 1). Recall that IPO (Azar et al., 2024) seeks to assign a constant value to this ratio by minimizing (log(ψw/ψℓ) − τ^{−1})^2; the (unconstrained) optima are identical for τ^{−1} := β^{−1} log(α − 1), but the loss surfaces are different (see further analysis of this in §7.2). DPO sets πθ(yℓ_i) → 0, as shown in Corollary 1; this is due not only to competition from πθ(yw_i), but also to DPO penalizing positive probability on yℓ_i. Analysis of the distilled loss gives a similar result:

Proposition 6. For any fixed π̂θ(yw_i) and β > 0, the argmin of the distilled DPO objective in (7) is min(1 − π̂θ(yw_i), π̂θ(yℓ_i)), with

log π̂θ(yℓ_i) = β^{−1} (rt(x, yℓ_i) − rt(x, yw_i)) + log π̂θ(yw_i) + log [πref(yℓ_i)/πref(yw_i)].

While the setting is simplistic, the results are comforting: here, the additional regularization effects of both distillation and pessimism (in the case of p-DPO) clearly help to avoid degenerate optima.

7.2 Transitive closure: p-DPO vs. IPO

As pointed out in §7.1, both p-DPO and IPO target a constant ratio for log(ψw/ψℓ). Despite the similar form of their optima, however, the loss surfaces of the two objectives differ in notable ways. To see this, we consider a simplified setting with three possible outputs, y1, y2, y3. We observe either D = {(y1 ≺ y2), (y2 ≺ y3)} or D′ = D ∪ {(y1 ≺ y3)}. If we treat this problem as a multi-arm bandit, the goal is to assign a weight to each arm, which we denote ψi = log πθ(yi | x) + Zx, with Zx an underdetermined log-partition function.

Proposition 7. Let D = {(yi ≺ yi+1) : i = 1, 2, . . . , n − 1} for n > 2. Let D′ be the dataset arising from the transitive closure of D. Assume πref is indifferent to all (yi, yj). Let Δψ(D) = max_i ψ_i^{(D)} − min_i ψ_i^{(D)}.
Then Δψ(D) = (n − 1)τ^{−1} > Δψ(D′) = 2(n − 1)τ^{−1}/n.

Intuitively, the observation of (y1 ≺ y3) should increase confidence that y3 is superior to y1, but in IPO it has the opposite effect, drawing their scores closer together. While pessimistic DPO also has a target ratio between each preference pair, its loss surface is different: in particular, it does not increase quadratically as we move away from the target. We find empirically that pessimistic DPO is robust to the transitive closure of preference annotations in the multi-arm bandit setting, as shown in Figure B.2. As discussed above, DPO will set ψ1 → −∞ because y1 is never preferred. Specifically, in our experiments we solve the p-DPO and IPO objectives for both D = {(y1 ≺ y2), (y2 ≺ y3)} and D′ = D ∪ {(y1 ≺ y3)}, solving with respect to {πθ(yi)}. IPO is solved analytically as a quadratic program; for p-DPO, we use projected gradient descent. We consider β ∈ {1, 3, 10, 30} and α ∈ {5, 10, 20, 50, 100, 1000}. As shown in Figure B.2, there are significant differences between the IPO solutions with and without transitive closure, while for p-DPO these differences are imperceptible.

Takeaways: Theoretical analysis. Both pessimistic DPO and reward model distillation avoid the degenerate optima of DPO analyzed in §4, primarily by adding additional sources of regularization to the objective. Beyond these more stable optima, we also show how the loss surface of p-DPO can lead to more favorable outcomes than IPO with respect to modeling challenging transitive preferences.

8 Conclusion

LM alignment is crucial for deploying safe and helpful assistants, but is difficult due to the lack of access to perfect preference oracles. We presented a thorough theoretical analysis of some of the degeneracies that DPO is susceptible to when learning from sampled human preference data.
Furthermore, our findings suggest that explicit reward modeling remains a powerful vehicle for introducing regularization into post-training. By distilling the reward assigned by a single explicit reward model, or a family of explicit reward models, directly into the implicit reward maximized by our policies using offline data, we demonstrated that we can achieve improved robustness to variations in preference dataset quality, while maintaining the simplicity of offline alignment frameworks. Finally, reward model distillation also results in modest but consistent improvements in performance even in unbiased settings, making it an overall compelling algorithmic modification to offline training.

Acknowledgements

We thank the anonymous reviewers, Alexander D'Amour, and Chris Dyer for helpful comments and feedback on the manuscript. This research also benefited from discussions with Victor Veitch, Mandar Joshi, Kenton Lee, Kristina Toutanova, David Gaddy, Dheeru Dua, Yuan Zhang, Tianze Shi, and Anastasios Angelopoulos.

References

AI@Meta. The Llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.

Afra Amini, Tim Vieira, and Ryan Cotterell. Direct preference optimization with an offset. arXiv preprint arXiv:2402.10571, 2024.

Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A.
Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. PaLM 2 technical report, 2023.

Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447–4455. PMLR, 2024.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, John Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Christopher Olah, Benjamin Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv, abs/2204.05862, 2022.

Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952. ISSN 00063444. URL http://www.jstor.org/stable/2334029.

Daniele Calandriello, Zhaohan Daniel Guo, Remi Munos, Mark Rowland, Yunhao Tang, Bernardo Avila Pires, Pierre Harvey Richemond, Charline Le Lan, Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, and Bilal Piot. Human alignment of large language models through online preference optimisation. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 5409–5435. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/calandriello24a.html.

Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, and Furong Huang. Transfer Q-star: Principled decoding for LLM alignment. arXiv preprint arXiv:2405.20495, 2024.

Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. Adversarially trained actor critic for offline reinforcement learning. In International Conference on Machine Learning, pages 3852–3878. PMLR, 2022.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei.
Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017.

Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. Reward model ensembles help mitigate overoptimization. arXiv preprint arXiv:2310.02743, 2023.

Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. RLHF workflow: From reward modeling to online RLHF. arXiv preprint arXiv:2405.07863, 2024.

Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvijotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? Reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023.

Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born again neural networks. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1607–1616. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/furlanello18a.html.

Yang Gao, Dana Alon, and Donald Metzler. Impact of preference noise on the alignment performance of generative language models. arXiv preprint arXiv:2404.09824, 2024a.

Zhaolin Gao, Jonathan Daniel Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kianté Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, and Wen Sun. REBEL: Reinforcement learning via regressing relative rewards. In ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024b. URL https://openreview.net/forum?id=4SKidIUPP6.

Gemini Team. Gemini: A family of highly capable multimodal models, 2024. URL https://arxiv.org/abs/2312.11805.
Alexey Gorbatovski, Boris Shaposhnikov, Alexey Malakhov, Nikita Surnachev, Yaroslav Aksenov, Ian Maksimov, Nikita Balagansky, and Daniil Gavrilov. Learn your reference model for real good alignment. arXiv preprint arXiv:2404.09656, 2024.

Dirk Groeneveld, Iz Beltagy, Evan Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, William Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah Smith, and Hannaneh Hajishirzi. OLMo: Accelerating the science of language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15789–15809, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.841. URL https://aclanthology.org/2024.acl-long.841.

Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, et al. Direct language model alignment from online AI feedback. arXiv preprint arXiv:2402.04792, 2024.

Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015. URL http://arxiv.org/abs/1503.02531.

Tomasz Korbak, Ethan Perez, and Christopher Buckley. RL with KL penalties is better viewed as Bayesian inference. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1083–1091, 2022.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020.

Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. RLAIF: Scaling reinforcement learning from human feedback with AI feedback. arXiv preprint arXiv:2309.00267, 2023.

Yong Lin, Skyler Seto, Maartje ter Hoeve, Katherine Metcalf, Barry-John Theobald, Xuan Wang, Yizhe Zhang, Chen Huang, and Tong Zhang. On the limited generalization capability of the implicit reward model induced by direct preference optimization. arXiv preprint arXiv:2409.03650, 2024.

Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Provably good batch off-policy reinforcement learning without great exploration. Advances in Neural Information Processing Systems, 33:1264–1274, 2020.

Zhihan Liu, Miao Lu, Shenao Zhang, Boyi Liu, Hongyi Guo, Yingxiang Yang, Jose Blanchet, and Zhaoran Wang. Provably mitigating overoptimization in RLHF: Your SFT loss is implicitly an adversarial regularizer. arXiv preprint arXiv:2405.16436, 2024.

Xin Mao, Feng-Lin Li, Huimin Xu, Wei Zhang, Wang Chen, and Anh Tuan Luu. Don't forget your reward values: Language model alignment via value-based calibration. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17622–17642, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.976. URL https://aclanthology.org/2024.emnlp-main.976.

Yu Meng, Mengzhou Xia, and Danqi Chen. SimPO: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems, 2024.
Ted Moskovitz, Aaditya K Singh, DJ Strouse, Tuomas Sandholm, Ruslan Salakhutdinov, Anca D Dragan, and Stephen McAleer. Confronting reward model overoptimization with constrained RLHF. arXiv preprint arXiv:2310.04373, 2023.

Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, and Ahmad Beirami. Controlled decoding from language models. In Forty-first International Conference on Machine Learning, 2024.

Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of preference optimisation with DPO-positive. arXiv preprint arXiv:2402.13228, 2024.

Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159, 2024.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2023.

Rafael Rafailov, Yaswanth Chittepu, Ryan Park, Harshit Sikchi, Joey Hejna, W. Bradley Knox, Chelsea Finn, and Scott Niekum. Scaling laws for reward model overoptimization in direct alignment algorithms. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://openreview.net/forum?id=pf4OuJyn4Q.

Rafael Rafailov, Joey Hejna, Ryan Park, and Chelsea Finn. From r to Q*: Your language model is secretly a Q-function. arXiv preprint arXiv:2404.12358, 2024b.

Alexandre Rame, Guillaume Couairon, Corentin Dancette, Jean-Baptiste Gaya, Mustafa Shukor, Laure Soulier, and Matthieu Cord. Rewarded soups: Towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. Advances in Neural Information Processing Systems, 36, 2024.
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, and Johan Ferret. WARM: On the benefits of weight averaged reward models. arXiv preprint arXiv:2401.12187, 2024.

Noam Razin, Sadhika Malladi, Adithya Bhaskar, Danqi Chen, Sanjeev Arora, and Boris Hanin. Unintentional unalignment: Likelihood displacement in direct preference optimization. arXiv preprint arXiv:2410.08847, 2024.

Mathieu Rita, Florian Strub, Rahma Chaabouni, Paul Michel, Emmanuel Dupoux, and Olivier Pietquin. Countering reward over-optimization in LLM with demonstration-guided reinforcement learning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 12447–12472, Bangkok, Thailand and virtual meeting, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.740. URL https://aclanthology.org/2024.findings-acl.740.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. In Proceedings of ICLR, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. A long way to go: Investigating length correlations in RLHF. arXiv preprint arXiv:2310.03716, 2023.

Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.

Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, and Alekh Agarwal. A minimaximalist approach to reinforcement learning from human feedback, 2024.
Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, and Aviral Kumar. Preference fine-tuning of LLMs should leverage suboptimal, on-policy data. arXiv preprint arXiv:2404.14367, 2024.

Yunhao Tang, Daniel Zhaohan Guo, Zeyu Zheng, Daniele Calandriello, Yuan Cao, Eugene Tarassov, Rémi Munos, Bernardo Ávila Pires, Michal Valko, Yong Cheng, and Will Dabney. Understanding the performance gap between online and offline alignment algorithms. arXiv preprint arXiv:2405.08448, 2024a.

Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749, 2024b.

Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. TL;DR: Mining Reddit to learn automatic summarization. In Lu Wang, Jackie Chi Kit Cheung, Giuseppe Carenini, and Fei Liu, editors, Proceedings of the Workshop on New Frontiers in Summarization, pages 59–63, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-4508. URL https://aclanthology.org/W17-4508.

Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. Advances in Neural Information Processing Systems, 34:6683–6694, 2021.

Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for RLHF under KL-constraint. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=c1AKcA6ry1.

Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. Is DPO superior to PPO for LLM alignment? A comprehensive study. arXiv preprint arXiv:2404.10719, 2024.
Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan L. Yuille. Training deep neural networks in generations: a more tolerant teacher educates better students. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai.v33i01.33015628. URL https://doi.org/10.1609/aaai.v33i01.33015628.

Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34:28954–28967, 2021.

Andrea Zanette, Martin J Wainwright, and Emma Brunskill. Provable benefits of actor-critic methods for offline reinforcement learning. Advances in Neural Information Processing Systems, 34:13626–13640, 2021.

Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, and Jun Wang. Token-level direct preference optimization. arXiv preprint arXiv:2404.11999, 2024.

Yuanzhao Zhai, Han Zhang, Yu Lei, Yue Yu, Kele Xu, Dawei Feng, Bo Ding, and Huaimin Wang. Uncertainty-penalized reinforcement learning from human feedback with diverse reward LoRA ensembles. arXiv preprint arXiv:2401.00243, 2023.

Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Provable offline preference-based reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=tVMPfEGT2w.

Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. SLiC-HF: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.

Banghua Zhu, Michael Jordan, and Jiantao Jiao.
Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 43037–43067. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/zhu23f.html.

Banghua Zhu, Michael I Jordan, and Jiantao Jiao. Iterative data smoothing: Mitigating reward overfitting and overoptimization in RLHF. arXiv preprint arXiv:2401.16335, 2024.

A.1 Proof of Proposition 1

Proposition (Proposition 1 restated). Under Assumption 1, for any (y, y′) such that y = y_i^w and y′ = y_i^ℓ for some i, we have

    π_θ*(y) π_ref(y′) / (π_θ*(y′) π_ref(y)) = ∞

for all global minimizers π_θ* of the DPO objective in (6), for any β > 0.

Proof. Since all the preference pairs (y, y′) are mutually disjoint, and θ_y is specific to each y, the DPO objective over D_pref is convex in Δ = {Δ_1, . . . , Δ_n}, where

    Δ_i = β log [π_θ(y_i^w) π_ref(y_i^ℓ) / (π_θ(y_i^ℓ) π_ref(y_i^w))].    (15)

Furthermore, the different Δ_i are completely independent from each other due to the preference pairs being disjoint, so they can be optimized over separately. In particular, for every i we have that

    lim_{Δ_i → ∞} −log σ(Δ_i) = 0,    (16)

which implies that Δ* = {∞}^n is the unique global minimizer of the DPO loss over D_pref in the space of Δs, and any θ* that is a global minimizer must therefore satisfy

    log [π_θ*(y_i^w) π_ref(y_i^ℓ) / (π_θ*(y_i^ℓ) π_ref(y_i^w))] = ∞.    (17)

A.2 Proof of Corollary 1

Corollary (Corollary 1 restated). Under Assumption 1, further assume that 0 < π_ref(y) < 1 for all y. Then π_θ* is a global minimizer of the DPO objective in (6) iff π_θ*(C(y^ℓ)^c) = 1 with π_θ*(y_i^w) > 0 ∀ i ∈ [n], where C(y^ℓ)^c is the complement of the set of all responses y that appear as a dispreferred y_i^ℓ for any i ∈ [n].

Proof.
Following the same argument as in the proof of Proposition 1, we have that all global minimizers θ* of the DPO objective satisfy Δ_i = ∞, which in turn implies that

    π_θ*(y_i^w) π_ref(y_i^ℓ) / (π_θ*(y_i^ℓ) π_ref(y_i^w)) = ∞.    (18)

Since π_ref(y) is assumed to satisfy 0 < π_ref(y) < 1 for all y, this implies that all θ* satisfy

    π_θ*(y_i^w) / π_θ*(y_i^ℓ) = ∞,    (19)

which further implies that π_θ*(y_i^ℓ) = 0 and π_θ*(y_i^w) > 0 for all i ∈ [n], as π_θ*(y_i^w) ≤ 1 for any y_i^w. Aggregating

    C(y^ℓ) = {y : ∃ i ∈ [n] s.t. y_i^ℓ = y}    (20)

then gives that

    π_θ*(C(y^ℓ)) = Σ_{y ∈ C(y^ℓ)} π_θ*(y) = 0  ⟹  π_θ*(C(y^ℓ)^c) = 1.    (21)

To prove the converse, let π_θ* be a policy that satisfies π_θ*(C(y^ℓ)^c) = 1, with π_θ*(y_i^w) > 0 ∀ i ∈ [n]. As π_θ*(y) ≥ 0 for all y, this implies that π_θ*(y_i^ℓ) = 0 ∀ i ∈ [n]. Then, we have

    π_θ*(y_i^w) / π_θ*(y_i^ℓ) = ∞,    (22)

which by Proposition 1 implies that π_θ* is a global optimum.

A.3 Proof of Theorem 1

Theorem (Theorem 1 restated). Let Y denote the set of all possible responses for any model π_θ. Assume that supp(π_ref(y | x)) = Y, i.e., the reference policy may generate any outcome with non-zero probability. Further, let supp(ρ(x, y1, y2)) = supp(µ(x)) × Y × Y. Let π_θ*(y | x) ∈ argmin_{π_θ} L_distill(r*, π_θ; ρ) be a minimizer, over all possible policies, of the implicit reward distillation loss in (7), for which r*(x, y) is assumed to be deterministic and finite everywhere. Then for any β > 0, π_θ* also maximizes the alignment objective in (1).

Proof. We know that the optimal policy for the RLHF objective (1) is given by π_θ*(y | x) ∝ π_ref(y | x) exp(r*(x, y)/β). Plugging this policy into the distillation objective (7), we see that L_distill(r*, π_θ*; ρ) = 0 for all ρ. In fact, the loss is equal to 0 pointwise, meaning that π_θ* is a global minimizer of the distillation objective (7). Further, let π be some other minimizer of L_distill(r*, ·; ρ).
Then π also has to attain a loss of 0 at all (x, y, y′) in the support of ρ, meaning that log π(y | x) − log π(y′ | x) = log π_θ*(y | x) − log π_θ*(y′ | x) for all (x, y, y′) in the support of ρ. Consequently, the two policies coincide on the support of ρ (due to the normalization constraint, there is no additional offset term allowed, as the support of ρ covers all of Y). Finally, noting that the support of the chosen ρ is such that π_θ* puts no mass outside its support due to the KL constraint in (1), we complete the proof.

A.4 Proof of Theorem 2

Theorem (Theorem 2 restated). Define the constrained minimizer

    π_θ*(y | x) ∈ argmin_{π_θ ∈ P_β(S)} β E_{µ(x)} D_KL(π_ref(· | x) ∥ π_θ(· | x)),

where P_β(S) is the set of all possible policies with implicit reward models that are consistent with any target reward model r_tgt^i ∈ S, i.e., P_β(S) ≔ {π_θi}_{i=1}^{|S|}, where π_θi ∝ π_ref(y | x) exp((1/β) r_tgt^i(x, y)). Then for any β > 0, π_θ* also maximizes the pessimistic alignment objective in (8).

Proof. Consider the pessimistic objective:

    max_{π_θ} min_{r_tgt ∈ S} E_{µ(x)} [ E_{π_θ(y|x)}[r_tgt(x, y)] − E_{π_ref(y|x)}[r_tgt(x, y)] ] − β D_KL(π_θ ∥ π_ref).    (23)

As it is linear in r_tgt and convex in π, we can switch the order of min and max:

    min_{r_tgt ∈ S} max_{π ∈ Π} E_{µ(x)} [ E_{π(y|x)}[r_tgt(x, y)] − E_{π_ref(y|x)}[r_tgt(x, y)] ] − β D_KL(π ∥ π_ref).    (24)

Note that every r_tgt ∈ S can be written in terms of the KL-constrained policy π*_{r_tgt} it induces, i.e.,

    r_tgt(x, y) = β log [π*_{r_tgt}(y | x) / π_ref(y | x)] + β log Z(x, r_tgt),    (25)

    π*_{r_tgt} = argmax_{π_θ} E_{µ(x)} E_{π_θ(y|x)}[r_tgt(x, y)] − β D_KL(π_θ ∥ π_ref),    (26)

which has the form

    π*_{r_tgt}(y | x) = (1 / Z(x, r_tgt)) π_ref(y | x) exp((1/β) r_tgt(x, y)),    (27)

where Z(x, r_tgt) is the partition function:

    Z(x, r_tgt) = Σ_{y ∈ Y} π_ref(y | x) exp((1/β) r_tgt(x, y)).
    (28)

Substituting π*_{r_tgt} in for the inner max over π, and writing r_tgt in terms of π*_{r_tgt}, we get the simplified objective

    min_{r_tgt ∈ S} max_{π ∈ Π} E_{µ(x)} [ E_{π(y|x)}[r_tgt(x, y)] − E_{π_ref(y|x)}[r_tgt(x, y)] ] − β D_KL(π ∥ π_ref)
      = min_{r_tgt ∈ S} E_{µ(x)} [ E_{π*_{r_tgt}(y|x)} [β log (π*_{r_tgt}(y | x) / π_ref(y | x)) + β log Z(x, r_tgt)]
          − E_{π_ref(y|x)} [β log (π*_{r_tgt}(y | x) / π_ref(y | x)) + β log Z(x, r_tgt)]
          − β D_KL(π*_{r_tgt} ∥ π_ref | x) ]    (29)
      = min_{r_tgt ∈ S} β E_{µ(x)} [ D_KL(π*_{r_tgt} ∥ π_ref | x) + D_KL(π_ref ∥ π*_{r_tgt} | x) − D_KL(π*_{r_tgt} ∥ π_ref | x) ]
      = min_{r_tgt ∈ S} β E_{µ(x)} [ D_KL(π_ref ∥ π*_{r_tgt} | x) ].

A.5 Proof of Proposition 2

Proposition (Proposition 2 restated). Let y^w, y^ℓ ∈ V^n be preferred versus dispreferred outputs of length n, respectively, with π_ref(y^w), π_ref(y^ℓ) > 0 and corresponding count vectors c(y^w), c(y^ℓ). Let log π_θ(y) = c(y)·θ − n Z(θ) for Z(θ) = log Σ_{i=1}^{|V|} e^{θ_i}, with upper bound log π̄_θ(y) = c(y)·θ − n max_j θ_j. Let θ^(t) represent the parameters of π after t steps of gradient descent on L_dpo({y^ℓ, y^w, x}), with θ^(0) = 0. Then, we have that π_{θ^(t)}(y^w) ≤ π̄_{θ^(t)}(y^w) ≤ π̄_{θ^(t−1)}(y^w) for all t, with strict inequality when ∥c(y^w) − c(y^ℓ)∥_0 > 1.

Proof. Let Δ = c(y^w) − c(y^ℓ) and ρ = π_ref(y^w)/π_ref(y^ℓ). The proposition assumes |y^w| = |y^ℓ| = n. Then

    L_dpo = −log σ(β(Δ·θ) + β log ρ).

The derivative with respect to θ is

    ∂L_dpo/∂θ = −(1 − σ(β(Δ·θ) + β log ρ)) β Δ = −p(y^ℓ ≻ y^w; θ) β Δ.    (30)

Let δ_t = η β p(y^ℓ ≻ y^w; θ^(t−1)) ≥ 0 for learning rate η, so that the gradient descent update is θ^(t) = θ^(t−1) + δ_t Δ. Then,

    log π̄_{θ^(t)}(y^w) = θ^(t)·c(y^w) − n max_j θ^(t)_j    (31)
      = (θ^(t−1) + δ_t Δ)·c(y^w) − n max_j (θ^(t−1)_j + δ_t Δ_j)    (32)
      = θ^(t−1)·c(y^w) − n max_j θ^(t−1)_j + δ_t Δ·c(y^w) − n δ_t max_j Δ_j    (33)
      = log π̄_{θ^(t−1)}(y^w) + δ_t (Δ·c(y^w) − n max_j Δ_j)    (34)
      = log π̄_{θ^(t−1)}(y^w) + δ_t Σ_j c_j(y^w)(Δ_j − max_{j′} Δ_{j′}) ≤ log π̄_{θ^(t−1)}(y^w).    (35)

We obtain max_j (θ^(t−1)_j + δ_t Δ_j) = max_j θ^(t−1)_j + δ_t max_j Δ_j from the fact that θ^(0) = 0, and therefore j ∈ argmax Δ implies j ∈ argmax θ^(t′) for all t′ > 0. The second-to-last step uses n = Σ_{j=1}^{|V|} c_j(y^w) and the final step uses Δ_j ≤ max_{j′} Δ_{j′}. Finally, we have π_{θ^(t)}(y^w) ≤ π̄_{θ^(t)}(y^w) because Z(θ) = log Σ_j exp θ_j ≥ log max_j exp θ_j = max_j θ_j.

A.6 Proof of Proposition 3

Proposition (Proposition 3 restated).
Let y^w and y^ℓ be preferred versus dispreferred outputs of length n. Let Δ = c(y^w) − c(y^ℓ) be the difference in unigram counts. Let ŷ = [i*, i*, . . . , i*], for i* ∈ argmax_j Δ_j, with ∥c(ŷ)∥_1 = n. Then log π_{θ^(t)}(y^w) − log π_{θ^(t)}(ŷ) = −τ(t) k for some k ≥ 0 and some non-decreasing τ: Z_+ → R_+.

Proof. Applying gradient descent with learning rate η to the gradient from Equation (30), at each step t the parameters are

    θ^(t) = θ^(t−1) + η β p(y^ℓ ≻ y^w; θ^(t−1)) Δ = (Σ_{t′=1}^{t} η β p(y^ℓ ≻ y^w; θ^(t′−1))) Δ = τ(t) Δ.    (36)

Plugging these parameters into the likelihoods,

    log π_{θ^(t)}(y^w) − log π_{θ^(t)}(ŷ) = (c(y^w)·θ^(t) − n Z(θ^(t))) − (c(ŷ)·θ^(t) − n Z(θ^(t)))    (37)
      = (c(y^w) − c(ŷ))·θ^(t) = (c(y^w) − c(ŷ))·(τ(t) Δ)    (38)
      = τ(t) (c(y^w)·Δ − n max_j Δ_j) = −τ(t) k,    (39)

with k ≥ 0 by c(y^w)·Δ ≤ ∥c(y^w)∥_1 ∥Δ∥_∞ = n max_j Δ_j.

A.7 Proof of Proposition 4

Proposition (Proposition 4 restated). Assume the same conditions as Theorem 1. Then for any 0 < γ < ∞, there exists a λ ≥ 0 such that π_θ* ∈ argmin_{π_θ} L_pdistill(S, π_θ; ρ), where π_θ* is a minimizer over all possible policies of the objective (9), for the effective reward model set

    { r : E_{ρ(x,y1,y2)} [(r_tgt^i(x, y1) − r_tgt^i(x, y2) − r(x, y1) + r(x, y2))²] ≤ λ for some r_tgt^i ∈ S }.

Proof. The proof is a standard Lagrangian duality argument, which we reproduce here for completeness. For two functions f(z) and g(z), let us define

    z* = argmin_z f(z) + α g(z).    (40)

Let us also consider the constrained problem

    z** = argmin_z f(z)  s.t.  g(z) ≤ g(z*).    (41)

Suppose by contradiction that z* is not a minimizer of (41). Since z* is feasible for the constraint by construction, we get that f(z**) < f(z*). Consequently, we further have f(z**) + α g(z**) < f(z*) + α g(z*), where the inequality follows from the feasibility of z** in (41). This contradicts the optimality of z* in (40), meaning that z* must be a minimizer of (41). Applying this general result with f = β E_{µ(x)} D_KL(π_ref(y | x) ∥ π_θ(y | x)), g = min_{r_tgt^i ∈ S} L_distill(r_tgt^i, π_θ; ρ), and z = π_θ completes the proof, since we recognize the set S_γ in (12) to be equivalent to the constraint min_{r_tgt^i ∈ S} L_distill(r_tgt^i, π_θ; ρ) ≤ λ.
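To make the objects in these proofs concrete, the distillation loss of (7) and its pessimistic, family-of-reward-models variant of (9) can be sketched numerically. The following is a minimal NumPy sketch under our own naming (`distill_loss`, `pessimistic_distill_loss`, and the toy three-response bandit are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def distill_loss(beta, logp_pi, logp_ref, r, pairs):
    # Squared error between the policy's implicit reward differences,
    # beta * log(pi / pi_ref), and an explicit reward model's differences (cf. eq. 7).
    total = 0.0
    for y1, y2 in pairs:
        implicit = beta * ((logp_pi[y1] - logp_ref[y1]) - (logp_pi[y2] - logp_ref[y2]))
        total += (implicit - (r[y1] - r[y2])) ** 2
    return total / len(pairs)

def pessimistic_distill_loss(beta, logp_pi, logp_ref, reward_models, pairs):
    # Pessimism over a family S of target reward models: take the worst-case
    # (largest) distillation loss over the family (cf. eq. 9).
    return max(distill_loss(beta, logp_pi, logp_ref, r, pairs) for r in reward_models)

# Toy three-response bandit: if pi propto pi_ref * exp(r / beta), the
# distillation loss vanishes, the key observation in the proof of Theorem 1.
beta = 0.5
logp_ref = np.log(np.array([0.5, 0.3, 0.2]))
r = np.array([1.0, 0.0, -1.0])
logits = logp_ref + r / beta
logp_pi = logits - np.log(np.exp(logits).sum())
pairs = [(0, 1), (0, 2), (1, 2)]
print(distill_loss(beta, logp_pi, logp_ref, r, pairs))  # ~0 up to float error
```

Optimizing `pessimistic_distill_loss` over policy parameters, with `reward_models` given by an ensemble, corresponds to the worst-case distillation analyzed above.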
A.8 Proof of Proposition 5

Proposition (Proposition 5 restated). Let L̂_pdpo represent a finite-sample approximation to L_pdpo with the empirical forward KL term Ω̂(Θ). For a fixed π̂_θ(y_i^w) and α > 1, the argmin over π_θ(y_i^ℓ) of L̂_pdpo is min(1 − π̂_θ(y_i^w), π̂_θ(y_i^ℓ)), with

    log π̂_θ(y_i^ℓ) = −(1/β) log(α − 1) + log π̂_θ(y_i^w) + log [π_ref(y_i^ℓ) / π_ref(y_i^w)].

Proof. We differentiate L_pdpo with respect to ψ_ℓ = π_θ(y^ℓ)/π_ref(y^ℓ), with the index i implicit, obtaining

    ∂L_pdpo/∂ψ_ℓ = β (ψ_ℓ^β / (ψ_w^β + ψ_ℓ^β)) ψ_ℓ^{−1} − (β/α) ψ_ℓ^{−1} = β ψ_ℓ^{−1} (ψ_ℓ^β / (ψ_w^β + ψ_ℓ^β) − α^{−1}),    (42)

which is zero when

    α ψ_ℓ^β = ψ_w^β + ψ_ℓ^β    (43)
    ψ_ℓ = (α − 1)^{−1/β} ψ_w    (44)
    log ψ_ℓ = −(1/β) log(α − 1) + log ψ_w    (45)
    log π̂_θ(y^ℓ) = log π_ref(y^ℓ) − (1/β) log(α − 1) + log π_θ(y^w) − log π_ref(y^w).    (46)

By the second-order condition, the critical point is a minimum. The objective L̂_pdpo is the sum of two components: the negative log-sigmoid term for L_i and the negative log-probability for Ω̂. Because each component is a convex function of ψ_ℓ, so is L̂_pdpo. As a result, the local minimum log π̂_θ(y^ℓ) is also a global minimum.

A.9 Proof of Proposition 6

Proposition (Proposition 6 restated). For any fixed π̂_θ(y_i^w) and β > 0, the argmin of the distilled DPO objective in (7) is min(1 − π̂_θ(y_i^w), π̂_θ(y_i^ℓ)), with

    log π̂_θ(y_i^ℓ) = (1/β)(r_tgt(x, y_i^ℓ) − r_tgt(x, y_i^w)) + log π̂_θ(y_i^w) + log [π_ref(y_i^ℓ) / π_ref(y_i^w)].

Proof. This follows directly from differentiating (7) with respect to π_θ(y_2).

A.10 Proof of Proposition 7

Proposition (Proposition 7 restated). Let D = {(i, i + 1) : i ∈ {1, 2, . . . , n − 1}} for n > 2. Let D′ be the dataset arising from the transitive closure of D. Assume π_ref is indifferent to all (y_i, y_j). Let Δψ(D) = max_i ψ_i^(D) − min_i ψ_i^(D). Then Δψ(D) = (n − 1)τ⁻¹ > Δψ(D′) = (2(n − 1)/n)τ⁻¹.

Proof. For D, the IPO objective can be minimized at zero, so that Δψ(D) = (n − 1)τ⁻¹. For D′, each adjacent pair of completions is separated by γ, and the objective is Σ_{i=1}^{n−1} (n − i)(iγ − τ⁻¹)².
The minimum is at γ = [n(n+1)(n−1)/6] / [n²(n+1)(n−1)/12] τ⁻¹ = (2/n)τ⁻¹, so that Δψ(D′) = (n − 1)γ = (2(n − 1)/n)τ⁻¹ < (n − 1)τ⁻¹ = Δψ(D) for n > 2.

B Additional results

B.1 The effect of distillation and pessimism on likelihood collapse

Figure B.1 tests the effect of pessimism in both DPO (DPO vs. p-DPO) and distilled DPO (d-DPO vs. d-p-DPO) on likelihood collapse for the preferred vs. dispreferred generations in the offline data. Note that a subtle point arises from the distinction between regularizing to the reference policy output distribution versus regularizing to the preference data distribution, which are only (asymptotically) identical if preferences are annotated on a sample from the reference policy (in our initial experiments we had found slightly better results overall when regularizing to the reference policy output distribution).

Figure B.1: Pessimism mitigates likelihood collapse. [Panels plot preferred_log_likelihood, dispreferred_log_likelihood, kl_loss_ref, and return_without_kl over training steps at bias = 0.4, comparing DPO vs. p-DPO (β = 1.0) and d-DPO vs. d-p-DPO (β = 0.01).] In this case, we penalize the sample KL-divergence rather than the data KL-divergence, because the samples are not drawn from the preference data distribution. Thus the effect of regularization is primarily on kl_loss_ref, which quantifies this divergence, rather than on the likelihood of the preference data itself, (dis)preferred_log_likelihood.
Figure B.1 reports results with respect to both distributions, showing that pessimism (a) mitigates the decrease in probability of the preferred and dispreferred preference annotations, despite this data not being used in the regularizers (left-most subplots), and (b) mitigates the increase in KL divergence with respect to the reference distribution (third subplot from left), as expected due to the additional regularization term. As argued in §4 (see also Azar et al. (2024)), the β hyperparameter of DPO does not effectively regularize this KL divergence, because the implicit DPO reward model assigns infinite-magnitude rewards.

B.2 Transitive closure

Figure B.2 shows the results of our multi-arm bandit experiments with p-DPO vs. IPO losses, as described in §7.2. In this synthetic setup, we solve the p-DPO and IPO objectives for both D = {(y1, y2), (y2, y3)} and D′ = D ∪ {(y1, y3)}, solving with respect to {πθ(yi)}.

B.3 Distribution over reward models for e-DPO

Figure B.3 investigates the reason for the success of e-DPO, especially when ρ < 0.5. For every length bias, we track during training, for all training examples, which reward model r_{ρ,b} best matched the implicit reward of the currently trained e-DPO policy, and plot the distribution over reward models. The policy matches different reward models on different examples. Moreover, there is an inverse correlation between the data bias for policy training (ρ) and the data bias for training the reward models (b). This suggests that the ensemble in e-DPO helps because the policy distills from reward models that do not share the data bias of the policy training set.

B.4 Hyperparameters

Validation set performance across the range of hyperparameter settings is shown in Figure B.4. Figure B.5 also explores the impact of γ (which is minimal).
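The transitive-closure effect in B.2 is quantified by Proposition 7, whose closed-form minimizer for the closed set D′ is easy to verify numerically. A small sketch, assuming the quadratic IPO objective Σ_{i=1}^{n−1} (n − i)(iγ − τ⁻¹)² exactly as written in the proof (the helper name `ipo_closure_gamma` is ours):

```python
import numpy as np

def ipo_closure_gamma(n, tau_inv):
    # Least-squares minimizer of sum_{i=1}^{n-1} (n - i) * (i * gamma - tau_inv)^2,
    # where (n - i) counts the closure pairs whose ranks differ by i.
    i = np.arange(1, n)
    w = n - i
    return tau_inv * (w * i).sum() / (w * i * i).sum()

n, tau_inv = 5, 1.0
gamma = ipo_closure_gamma(n, tau_inv)
print(gamma)            # ~0.4, i.e. (2 / n) * tau_inv
print((n - 1) * gamma)  # spread ~1.6 = 2(n-1)/n * tau_inv, vs. (n-1) * tau_inv = 4.0 without closure
```

This reproduces the compression of the spread Δψ from (n − 1)τ⁻¹ down to (2(n − 1)/n)τ⁻¹ that the bandit experiment exhibits for IPO.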
In pilot studies we found that these results were relatively robust to variation in the random seed, but we did not conduct an extensive investigation of this effect across all methods and hyperparameters due to cost.

Figure B.2: Effect of transitive closure on p-DPO and IPO solutions to preference learning in a multi-arm bandit. [Panels show π(y1), π(y2), and π(y3) for p-DPO (top row) and IPO (bottom row), with β ∈ {1.0, 3.0, 10.0, 30.0} and transitively_closed ∈ {False, True}.] Each column shows the learned policy probability for a given arm, based on the preferences y1 ≻ y2 ≻ y3. The top row shows that in p-DPO, the probabilities are not materially affected by the transitive closure y1 ≻ y3. The bottom row shows that in IPO, transitive closure causes the probabilities to be compressed. In each subfigure, we sweep a range of effective values of τ⁻¹, shown on the x-axis.

C Compute resources

We train policies on 32 TPU v3 chips and reward models on 16 TPU v3 chips. We obtain roughly 0.1 steps per second during training, for both the policy and reward models.

Figure B.3: For all training examples, we track which reward model best matches the implicit reward of the current e-DPO policy during training, and plot the distribution over reward models for every length bias ρ. We observe that the e-DPO policy matches different reward models across examples during training. Moreover, when the policy is trained with data biased towards preferring short responses, the reward model that was trained on longer responses is by and large preferred, and vice versa.
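The bookkeeping behind Figure B.3 amounts to, for each training pair, finding the reward model whose reward difference is closest to the policy's implicit reward difference. A minimal sketch of this matching step, with our own hypothetical helper `best_matching_rm` (not the paper's training code):

```python
import numpy as np

def best_matching_rm(beta, logp_pi, logp_ref, reward_diffs):
    # logp_pi, logp_ref: log-probs of the (chosen, rejected) pair under the
    # policy and the reference model, respectively.
    # reward_diffs: r_i(chosen) - r_i(rejected) for each reward model i.
    # Returns the index of the reward model whose difference is closest to the
    # policy's implicit reward difference, beta * log(pi / pi_ref).
    implicit = beta * ((logp_pi[0] - logp_ref[0]) - (logp_pi[1] - logp_ref[1]))
    return int(np.argmin((np.asarray(reward_diffs) - implicit) ** 2))

# Toy example: implicit difference is log(1.5) - log(0.5) = log 3 ~ 1.10,
# which is closest to the second reward model's difference of 1.2.
idx = best_matching_rm(1.0, np.log([0.6, 0.2]), np.log([0.4, 0.4]), [0.0, 1.2, 3.0])
print(idx)  # 1
```

Histogramming `idx` over training examples, separately per length bias, yields the kind of distribution plotted in Figure B.3.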
Figure B.4: Validation set results across hyperparameters for each method. [Each panel plots oracle reward against the length bias ρ ∈ {0.2, . . . , 0.8}, sweeping β and τ⁻¹ over {0.01, 0.1, 1.0, 3.0, 10.0, 30.0, 100.0} (and ipo_tau_start over {0.1, 0.3, 1.0, 3.0, 5.0, 10.0, 25.0}).] For all methods, different values of ρ induce different optimal hyperparameters β and τ⁻¹.

Figure B.5: Comparing e-DPO with and without annealing of γ (dpo_gamma_start) on the development set for all length biases, using the best value of β per bias. Annealing γ has minimal effect on performance and is not necessary for the success of e-DPO.