# Nash Learning from Human Feedback

Rémi Munos*1, Michal Valko*1, Daniele Calandriello*1, Mohammad Gheshlaghi Azar*1, Mark Rowland*1, Daniel Guo*1, Yunhao Tang*1, Matthieu Geist*1, Thomas Mesnard1, Côme Fiegel2, Andrea Michi1, Marco Selvi1, Sertan Girgin1, Nikola Momchev1, Olivier Bachem1, Daniel J. Mankowitz1, Doina Precup1, Bilal Piot*1

*Equal contribution. 1Google DeepMind. 2ENSAE Paris. Now at Cohere. Correspondence to: Rémi Munos, Michal Valko, Daniele Calandriello, Bilal Piot.

Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

## Abstract

Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Traditionally, RLHF involves the initial step of learning a reward model from pairwise human feedback, i.e., feedback expressed as preferences between pairs of text generations. Subsequently, the LLM's policy is fine-tuned to maximize the reward through a reinforcement learning algorithm. In this study, we introduce an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a pairwise preference model, which is conditioned on two inputs (instead of a single input in the case of a reward model) given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). In the context of a tabular policy representation, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies, with the last iterate converging to the regularized Nash equilibrium. Additionally, we explore parametric representations of policies and introduce gradient descent algorithms for deep-learning architectures. We illustrate the effectiveness of our approach by presenting experimental results on a text summarization task. We believe NLHF offers a compelling avenue for fine-tuning LLMs and enhancing the alignment of LLMs with human preferences.

## 1. Introduction

Large language models (LLMs) (Glaese et al., 2022; Anil et al., 2023; OpenAI, 2023; Ouyang et al., 2022) have made remarkable strides in enhancing natural language understanding and generation. Their success in conversational applications often relies on aligning these models with human preferences, a process primarily guided by the paradigm of reinforcement learning from human feedback (RLHF). A prevailing approach within RLHF involves the initial step of constructing a reward model based on pairwise human preferences, frequently employing the Bradley-Terry model (BT; Bradley & Terry, 1952). This reward model assigns an individual score to each generation of the language model conditioned on a given prompt, akin to how the Elo ranking system (Elo, 1978) assigns scores to chess players to estimate their relative strengths. Subsequently, model refinement takes place by optimizing the LLM's performance with respect to this reward model through reinforcement learning (RL) over sampled text generations. However, the BT model has its limitations, primarily stemming from its inability to accommodate the full spectrum of possible preferences.
For example, Bertrand et al. (2023) show the limitations of the Elo model by illustrating situations where the Elo score alone cannot predict the right preferences, even in transitive settings. There are also situations where maximizing the Elo score is not aligned with maximizing the probability of winning against the corresponding population of players, even when the preference model can be perfectly expressed using a BT model (see Appendix A for an example). These observations highlight the necessity for a more profound understanding of the implications of BT-based reward maximization in RLHF for achieving genuine alignment with human preferences.

**The NLHF approach:** In this paper, we introduce an alternative pipeline for fine-tuning LLMs from human preference data, which we term Nash learning from human feedback (NLHF). In this framework, we depart from the conventional approach of learning a reward model and instead focus on learning a preference model, and we define our objective as the computation of the Nash equilibrium of this preference model. The preference model takes two responses, denoted $y$ and $y'$ (possibly conditioned on a prompt $x$), as input and produces a preference score $P(y \succ y' \mid x)$, indicating the preference of response $y$ over response $y'$ given the context $x$. We may think of $P(y \succ y' \mid x)$ as the probability that a randomly chosen human prefers response $y$ to response $y'$.

In order to learn a preference model, we can initialize it using AI feedback by leveraging an LLM prompted in a manner akin to how humans have been asked for their preference, such as by instructing the LLM to generate a 1-vs-2 comparison in response to a prompt like: "Given $x$, which answer do you prefer, answer 1: $y$ or answer 2: $y'$?". This initial preference model can be further refined, through supervised learning, by aligning it with human preference data. Notably, such a learnt preference model does not make any Bradley-Terry assumption and thus has the potential to capture the diversity and richness of human preferences contained in the training data. Moreover, in contrast to the traditional RLHF setting, where the reward model depends on the distribution of data that has been used to train it, a preference model remains essentially invariant to this data distribution. The main reason preference models are less sensitive to the data distribution than reward models is that a preference model takes as input the two responses to be compared, whereas a reward model implicitly compares its (single) input response against the distribution of responses it has been trained on (see Section 3.3).

Once the preference model is established, our primary objective is to compute the corresponding Nash equilibrium. This equilibrium represents a policy that consistently produces responses preferred, as determined by the preference model, over responses generated by any alternative policy. The three key properties of our approach, namely, the ability of the preference model to capture a wide spectrum of human preferences (contained in the data), its lower sensitivity to the data distribution, and the potential for the Nash equilibrium to provide a better alignment with the diversity of human preferences, mark a substantial departure from the conventional RLHF framework. We discuss these properties in greater detail in Section 3.
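As an illustration of this AI-feedback initialization, the sketch below issues the 1-vs-2 comparison prompt quoted above to an LLM judge. It is a minimal sketch, not the paper's implementation: `llm_choose_1_or_2` is a hypothetical stand-in for whatever LLM API is used and is assumed to return the string "1" or "2".

```python
# Minimal sketch of obtaining an AI-feedback preference label via a 1-vs-2
# comparison prompt. `llm_choose_1_or_2` is a hypothetical LLM call (assumed
# to return "1" or "2"); it is not an API from the paper.

def ai_feedback_preference(x: str, y: str, y_prime: str, llm_choose_1_or_2) -> float:
    """Return a {0, 1} AI-feedback estimate of P(y > y' | x)."""
    prompt = (f"Given {x}, which answer do you prefer, "
              f"answer 1: {y} or answer 2: {y_prime}?")
    return 1.0 if llm_choose_1_or_2(prompt) == "1" else 0.0
```

Labels obtained this way can be used to initialize the preference model before refining it on human preference data by supervised learning, as described above.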
**Practical algorithms:** To approximate the Nash equilibrium of the two-player game in which actions are responses and payoffs are specified by the preference model, we employ a deep reinforcement learning algorithm. Given a prompt $x$, we generate two responses, denoted $y$ and $y'$. The first response, $y$, is generated under the current policy $\pi_\theta$ that we are in the process of optimizing. In contrast, the second response, $y'$, is produced by an alternative policy $\pi'$, which we implement in two different versions: Nash-MD and Nash-EMA (further elaboration on these versions is provided below). Nash-MD defines the alternative policy $\pi'$ as a geometric mixture between the initial and the current policy (motivated by mirror descent), whereas Nash-EMA implements a first-order approximation of an exponential moving average (EMA) mixture of past policies. Then, the preference model computes $P(y \succ y' \mid x)$, and this preference signal serves as a reward for optimizing our policy $\pi_\theta$ using a (regularized) policy gradient algorithm, as outlined in (Geist et al., 2019).

**Our contributions:** Our contributions in this work can be summarized as follows. First, we introduce the concept of Nash learning from human feedback (NLHF), framing it as the task of computing the Nash equilibrium of a general preference model. We proceed by introducing and defining a regularized variant of the preference model, and we establish the existence and uniqueness of the corresponding Nash equilibrium in this context. Then, we consider the case of tabular policy representations and introduce a novel algorithm named Nash-MD. This algorithm, founded on the principles of mirror descent (MD), possesses two important properties. First, it converges to the Nash equilibrium in the last iterate. This differs from conventional regret-minimization-based algorithms, where it is typically the mixture of past policies that converges, necessitating the storage of past policies. Second, Nash-MD learns by competing against alternative policies $\pi'$ that represent a (geometric) mixture between the current policy $\pi_\theta$ and the initial policy. Importantly, this can be accomplished without the need to retain intermediate policies, a feature of particular significance in the context of LLMs with their substantial memory requirements. Additionally, we introduce Nash-EMA, a variation inspired by fictitious play, which uses an exponential moving average of past policy parameters. We introduce policy-gradient algorithms for deep-learning architectures, Nash-MD-PG and Nash-EMA-PG, inspired by the tabular algorithms. We present the results of numerical experiments conducted on a text summarization task utilizing the TL;DR dataset (Völske et al., 2017). In these experiments, we employ the NLHF approach to train several models. To assess their performance, we conduct a pairwise evaluation of the models (using the PaLM 2 Large LLM) and include a comparison to an RLHF baseline. We conclude that NLHF opens up new promising directions for aligning LLMs with human preferences.
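To make the pipeline described under "Practical algorithms" concrete, here is a schematic sketch of a single NLHF training step. All names (`policy`, `alternative`, `preference_model`, `regularized_pg_update`) are hypothetical placeholders for the components described in the text, not code released with the paper.

```python
# Schematic NLHF training step (a sketch, not the paper's implementation).

def nlhf_training_step(x, policy, alternative, reference, preference_model,
                       regularized_pg_update, tau):
    y = policy.sample(x)               # y  ~ pi_theta(.|x), the policy being optimized
    y_alt = alternative.sample(x)      # y' ~ pi'(.|x): Nash-MD or Nash-EMA alternative
    p = preference_model(x, y, y_alt)  # preference signal P(y > y' | x), used as reward
    # Regularized policy-gradient update: reward p with a KL penalty (strength tau)
    # towards the reference policy, as outlined in Geist et al. (2019).
    regularized_pg_update(policy, x, y, reward=p, kl_coeff=tau, reference=reference)
```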
## 2. Prior work

**Preference-based RL.** Our contribution falls into the broader area of preference-based RL, where we directly learn from pairwise human preferences instead of a hand-designed or learned scalar reward (see, e.g., the survey by Wirth et al., 2017). The canonical form of RLHF was proposed in (Christiano et al., 2017) and popularized by (OpenAI, 2022), in which one learns a scalar reward model from the preference feedback, followed by policy optimization against the reward model. However, an advantage of directly optimizing for preferences rather than a learnt scalar reward function is the potential to avoid reward hacking (Amodei et al., 2016), where agents find a way to maximize a reward without performing what was intended. Furthermore, in domains such as medical applications, it may not only be challenging but also undesirable to provide a single scalar reward. In general, the preference feedback can be provided in different ways, e.g., at the level of states, actions, or full trajectories. In this work, we focus on trajectory feedback, where experts provide feedback by selecting the preferred one of two proposed trajectories. Such a simple form of pairwise feedback is the easiest to implement and has seen applications in summarization (Stiennon et al., 2020), question-answering (Nakano et al., 2021; Menick et al., 2022) and general language-based assistants (Ouyang et al., 2022; Glaese et al., 2022; Bai et al., 2022). Ranking-based algorithms in the RLHF literature include the RAFT (Dong et al., 2023) and ReST (Gulcehre et al., 2023) approaches. More complicated forms of feedback have been studied in the theoretical literature, such as the work of Efroni et al. (2021).

**Theoretical guarantees for learning from preferences.** Learning policies from preference feedback over histories was studied by Akrour et al. (2011), who learned a preference model for histories, and by Cheng et al. (2011), who trained a model ranking actions for a state. Busa-Fekete et al. (2013; 2014) approached this setting by comparing and ranking policies, and Wilson et al. (2012) by learning a distribution over policy space. Preference-based RL is also explored in dueling RL (Novoseller et al., 2020; Pacchiano et al., 2023), which generalizes the well-studied dueling bandits problem. In particular, Pacchiano et al. (2023) assume a Bradley-Terry model, which they estimate using maximum likelihood in the tabular setting. Our work is also related to the results of Wang et al. (2023), who consider learning Nash equilibria of the human preference model and reduce the problem to finding Nash equilibria for a special class of factored two-player Markov games under a restricted set of policies. The interaction of Nash equilibria and LLMs has also been explored by Jacob et al. (2023a;b). Moreover, Chen et al. (2022) gave the first results for function approximation in preference-based RL, however with a computationally inefficient algorithm.

**Optimization without a reward function.** A number of recent works have attempted to optimize for preference feedback without learning a reward function. For example, Direct Preference Optimization (DPO; Rafailov et al., 2023) optimizes the policy through a loss function defined via the Bradley-Terry reward model. SLiC (Zhao et al., 2023) modifies the classical RLHF training loss by calibrating a ranking loss that contrasts a positive and a negative sequence. This resembles directly optimizing for the pairwise preference, albeit without convergence guarantees. Identity Preference Optimization (IPO; Azar et al., 2023) and Generalized Preference Optimization (GPO; Tang et al., 2024) directly optimize the pairwise human preference with offline preference data by optimizing against a fixed opponent. Recently, it has been observed that the online version of IPO (Calandriello et al., 2024) approximates the Nash equilibrium of a preference model using a particular case of Nash-MD (called Self-Play).
## 3. The preference model and its Nash equilibrium

We now introduce the core conceptual ideas behind our approach to learning from preference feedback. We consider a preference model in a contextual bandit setting. Given a context (or prompt) $x$ in the context space $\mathcal{X}$ and two actions (or responses/choices) $y$ and $y'$ in the action space $\mathcal{Y}$, the preference of $y$ over $y'$ given $x$ is a number between 0 and 1, written $P(y \succ y' \mid x)$. We assume that the preference model is antisymmetric: $P(y \succ y' \mid x) = 1 - P(y' \succ y \mid x)$. In the context of LLMs, we can think of the preference $P(y \succ y' \mid x)$ as the probability that a randomly chosen human prefers response $y$ over response $y'$ given the context $x$. We define the preference between two distributions conditioned on a context $x$:
$$P(\pi \succ \pi' \mid x) \stackrel{\text{def}}{=} \mathbb{E}_{y \sim \pi(\cdot|x),\, y' \sim \pi'(\cdot|x)}\left[P(y \succ y' \mid x)\right],$$
and the preference of an action over a distribution:
$$P(y \succ \pi' \mid x) \stackrel{\text{def}}{=} \mathbb{E}_{y' \sim \pi'(\cdot|x)}\left[P(y \succ y' \mid x)\right].$$
Finally, given a distribution $\rho$ over contexts, we define the preference between two policies:
$$P(\pi \succ \pi') \stackrel{\text{def}}{=} \mathbb{E}_{x \sim \rho}\,\mathbb{E}_{y \sim \pi(\cdot|x),\, y' \sim \pi'(\cdot|x)}\left[P(y \succ y' \mid x)\right].$$
We say that a policy $\pi$ is preferred over (or simply wins against) another policy $\pi'$ if $P(\pi \succ \pi') \geq 1/2$. In the remainder of the paper, we assume (without loss of generality) that $\rho$ assigns positive probability to every context.

In this paper we consider the objective of finding a policy $\pi^*$ which is preferred over any alternative policy:
$$\pi^* \stackrel{\text{def}}{=} \arg\max_{\pi} \min_{\pi'} P(\pi \succ \pi'). \tag{1}$$
This objective implicitly defines a two-player game, in which the players select policies $\pi$ and $\pi'$, the first player receiving a payoff of $P(\pi \succ \pi')$ and the second player receiving $P(\pi' \succ \pi) = 1 - P(\pi \succ \pi')$. This is therefore a two-player, antisymmetric, constant-sum game, and it follows from the minimax theorem (von Neumann, 1928) that when both players use a policy $\pi^*$ solving Equation (1), this is a Nash equilibrium of the game. This is the fundamental solution concept we study in this paper.

The objective introduced in Equation (1) has two central differences relative to the majority of existing work on RLHF. First, the objective is expressed directly in terms of the preferences themselves, not in terms of a reward function learnt from preferences, and also not in terms of a non-linear transformation of the preferences. Second, our solution concept relies on the notion of Nash equilibrium, rather than on optimization against a fixed behavior. We discuss the impact of both of these choices through several examples below.
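For intuition, the quantities above can be computed directly in the tabular, single-context case. The sketch below (our notation, not the paper's code) assumes a preference matrix `P[i, j] = P(y_i ≻ y_j)` and represents policies as probability vectors; the `exploitability` function measures the gap to the objective of Equation (1).

```python
# Tabular sketch of the preference between policies and of the gap to the
# objective of Equation (1), for a single fixed context.
import numpy as np

def policy_preference(P: np.ndarray, pi: np.ndarray, pi_prime: np.ndarray) -> float:
    """P(pi > pi') = E_{y ~ pi, y' ~ pi'}[P(y > y')]."""
    return float(pi @ P @ pi_prime)

def exploitability(P: np.ndarray, pi: np.ndarray) -> float:
    """1/2 minus the worst-case preference of pi over any opponent.
    Non-negative, and zero exactly when pi solves max_pi min_pi' P(pi > pi')."""
    return 0.5 - float(np.min(pi @ P))   # the minimum is attained at a pure opponent
```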
### 3.1. Limited expressivity of reward models

A learnt preference model possesses the capacity to encompass any property of human preferences, to the extent that it is contained in the dataset used to train the model. For example, it can model non-transitive preferences (see the examples in Appendix C), a characteristic not attainable by reward models, since they inherently assign a single score to each response. Whether humans exhibit non-transitive preferences has been a subject of longstanding research (see, for instance, Tversky, 1969; Klimenko, 2015). But even if single individuals are transitive, it is nevertheless possible that the resulting expected preference model is not (see Appendix C.2 for an example). Additionally, non-transitivity is not the only limitation of Bradley-Terry-based reward models; see, e.g., Example 3 in (Bertrand et al., 2023), where the Elo score fails to capture the correct preference ordering between policies, even in transitive situations. In fact, we show in Appendix A that even when the preference model is perfectly captured by the Bradley-Terry model, optimization of the reward/Elo score may still disagree with any reasonable notion of preference optimization. Therefore, we can safely argue that preference models offer a more flexible and nuanced framework for modeling preferences than BT-based reward models.

### 3.2. Alignment with diversity of human preferences

Here, we illustrate that in some situations the solution offered by the Nash equilibrium of the preference model (which we refer to as the NLHF solution) is more aligned with the diversity of human preferences than the optimum of the reward model (which we refer to as the RLHF solution). Consider a situation with 3 different actions $(y_1, y_2, y_3)$ and a population composed of 3 types of humans with respective preferences $P_1, P_2, P_3$, defined in the following way: $P_i(y_1 \succ y_2) = P_i(y_1 \succ y_3) = P_i(y_2 \succ y_3) = 1/2$ for $1 \leq i \leq 3$, except for the following cases: $P_1(y_2 \succ y_1) = 1$ (thus $P_1(y_1 \succ y_2) = 0$), $P_2(y_1 \succ y_3) = 1$ (thus $P_2(y_3 \succ y_1) = 0$), and $P_3(y_3 \succ y_2) = 1$ (thus $P_3(y_2 \succ y_3) = 0$). Now, let us assume these 3 types form a near-uniform distribution over humans, for example $P(\text{Type 1}) = 1/3 - \epsilon$ and $P(\text{Type 2}) = P(\text{Type 3}) = 1/3 + \epsilon/2$. The corresponding population preference is thus $P_\epsilon = (1/3 - \epsilon)P_1 + (1/3 + \epsilon/2)(P_2 + P_3)$.

In the case $\epsilon > 0$ (so Type 1 is slightly less frequent than the other types), a reward model will assign a slightly better reward (assuming a Bradley-Terry model) to action $y_1$, thus optimizing the expected reward (the RLHF solution) will produce a deterministic policy choosing exclusively $y_1$. However, here we are in a situation where the preferences are not uniformly aligned across humans (Moskovitz et al., 2023). In the case of uniform sampling of humans (i.e., $\epsilon = 0$), the Nash equilibrium of $P_{\epsilon=0}$ is a uniform mixture over the 3 actions. More generally, the preference model $P_\epsilon$ corresponding to any $\epsilon$ is given by: $P_\epsilon(y_2 \succ y_1) = 2/3 - \epsilon/2$, $P_\epsilon(y_3 \succ y_1) = 1/3 - \epsilon/4$, $P_\epsilon(y_3 \succ y_2) = 2/3 + \epsilon/4$, $P_\epsilon(y_i \succ y_i) = 1/2$, and $P_\epsilon(y_i \succ y_j) = 1 - P_\epsilon(y_j \succ y_i)$ for $1 \leq i < j \leq 3$. By a simple calculation, we deduce that for any $|\epsilon| \leq 1/3$, the Nash equilibrium of this preference model consists in selecting $y_1$ and $y_2$ with probability $1/3 + \epsilon/2$ each, and $y_3$ with probability $1/3 - \epsilon$.

We believe that in this situation the Nash solution of the preference model (i.e., the NLHF solution), assigning close-to-uniform probability to these 3 actions (one being preferred by each category of humans), is more aligned with the diversity of human preferences than the optimum of the reward model (i.e., the RLHF solution), which deterministically selects a single action. Also, the Nash equilibrium is less sensitive to the preference distribution, since the corresponding equilibrium is smooth with respect to changes in the distribution over types of humans (i.e., when $\epsilon$ varies near 0), whereas the RLHF solution switches from selecting exclusively $y_1$ when $\epsilon > 0$ to selecting exclusively $y_2$ when $\epsilon < 0$.
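A quick numerical check of this example (a sketch; `eps` can be any value with $|\epsilon| \leq 1/3$) confirms the claimed equilibrium: against the stated mixture, every pure opponent action obtains a preference of exactly 1/2.

```python
# Numerical check of the 3-action example above.
import numpy as np

def population_preference(eps: float) -> np.ndarray:
    """P_eps[i, j] = P_eps(y_{i+1} > y_{j+1}) for the 3-type population."""
    P1 = np.full((3, 3), 0.5); P1[1, 0], P1[0, 1] = 1.0, 0.0   # Type 1: y2 beats y1
    P2 = np.full((3, 3), 0.5); P2[0, 2], P2[2, 0] = 1.0, 0.0   # Type 2: y1 beats y3
    P3 = np.full((3, 3), 0.5); P3[2, 1], P3[1, 2] = 1.0, 0.0   # Type 3: y3 beats y2
    return (1/3 - eps) * P1 + (1/3 + eps/2) * (P2 + P3)

eps = 0.1
P = population_preference(eps)
pi_star = np.array([1/3 + eps/2, 1/3 + eps/2, 1/3 - eps])
print(pi_star @ P)   # preference of pi_star over each pure action: [0.5, 0.5, 0.5]
```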
### 3.3. Sensitivity to the data distribution

Another difference between reward and preference models is that a reward model depends on the distribution it has been trained on, whereas a preference model essentially does not. Indeed, when we learn a reward model we are solving the following optimization problem (in the limit of an infinite amount of data):
$$r_\pi \stackrel{\text{def}}{=} \arg\max_{r(\cdot,\cdot)} \mathbb{E}_{x \sim \rho,\; y, y' \sim \pi(\cdot|x),\; Z \sim \nu}\left[\log \sigma\!\left(r(x, y^Z_w) - r(x, y^Z_l)\right)\right],$$
where $y^Z_w$ and $y^Z_l$ are respectively the preferred and less preferred response (among $y$ and $y'$) according to a randomly sampled human $Z \sim \nu$, given $x$. The (optimal) solution $r_\pi$ to this problem depends on the policy $\pi$ that generated the data. Indeed, as mentioned in the introduction (see Section 1), the reward model assigns an Elo score to each individual response, which is defined in terms of a comparison against other responses; thus, it depends on the overall distribution over responses it has been trained on. See Theorem 2 (Appendix B) for a precise statement.

On the contrary, since the preference model takes two responses as input, its output does not depend directly on the distribution these responses have been sampled from. The preference model is simply learnt by supervised learning, where for each $x, y, y'$, the preference model $P(y \succ y' \mid x)$ is regressed onto the human preference $\mathbb{I}\{y \text{ is preferred to } y' \text{ given } x\}$ using a cross-entropy loss:
$$P^* \stackrel{\text{def}}{=} \arg\max_{P(\cdot \succ \cdot \mid \cdot)} \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot|x),\; y' \sim \pi'(\cdot|x),\; Z \sim \nu}\left[\log P(y^Z_w \succ y^Z_l \mid x)\right],$$
where $y^Z_w$ (resp. $y^Z_l$) is the preferred (resp. less preferred) response (among $y$ and $y'$) given $x$ according to a random human $Z \sim \nu$. Notice that the optimal solution to this optimization problem is just the probability that a random human prefers $y$ to $y'$ given $x$: for any $x \in \operatorname{supp}(\rho)$, $y \in \operatorname{supp}(\pi(\cdot|x))$, $y' \in \operatorname{supp}(\pi'(\cdot|x))$,
$$P^*(y \succ y' \mid x) = \mathbb{P}_{Z \sim \nu}\left(\text{Human } Z \text{ prefers } y \text{ to } y' \text{ given } x\right).$$
This quantity is a function of $x, y, y'$ only and does not depend on how $x$, $y$ and $y'$ have been chosen; thus it is independent of $\rho$, $\pi$ or $\pi'$. So in this ideal case of perfect representations, a preference model is insensitive to the data generation distribution $(\rho, \pi, \pi')$, whereas a reward model depends on it. Of course, when using approximate models or a finite amount of data, the learned preference model may still depend on the data distribution, as the quality of the approximation depends on the local quantity of collected data. Still, our general expectation is that the preference model is significantly less reliant on the specific policy that generated the data than the reward model.

This observation becomes even more important in scenarios where multiple iterations of RLHF/NLHF occur, comprising data collection, constructing a reward/preference model, policy optimization based on the model, and collecting new data following the updated policy. In the case of RLHF, the reward model from a prior iteration diverges from the next iteration due to shifts in data distributions, necessitating complete relearning. On the contrary, in the NLHF approach, the preference model can be preserved and further enriched through the introduction of novel data, thereby offering a more seamless and efficient adaptation process.
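Concretely, the preference-model objective above reduces to a binary cross-entropy on pairwise labels. The sketch below assumes a hypothetical pairwise scoring function `model_logit(x, y, y_prime)` whose sigmoid is the model's estimate of $P(y \succ y' \mid x)$; it illustrates the loss only and is not the paper's training code.

```python
# Sketch of the supervised preference-model loss: cross-entropy between the
# model's P(y > y' | x) and the human label z = 1{y preferred to y' given x}.
import numpy as np

def sigmoid(u: float) -> float:
    return 1.0 / (1.0 + np.exp(-u))

def preference_loss(model_logit, batch) -> float:
    """batch: iterable of tuples (x, y, y_prime, z) with z in {0, 1}."""
    losses = []
    for x, y, y_prime, z in batch:
        p = sigmoid(model_logit(x, y, y_prime))   # model estimate of P(y > y' | x)
        losses.append(-(z * np.log(p) + (1 - z) * np.log(1 - p)))
    return float(np.mean(losses))
```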
## 4. Regularized preference model

We now consider a regularized version of the preference model. This is motivated by situations where the preference model is more accurately estimated along responses obtained by following a given reference policy. This could be the policy responsible for generating the data used to train the preference model, or situations where it is imperative to ensure that our solution remains close to a known safe policy. In such cases, we incorporate a penalty mechanism into our preference model, employing KL-regularization to quantify the divergence between the policy under consideration and a designated reference policy denoted $\mu$; see (Jaques et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022) for further details on the role of KL-regularization in RLHF. The regularized preference between actions $y \sim \pi(\cdot|x)$, $y' \sim \pi'(\cdot|x)$ is defined as
$$P^{\pi,\pi'}_\tau(y \succ y' \mid x) \stackrel{\text{def}}{=} P(y \succ y' \mid x) - \tau \log\frac{\pi(y|x)}{\mu(y|x)} + \tau \log\frac{\pi'(y'|x)}{\mu(y'|x)},$$
and we define accordingly the KL-regularized preference between policies:
$$P_\tau(\pi \succ \pi') \stackrel{\text{def}}{=} \mathbb{E}_{x \sim \rho,\, y \sim \pi(\cdot|x),\, y' \sim \pi'(\cdot|x)}\left[P^{\pi,\pi'}_\tau(y \succ y' \mid x)\right] = P(\pi \succ \pi') - \tau\,\mathrm{KL}_\rho(\pi, \mu) + \tau\,\mathrm{KL}_\rho(\pi', \mu), \tag{2}$$
where $\mathrm{KL}_\rho(\pi, \mu) \stackrel{\text{def}}{=} \mathbb{E}_{x \sim \rho}[\mathrm{KL}(\pi(\cdot|x), \mu(\cdot|x))]$. We now state the existence and uniqueness of the Nash equilibrium of this regularized preference model (see the proof in Appendix E):

**Proposition 1** (Nash equilibrium). *There exists a unique Nash equilibrium of the regularized preference model $P_\tau$.*
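In the tabular, context-free case, the regularized preference of Equation (2) can be evaluated directly; a minimal sketch, assuming a preference matrix `P[i, j] = P(y_i ≻ y_j)` and policies with the same support as $\mu$:

```python
# Tabular sketch of the regularized preference P_tau of Equation (2).
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def regularized_preference(P, pi, pi_prime, mu, tau: float) -> float:
    """P_tau(pi > pi') = P(pi > pi') - tau*KL(pi, mu) + tau*KL(pi', mu)."""
    return float(pi @ P @ pi_prime) - tau * kl(pi, mu) + tau * kl(pi_prime, mu)
```

Note that the two KL terms cancel when the roles of $\pi$ and $\pi'$ are swapped, so $P_\tau(\pi \succ \pi') + P_\tau(\pi' \succ \pi) = 1$ and the game remains antisymmetric and constant-sum.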
## 5. Algorithms for approximating the Nash equilibrium

The regularized preference model $P_\tau(\pi \succ \pi')$ defines a constant-sum two-player game in which Player 1 selects $\pi$ and Player 2 selects $\pi'$. There are well-known techniques for approximating the Nash equilibrium. Some of them offer convergence on average (in the sense that it is a mixture of the sequence of policies that converges to the Nash equilibrium), whereas other methods offer convergence of the last iterate.

**Convergence on average.** Fictitious play (FP; Brown, 1951; Robinson, 1951; Heinrich et al., 2015; Fudenberg & Levine, 1998) consists in playing, at every iteration, each player's best response against the uniform mixture of the opponent's past strategies. Here we would define $\pi_{t+1} \stackrel{\text{def}}{=} \arg\max_\pi P(\pi \succ \bar\pi_t)$, where $\bar\pi_t$ is the mixture policy $\frac{1}{t}\sum_{s=1}^t \pi_s$. It is known that the mixture policy $\bar\pi_t$ converges to the Nash equilibrium in constant-sum games (see Hofbauer & Sorin, 2006, for a reference in the general concave-convex case considered here). FP has also been considered with function approximation (Heinrich & Silver, 2016).

**Online convex optimization.** In the context of solving convex-concave constant-sum games, we can rely on online convex optimization, where each player minimizes its own convex loss; see for example (Cesa-Bianchi & Lugosi, 2006; Nesterov, 2005; Hoda et al., 2010). Regret minimization has been extensively considered in games, since the average strategy of self-playing no-regret algorithms converges to a Nash equilibrium (Rakhlin & Sridharan, 2013; Kangarshahi et al., 2018). Counterfactual regret minimization (CFR) has been considered in the setting of imperfect-information games in (Zinkevich et al., 2007), showing a $O(1/\sqrt{t})$ convergence rate in terms of exploitability. Other techniques provide a faster rate of convergence, $O(1/t)$ (Daskalakis et al., 2011; Syrgkanis et al., 2015; Abernethy et al., 2018; Farina et al., 2019). These techniques have usually been studied in the discrete-time setting but have also been analyzed in continuous time (Mertikopoulos et al., 2018).

**Convergence of the last iterate.** Extragradient and optimistic mirror descent methods have been proven to converge to a Nash equilibrium (Korpelevich, 1976; Mertikopoulos et al., 2019), possibly at an exponential rate in unconstrained spaces (Mokhtari et al., 2020). The most closely related extragradient method in this domain is optimistic multiplicative-weights-update (OMWU; Daskalakis & Panageas, 2019), which provides last-iterate convergence guarantees to the Nash equilibrium. Another approach uses the Frank-Wolfe method to compute Nash equilibria in normal-form games (Gidel et al., 2016), although convergence is attained at the same rate as for fictitious play. A related algorithm introduced by Munos et al. (2020) for imperfect-information games consists in each player doing a step of mirror ascent against an improved opponent (MAIO), for which exponential convergence of the last iterate was proven (with an instance-dependent exponent). Other works include the regularized Nash dynamics (Perolat et al., 2021; 2022) under continuous-time dynamics, magnetic mirror descent (Sokota et al., 2023), which is related to online mirror descent (OMD) performed on the regularized game, and the MTPO algorithm of Shani et al. (2024) in the context of multi-turn LLMs. A thorough comparison between Nash-MD and OMD is given in the next section.

## 6. Analysis of a tabular algorithm: Nash-MD

For simplicity of notation we remove the dependence on the context $x$, so policies $\pi \in \Delta(\mathcal{Y})$ are probability distributions over $\mathcal{Y}$. We now introduce an algorithm, called Nash-MD, which is a novel variant of mirror descent (Nemirovski & Yudin, 1983; Bubeck, 2015; Lattimore & Szepesvári, 2020) that makes use of a specific regularized policy $\pi^\mu_t$, a geometric mixture between the current policy $\pi_t$ and the reference policy $\mu$. We prove the convergence (in KL distance) of the last iterate to the Nash equilibrium of $P_\tau$.

**The Nash-MD algorithm.** Define the regularized policy $\pi^\mu_t$ as a geometric mixture between the current policy $\pi_t$ and the reference policy $\mu$:
$$\pi^\mu_t(y) \stackrel{\text{def}}{=} \frac{\pi_t(y)^{1-\eta_t\tau}\,\mu(y)^{\eta_t\tau}}{\sum_{y'} \pi_t(y')^{1-\eta_t\tau}\,\mu(y')^{\eta_t\tau}}, \tag{3}$$
where $\eta_t$ is a learning rate. We define the Nash-MD algorithm as a step of mirror descent relative to the regularized policy $\pi^\mu_t$:
$$\pi_{t+1} \stackrel{\text{def}}{=} \arg\max_\pi \left[\eta_t P(\pi \succ \pi^\mu_t) - \mathrm{KL}(\pi, \pi^\mu_t)\right]. \tag{4}$$
The optimization above can also be made explicit: $\pi_{t+1}(y) \propto \pi^\mu_t(y)\exp\left(\eta_t P(y \succ \pi^\mu_t)\right)$, or equivalently
$$\log \pi_{t+1}(y) = (1-\eta_t\tau)\log \pi_t(y) + \eta_t\tau \log \mu(y) + \eta_t P(y \succ \pi^\mu_t) + c, \tag{5}$$
where $c$ is a normalization constant independent of $y$. The intuition for this algorithm is to improve the current policy $\pi_t$ in a direction that increases the preference $\pi \mapsto P(\pi \succ \pi^\mu_t)$ against the regularized policy $\pi^\mu_t$, while not deviating too much from it. We now state our main theoretical result; see Appendix D for the proof.

**Theorem 1.** *Let $\pi^*_\tau$ be the Nash equilibrium of the regularized preference model $P_\tau(\pi \succ \pi') = P(\pi \succ \pi') - \tau\,\mathrm{KL}(\pi, \mu) + \tau\,\mathrm{KL}(\pi', \mu)$. At every iteration $t$ we have*
$$\mathrm{KL}(\pi^*_\tau, \pi_{t+1}) \leq (1-\eta_t\tau)\,\mathrm{KL}(\pi^*_\tau, \pi_t) + 2\eta_t^2. \tag{6}$$
We deduce that for the choice $\eta_t = 2/(\tau(t+2))$ we have $\mathrm{KL}(\pi^*_\tau, \pi_T) \leq \frac{8}{\tau^2(T+1)}$. Thus this algorithm produces a sequence of policies $(\pi_t)_{1 \leq t \leq T}$ with last-iterate convergence (in KL-divergence) to the regularized Nash equilibrium $\pi^*_\tau$ at a speed $O(1/T)$. We now mention several important features of this algorithm, especially in the context of LLMs.

**Nash-MD does not require playing against the full mixture $\bar\pi_t$.** In order to compute $\pi_{t+1}$ we do not need to play against the mixture $\bar\pi_t = \frac{1}{t}\sum_{s=1}^t \pi_s$ of past policies (where by playing against a policy $\pi'$ we mean computing, or estimating, the preference $P(y \succ \pi')$), unlike in fictitious play. We play against a single (geometric) mixture $\pi^\mu_t$ between the current policy $\pi_t$ and the reference policy $\mu$. This is important in situations, such as LLMs, where storing and generating samples from several policies is costly.
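The tabular Nash-MD update of Equations (3)-(5) is only a few lines of code; the sketch below uses the learning-rate schedule of Theorem 1 and assumes a reference policy $\mu$ with full support.

```python
# Tabular Nash-MD sketch (Equations (3)-(5)): P[i, j] = P(y_i > y_j),
# mu is the reference policy (full support), tau the regularization strength.
import numpy as np

def nash_md(P, mu, tau, T, pi_init=None):
    pi = mu.copy() if pi_init is None else np.asarray(pi_init, dtype=float).copy()
    for t in range(T):
        eta = 2.0 / (tau * (t + 2))                    # schedule from Theorem 1
        # Geometric mixture pi_t^mu between the current policy and mu, Eq. (3).
        log_mix = (1 - eta * tau) * np.log(pi) + eta * tau * np.log(mu)
        pi_mix = np.exp(log_mix - log_mix.max()); pi_mix /= pi_mix.sum()
        # Explicit update of Eq. (5): multiplicative step in the direction of the
        # preference of each action against the regularized policy pi_t^mu.
        log_pi = np.log(pi_mix) + eta * (P @ pi_mix)   # (P @ pi_mix)[y] = P(y > pi_t^mu)
        pi = np.exp(log_pi - log_pi.max()); pi /= pi.sum()
    return pi

# Example: the cyclic preference model of Section 3.2 with eps = 0; by symmetry
# the regularized Nash equilibrium is uniform when mu is uniform, and the last
# iterate gets close to it (Theorem 1 bounds KL by 8 / (tau^2 (T + 1))).
P = np.array([[1/2, 1/3, 2/3], [2/3, 1/2, 1/3], [1/3, 2/3, 1/2]])
print(nash_md(P, mu=np.ones(3) / 3, tau=1.0, T=500,
              pi_init=np.array([0.8, 0.15, 0.05])))   # -> close to [1/3, 1/3, 1/3]
```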
**Nash-MD has a last-iterate convergence property.** The second important property of Nash-MD is that we have convergence of the last iterate (i.e., the current policy $\pi_t$ converges to $\pi^*_\tau$) and not only convergence on average (as is typically the case for fictitious play and usual regret-minimization algorithms like CFR and OMD). This feature is particularly important in the context of LLMs as well, due to the substantial memory resources that would otherwise be needed to store a mixture policy like $\bar\pi_t$.

**Comparison with online mirror descent (OMD).** In general, the analysis of constant-sum concave-convex games can be performed in the framework of online convex optimization, where the goal is to find a sequence of solutions $\pi_t$ that minimizes the sum of a sequence of convex loss functions $\pi \mapsto \ell_t(\pi)$. The OMD algorithm (using the KL as Bregman divergence) defines the sequence
$$\pi_{t+1} \stackrel{\text{def}}{=} \arg\min_\pi \left[\eta_t \nabla \ell_t(\pi_t) \cdot (\pi - \pi_t) + \mathrm{KL}(\pi, \pi_t)\right], \tag{7}$$
for which it can be shown (see Cesa-Bianchi & Lugosi, 2006) that the average cumulative regret, under an optimal choice of learning rate, can be bounded as
$$\frac{1}{T}\sum_{t=1}^T \ell_t(\pi_t) - \min_\pi \frac{1}{T}\sum_{t=1}^T \ell_t(\pi) = O\!\left(1/\sqrt{T}\right).$$
This type of upper bound on the regret can be further used to obtain convergence in constant-sum games where each player follows an OMD strategy to minimize its own convex loss. In our context, we could apply this OMD strategy to the regularized preference model $P_\tau$, and since $P_\tau$ is antisymmetric, we only need to consider the dynamics of a single player. The loss function at time $t$ is then the negative preference against the current policy of the opponent: $\ell_t(\pi) = -P_\tau(\pi \succ \pi_t)$. We deduce that $\nabla \ell_t(\pi_t) = \left[-\nabla_\pi P_\tau(\pi \succ \pi_t)\right]_{\pi=\pi_t}$, thus
$$\nabla \ell_t(\pi_t) \cdot \pi = -\sum_y \pi(y)\left[P(y \succ \pi_t) - \tau\left(\log\frac{\pi_t(y)}{\mu(y)} + 1\right)\right].$$
The OMD update rule in Equation (7) can then be rewritten as
$$\pi_{t+1} = \arg\max_\pi \left[\eta_t \sum_y \pi(y)\left(P(y \succ \pi_t) - \tau\log\frac{\pi_t(y)}{\mu(y)}\right) - \mathrm{KL}(\pi, \pi_t)\right].$$
Now, using the regularized policy $\pi^\mu_t$ introduced in Equation (3), we can rewrite this update rule as
$$\pi_{t+1} = \arg\max_\pi \left[\eta_t P(\pi \succ \pi_t) - \mathrm{KL}(\pi, \pi^\mu_t)\right]. \tag{8}$$
Comparing Equation (4) and Equation (8), we notice that both OMD and Nash-MD make use of the same KL penalty term $\mathrm{KL}(\pi, \pi^\mu_t)$. However, they differ in that OMD optimizes the preference $\pi \mapsto P(\pi \succ \pi_t)$ against the current policy $\pi_t$, whereas Nash-MD optimizes the preference $\pi \mapsto P(\pi \succ \pi^\mu_t)$ against the regularized policy $\pi^\mu_t$.

In the context of convex-concave games, the bound on the average cumulative regret translates into an upper bound on the exploitability of the game when players play their average policies, thus entailing their on-average convergence to the Nash equilibrium. However, it is known that usual regret-minimization algorithms may not possess a last-iterate convergence property, because the sequence of policies $\pi_t$ may oscillate around the Nash equilibrium (see, for example, Mertikopoulos et al., 2018). Nevertheless, last-iterate convergence has been obtained for variants of OMD, such as extragradient and optimistic versions, see e.g. (Rakhlin & Sridharan, 2013; Daskalakis & Panageas, 2019; Mertikopoulos et al., 2019; Munos et al., 2020; Mokhtari et al., 2020). In the context of LLMs, the MTPO algorithm of Shani et al. (2024), defined by the same update rule (8), is shown to converge, in the last iterate, to the regularized Nash equilibrium, but with a speed that depends on the inverse of the minimum non-zero probability of the reference policy, $\mu_{\min} = \min_{y:\mu(y)>0} \mu(y)$. Also, the online version of IPO (Calandriello et al., 2024) offers a deep-learning version of OMD similar to the update rule defined by Equation (8).
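For comparison, the tabular OMD step of Equation (8) differs from the Nash-MD sketch above only in the policy against which the preference term is evaluated; a sketch under the same assumptions:

```python
# Tabular OMD step (Equation (8)), to contrast with Nash-MD: the KL penalty is
# the same KL(pi, pi_t^mu), but the preference is taken against the current
# policy pi_t rather than the regularized policy pi_t^mu.
import numpy as np

def omd_step(P: np.ndarray, pi: np.ndarray, mu: np.ndarray,
             tau: float, eta: float) -> np.ndarray:
    log_mix = (1 - eta * tau) * np.log(pi) + eta * tau * np.log(mu)  # log pi_t^mu + const
    log_new = log_mix + eta * (P @ pi)       # preference against pi_t, not pi_t^mu
    new = np.exp(log_new - log_new.max())
    return new / new.sum()
```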
To the best of our knowledge, Nash-MD has not been introduced before, despite its simplicity. Nash-MD enjoys a last-iterate convergence property with a KL-divergence to the Nash equilibrium decaying as $O(1/T)$ (with a constant in the big-O notation independent of $\mu_{\min}$). We believe the reason this simple modification of OMD possesses these nice properties is the special structure of the regularized preference function considered here, which is the sum of a bilinear function (in policy space) and a KL-penalty term.

**Comparison with Ruppert-Polyak averaging.** The weighted averaging of log-probabilities in Equation (5), along with the updates in OMD and other algorithms for regret minimization and learning in games (Cesa-Bianchi & Lugosi, 2006), also bears a relation to Ruppert-Polyak averaging (Ruppert, 1988; Polyak, 1990; Polyak & Juditsky, 1992), a general technique in stochastic approximation in which iterates are averaged to accelerate convergence. However, the specific form of averaging and its use within Nash-MD (over log-probabilities, and for use in opponent policies) is essential for shaping the dynamics of the algorithm and establishing convergence, not just a method for acceleration.

**The contextual bandit setting.** All the results mentioned in this section are for the context-independent case, where policies and preferences do not depend on the context $x$. In the case of LLMs the context is the prompt $x$, and responses $y$ and $y'$ are generated conditioned on $x$. However, the theoretical results do not change. Indeed, we would define the Nash-MD algorithm in the contextual bandit case as follows: for every $x \in \operatorname{supp}(\rho)$,
$$\pi_{t+1}(\cdot|x) \stackrel{\text{def}}{=} \arg\max_{\pi(\cdot)} \left[\eta_t P(\pi(\cdot) \succ \pi^\mu_t(\cdot|x) \mid x) - \mathrm{KL}(\pi(\cdot), \pi^\mu_t(\cdot|x))\right],$$
where $\pi^\mu_t(y|x) \propto \pi_t(y|x)^{1-\eta_t\tau}\mu(y|x)^{\eta_t\tau}$. We prove the convergence of this algorithm in exactly the same way as in Theorem 1, by showing that at every iteration $t$ we have $\mathrm{KL}(\pi^*_\tau, \pi_{t+1}) \leq (1-\eta_t\tau)\,\mathrm{KL}(\pi^*_\tau, \pi_t) + 2\eta_t^2$, where $\mathrm{KL}(\pi, \pi') = \mathbb{E}_{x \sim \rho}[\mathrm{KL}(\pi(\cdot|x), \pi'(\cdot|x))]$.

## 7. Deep learning implementation of NLHF

We build upon the insights from Nash-MD and describe gradient-based algorithms, Nash-MD-PG and Nash-EMA-PG, for deep-learning architectures designed for the computation of the Nash equilibrium of a preference model, with a specific focus on their applicability in the context of LLMs. The general form of the policy gradient is
$$\mathbb{E}\left[\nabla_\theta \log \pi_\theta(y|x)\left(P(y \succ y' \mid x) - 1/2 - \tau \log\frac{\pi_\theta(y|x)}{\mu(y|x)}\right)\right],$$
where the prompt $x \sim \rho$, the first response $y \sim \pi_\theta(\cdot|x)$, and the second response $y' \sim \pi'(\cdot|x)$, where the alternative policy $\pi'$ is either the (geometric) mixture policy $\pi'(y|x) \propto (\pi_\theta(y|x))^{1-\beta}(\mu(y|x))^\beta$ between $\pi_\theta$ and $\mu$ (for some mixture parameter $\beta \in [0,1]$), in the case of the Nash-MD-PG algorithm, or the policy whose parameters are an exponential moving average of past parameters, in the case of the Nash-EMA-PG algorithm. Notice that we have subtracted the baseline $1/2 = P(y \succ y \mid x)$ from the preference $P(y \succ y' \mid x)$ (which does not change the expectation of the gradient) as a variance reduction technique that does not require learning a value function. All the details of these algorithms are given in Appendix F.
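As an illustration only, the gradient above can be written down for a softmax policy over a small action set and a single fixed prompt. The one-sample estimator below is a sketch of a Nash-MD-PG update under these simplifying assumptions; it is not the paper's LLM implementation.

```python
# One-sample sketch of the Nash-MD-PG gradient for a tabular softmax policy
# pi_theta(y) = softmax(theta)[y] and a single fixed prompt.
import numpy as np

rng = np.random.default_rng(0)

def nash_md_pg_grad(theta, mu, P, tau, beta):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()         # pi_theta
    mix = pi ** (1 - beta) * mu ** beta; mix /= mix.sum()     # geometric mixture pi'
    y = rng.choice(len(pi), p=pi)                             # y  ~ pi_theta
    y_alt = rng.choice(len(mix), p=mix)                       # y' ~ pi'
    # Preference minus the 1/2 baseline, with the KL-regularization term.
    advantage = P[y, y_alt] - 0.5 - tau * np.log(pi[y] / mu[y])
    grad_log_pi = -pi.copy(); grad_log_pi[y] += 1.0           # d/d theta of log pi_theta(y)
    return advantage * grad_log_pi                            # ascent-direction estimate
```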
## 8. Experiments on a text summarization task

In Appendix G we report experiments on a text summarization task and compare several algorithms for NLHF (Self-Play, Best-Response against $\mu$, Nash-MD-PG and Nash-EMA-PG) as well as an RLHF baseline. We performed a pairwise evaluation of all the models by querying a very large LLM (PaLM 2 Large) (Anil et al., 2023) to obtain a preference signal, reported in Table 1. The models compared are: SFT (Supervised Fine-Tuned, from which all other models are initialized and towards which they are regularized, i.e., this is also $\mu$), RLHF, SP (Self-Play, equivalent to Nash-MD-PG with $\beta = 0$), MD1-MD6 (Nash-MD-PG with $\beta \in \{0.125, 0.25, 0.375, 0.5, 0.625, 0.75\}$), BR (Best-Response against SFT, equivalent to Nash-MD-PG with $\beta = 1$), EMA1-EMA2 (last iterate of Nash-EMA-PG with $\beta \in \{0.999, 0.9995\}$), and EMA1*-EMA2* (average policy of Nash-EMA-PG). See the full description in Appendix G.

Table 1. PaLM 2 preference $P(\pi_c \succ \pi_r)$ of the column policy $\pi_c$ against the row policy $\pi_r$.

| $P$ | SFT | RLHF | SP | MD1 | MD2 | MD3 | MD4 | MD5 | MD6 | BR | EMA1 | EMA2 | EMA1* | EMA2* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SFT | 0.500 | 0.990 | 0.983 | 0.982 | 0.989 | 0.987 | 0.985 | 0.982 | 0.965 | 0.943 | 0.970 | 0.961 | 0.977 | 0.980 |
| RLHF | 0.010 | 0.500 | 0.489 | 0.598 | 0.519 | 0.561 | 0.501 | 0.436 | 0.284 | 0.148 | 0.468 | 0.320 | 0.477 | 0.510 |
| SP | 0.017 | 0.511 | 0.500 | 0.592 | 0.504 | 0.545 | 0.499 | 0.451 | 0.310 | 0.211 | 0.445 | 0.362 | 0.464 | 0.488 |
| MD1 | 0.018 | 0.402 | 0.408 | 0.500 | 0.425 | 0.470 | 0.369 | 0.362 | 0.238 | 0.163 | 0.391 | 0.270 | 0.400 | 0.447 |
| MD2 | 0.011 | 0.481 | 0.496 | 0.575 | 0.500 | 0.513 | 0.491 | 0.434 | 0.298 | 0.196 | 0.460 | 0.351 | 0.430 | 0.496 |
| MD3 | 0.013 | 0.439 | 0.455 | 0.530 | 0.487 | 0.500 | 0.484 | 0.408 | 0.273 | 0.187 | 0.429 | 0.323 | 0.413 | 0.472 |
| MD4 | 0.015 | 0.499 | 0.501 | 0.631 | 0.509 | 0.516 | 0.500 | 0.428 | 0.265 | 0.161 | 0.468 | 0.358 | 0.437 | 0.503 |
| MD5 | 0.018 | 0.564 | 0.549 | 0.638 | 0.566 | 0.592 | 0.572 | 0.500 | 0.329 | 0.210 | 0.532 | 0.389 | 0.518 | 0.539 |
| MD6 | 0.035 | 0.716 | 0.690 | 0.762 | 0.702 | 0.727 | 0.735 | 0.671 | 0.500 | 0.342 | 0.652 | 0.548 | 0.651 | 0.691 |
| BR | 0.057 | 0.852 | 0.789 | 0.837 | 0.804 | 0.813 | 0.839 | 0.790 | 0.658 | 0.500 | 0.743 | 0.640 | 0.752 | 0.774 |
| EMA1 | 0.030 | 0.532 | 0.555 | 0.609 | 0.540 | 0.571 | 0.532 | 0.468 | 0.348 | 0.257 | 0.500 | 0.381 | 0.480 | 0.556 |
| EMA2 | 0.039 | 0.680 | 0.638 | 0.730 | 0.649 | 0.677 | 0.642 | 0.611 | 0.452 | 0.360 | 0.619 | 0.500 | 0.585 | 0.659 |
| EMA1* | 0.023 | 0.523 | 0.536 | 0.600 | 0.570 | 0.587 | 0.563 | 0.482 | 0.349 | 0.248 | 0.520 | 0.415 | 0.500 | 0.555 |
| EMA2* | 0.020 | 0.490 | 0.512 | 0.553 | 0.504 | 0.528 | 0.497 | 0.461 | 0.309 | 0.226 | 0.444 | 0.341 | 0.445 | 0.500 |

The Nash-MD-PG models (especially for $\beta \in [0.125, 0.375]$) emerge as the best-performing method, surpassing the other models in this pairwise comparison. The choice of the mixture parameter $\beta$ in Nash-MD-PG entails an interesting trade-off. A parameter value $\beta = 0$ corresponds to Self-Play, while $\beta = 1$ represents Best-Response against the initial policy (SFT). Notably, intermediate values within the range of 0.125 to 0.375 consistently outperform both Self-Play and Best-Response, highlighting the advantages of self-improvement when playing against a mixture policy (between self and a past version of self) as opposed to playing against a pure policy (either self or fixed). Note that it is difficult to establish a fair comparison between the NLHF and RLHF approaches, since they rely on different models: a preference model for NLHF versus a reward model for RLHF, and the quality of the learnt models should be part of the picture in a full comparison between the two approaches. Thus the goal of these experiments is not to show the superiority of one method over another (this would also require a more intensive and larger-scale empirical evaluation) but rather to illustrate how the proposed NLHF approach, and in particular the Nash-MD algorithm, can be implemented in a practical LLM setting.
This is also the reason we have not tried to over-optimize the hyper-parameters (such as the learning rate, the number of learning steps, etc.) of the different methods (we used the same parameters for all NLHF algorithms), which could explain some differences observed in the pairwise preference table compared with the results reported in (Calandriello et al., 2024), where each algorithm is optimized with a different set of hyper-parameters.

## 9. Conclusion and future work

NLHF emerges as an interesting and promising alternative to RLHF, offering a fresh perspective on aligning models with human preferences. Given human pairwise preference data, learning a preference model is a more intuitive and natural approach than learning a reward model. It involves simpler techniques, such as supervised learning, and does not require specific assumptions, such as Bradley-Terry. Once a preference model is established, the concept of the Nash equilibrium naturally arises as a compelling solution concept. For this new NLHF framework we have introduced Nash-MD, an algorithm that optimizes policies by following a self-improvement technique in which the current model improves itself by playing (i.e., by generating and comparing responses) against a (geometric) mixture of the current model and a past model. The parameter $\beta$ of the mixture ranges from $\beta = 0$ (which corresponds to playing against itself) to $\beta = 1$ (playing against a fixed policy) and may take any value in between. In the case of tabular policy representations we have established its last-iterate convergence to the Nash equilibrium of the regularized preference model. We have also introduced and implemented deep-learning versions, Nash-MD-PG and Nash-EMA-PG, inspired by Nash-MD, and described how these ideas can be applied to LLMs by reporting experimental results on a text summarization task.

Future research directions include the exploration of various mixtures between the current policy and past checkpoints, extending the concept initially introduced by Nash-MD. Another immediate direction would be to incorporate a decaying mixing coefficient $\beta \to 0$ into the deep Nash-MD variants, to align more closely with the theoretical considerations. In conclusion, NLHF offers a compelling avenue for preference learning and policy optimization for aligning models with human preferences. As an example of a possible NLHF implementation, we have introduced Nash-MD as an algorithmic solution and described some of its theoretical properties as well as deep-learning adaptations. Further research in this direction, including the use of different mixture strategies, holds significant promise for advancing the field of aligning LLMs with human preferences.

## Impact statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

## Acknowledgements

We would like to thank the individuals who designed and built the RL training infrastructure used in this paper: Eugene Tarassov, Léonard Hussenot, Johan Ferret, Robert Dadashi, Geoffrey Cideron, Alexis Jacq, Sabela Ramos, Piotr Stanczyk, Danila Sinopalnikov, Amélie Héliou, Ruba Haroun, Matt Hoffman, Bobak Shahriari, and in particular Olivier Pietquin for motivating discussions.
We would like to express our gratitude to Ivo Danihelka, David Silver, Guillaume Desjardins, Tor Lattimore, and Csaba Szepesv ari for their feedback on this work. Finally we would like to thank the anonymous reviewers who helped us improve the quality of this final version. Abernethy, J., Lai, K. A., Levy, K. Y., and Wang, J.-K. Faster rates for convex-concave games. In Proceedings of the Annual Conference on Learning Theory, 2018. Akrour, R., Schoenauer, M., and Sebag, M. Preferencebased policy learning. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Man e, D. Concrete problems in AI safety. ar Xiv, 2016. Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., Chu, E., Clark, J. H., Shafey, L. E., Huang, Y., Meier Hellstern, K., Mishra, G., Moreira, E., Omernick, M., Robinson, K., Ruder, S., Tay, Y., Xiao, K., Xu, Y., Zhang, Y., Abrego, G. H., Ahn, J., Austin, J., Barham, P., Botha, J., Bradbury, J., Brahma, S., Brooks, K., Catasta, M., Cheng, Y., Cherry, C., Choquette-Choo, C. A., Chowdhery, A., Crepy, C., Dave, S., Dehghani, M., Dev, S., Devlin, J., D ıaz, M., Du, N., Dyer, E., Feinberg, V., Feng, F., Fienber, V., Freitag, M., Garcia, X., Gehrmann, S., Gonzalez, L., Gur-Ari, G., Hand, S., Hashemi, H., Hou, L., Howland, J., Hu, A., Hui, J., Hurwitz, J., Isard, M., Ittycheriah, A., Jagielski, M., Jia, W., Kenealy, K., Krikun, M., Kudugunta, S., Lan, C., Lee, K., Lee, B., Li, E., Li, M., Li, W., Li, Y., Li, J., Lim, H., Lin, H., Liu, Z., Liu, F., Maggioni, M., Mahendru, A., Maynez, J., Misra, V., Moussalem, M., Nado, Z., Nham, J., Ni, E., Nystrom, A., Parrish, A., Pellat, M., Polacek, M., Polozov, A., Pope, R., Qiao, S., Reif, E., Richter, B., Riley, P., Ros, A. C., Roy, A., Saeta, B., Samuel, R., Shelby, R., Slone, A., Smilkov, D., So, D. R., Sohn, D., Tokumine, S., Valter, D., Vasudevan, V., Vodrahalli, K., Wang, X., Wang, P., Wang, Z., Wang, T., Wieting, J., Wu, Y., Xu, K., Xu, Y., Xue, L., Yin, P., Yu, J., Zhang, Q., Zheng, S., Zheng, C., Zhou, W., Zhou, D., Petrov, S., and Wu, Y. Pa LM 2 technical report, 2023. Azar, M. G., Rowland, M., Piot, B., Guo, D., Calandriello, D., Valko, M., and Munos, R. A general theoretical paradigm to understand learning from human preferences. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023. Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., Das Sarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T., Clark, J., Mc Candlish, S., Olah, C., Mann, B., and Kaplan, J. Training a helpful and harmless assistant with reinforcement learning from human feedback. ar Xiv, 2022. Bertrand, Q., Czarnecki, W. M., and Gidel, G. On the limitations of the Elo: Real-world games are transitive, not additive. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023. Bradley, R. A. and Terry, M. E. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324 345, 1952. Brown, G. W. Iterative solution of games by fictitious play. Act. Anal. Prod Allocation, 13(1):374, 1951. Bubeck, S. Convex optimization: Algorithms and complexity. 
Foundations and Trends in Machine Learning, 8(3-4): 231 357, 2015. Busa-Fekete, R., Sz orenyi, B., Weng, P., Cheng, W., and H ullermeier, E. Preference-based evolutionary direct policy search. In Autonomous Learning Workshop at the IEEE International Conference on Robotics and Automation, 2013. Busa-Fekete, R., Sz or enyi, B., Weng, P., Cheng, W., and H ullermeier, E. Preference-based reinforcement learning: Evolutionary direct policy search using a preferencebased racing algorithm. Machine Learning, 97(3):327 351, 2014. Busbridge, D., Ramapuram, J., Ablin, P., Likhomanenko, T., Dhekane, E. G., Suau, X., and Webb, R. How to scale your EMA. In Advances in Neural Information Processing Systems, 2023. Nash Learning from Human Feedback Calandriello, D., Guo, D., Munos, R., Rowland, M., Tang, Y., Pires, B. A., Richemond, P. H., Lan, C. L., Valko, M., Liu, T., Joshi, R., Zheng, Z., and Piot, B. Human alignment of large language models through online preference optimisation. ar Xiv, 2024. Cesa-Biachi, N. and Lugosi, G. Predition, Learning, and Games. Cambridge University Press, 2006. Chen, X., Zhong, H., Yang, Z., Wang, Z., and Wang, L. Human-in-the-loop: Provably efficient preference-based reinforcement learning with general function approximation. In Proceedings of the International Conference on Machine Learning, 2022. Cheng, W., F urnkranz, J., H ullermeier, E., and Park, S.-H. Preference-based policy iteration: Leveraging preference learning for reinforcement learning. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011. Christiano, P., Leike, J., Brown, T. B., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, 2017. Clopper, C. J. and Pearson, E. S. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404 413, 1934. Csiszar, I. and Korner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, Inc., 1982. Daskalakis, C. and Panageas, I. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In Proceedings of the Conference on Innovations in Theoretical Computer Science, 2019. Daskalakis, C., Deckelbaum, A., and Kim, A. Near-optimal no-regret algorithms for zero-sum games. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 2011. Dong, H., Xiong, W., Goyal, D., Zhang, Y., Chow, W., Pan, R., Diao, S., Zhang, J., SHUM, K., and Zhang, T. RAFT: Reward r Anked Fine Tuning for generative foundation model alignment. Transactions on Machine Learning Research, 2023. Efroni, Y., Merlis, N., and Mannor, S. Reinforcement learning with trajectory feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021. Elo, A. E. The Rating of Chessplayers, Past and Present. Arco Pub., 1978. Farina, G., Kroer, C., and Sandholm, T. Optimistic regret minimization for extensive-form games via dilated distance-generating functions. In Advances in Neural Information Processing Systems, 2019. Fudenberg, D. and Levine, D. K. The Theory of Learning in Games. MIT Press, 1998. Gardner, M. The paradox of the nontransitive dice. Scientific American, (223):110 111, 1970. Geist, M., Scherrer, B., and Pietquin, O. A theory of regularized Markov decision processes. In Proceedings of the International Conference on Machine Learning, 2019. Gidel, G., Jebara, T., and Lacoste-Julien, S. Frank-Wolfe algorithms for saddle point problems. 
In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2016. Glaese, A., Mc Aleese, N., Trebacz, M., Aslanides, J., Firoiu, V., Ewalds, T., Rauh, M., Weidinger, L., Chadwick, M., Thacker, P., Campbell-Gillingham, L., Uesato, J., Huang, P.-S., Comanescu, R., Yang, F., See, A., Dathathri, S., Greig, R., Chen, C., Fritz, D., Elias, J. S., Green, R., Mokr a, S., Fernando, N., Wu, B., Foley, R., Young, S., Gabriel, I., Isaac, W., Mellor, J., Hassabis, D., Kavukcuoglu, K., Hendricks, L. A., and Irving, G. Improving alignment of dialogue agents via targeted human judgements. ar Xiv, 2022. Grill, J.-B., Strub, F., Altch e, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. A., Guo, Z. D., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. Bootstrap your own latent: A new approach to self-supervised learning. In Advances in Neural Information Processing Systems, 2020. Gulcehre, C., Paine, T. L., Srinivasan, S., Konyushkova, K., Weerts, L., Sharma, A., Siddhant, A., Ahern, A., Wang, M., Gu, C., Macherey, W., Doucet, A., Firat, O., and de Freitas, N. Reinforced self-training (Re ST) for language modeling, 2023. Heinrich, J. and Silver, D. Deep reinforcement learning from self-play in imperfect-information games. ar Xiv, 2016. Heinrich, J., Lanctot, M., and Silver, D. Fictitious selfplay in extensive-form games. In Proceedings of the International Conference on Machine Learning, 2015. Hennes, D., Morrill, D., Omidshafiei, S., Munos, R., Perolat, J., Lanctot, M., Gruslys, A., Lespiau, J. B., Parmas, P., Du e nez-Guzm an, E., and Tuyls, K. Neural replicator dynamics: Multiagent learning via hedging policy gradients. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, 2020. Nash Learning from Human Feedback Hoda, S., Gilpin, A., Pe na, J., and Sandholm, T. Smoothing techniques for computing Nash equilibria of sequential games. Mathematics of Operations Research, 35(2):494 512, 2010. Hofbauer, J. and Sorin, S. Best response dynamics for continuous zero-sum games. Discrete and Continuous Dynamical Systems Series B, 6(1):215, 2006. Jacob, A. P., Farina, G., and Andreas, J. Regularized conventions: Equilibrium computation as a model of pragmatic reasoning. ar Xiv, 2023a. Jacob, A. P., Shen, Y., Farina, G., and Andreas, J. The consensus game: Language model generation via equilibrium search, 2023b. Jaques, N., Ghandeharioun, A., Shen, J. H., Ferguson, C., Lapedriza, A., Jones, N., Gu, S., and Picard, R. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. ar Xiv, 2019. Kangarshahi, E. A., Hsieh, Y.-P., Sahin, M. F., and Cevher, V. Let s be honest: An optimal no-regret framework for zero-sum games. In Proceedings of the International Conference on Machine Learning, 2018. Klimenko, A. Y. Intransitivity in theory and in the real world. Entropy, 17(6):4364 4412, 2015. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon, 12:747 756, 1976. Lattimore, T. and Szepesv ari, C. Bandit Algorithms. Cambridge University Press, 2020. Lee, H., Phatale, S., Mansoor, H., Lu, K., Mesnard, T., Bishop, C., Carbune, V., and Rastogi, A. RLAIF: Scaling reinforcement learning from human feedback with AI feedback. ar Xiv, 2023. Menick, J., Trebacz, M., Mikulik, V., Aslanides, J., Song, F., Chadwick, M., Glaese, M., Young, S., Campbell Gillingham, L., Irving, G., and Mc Aleese, N. 
Teaching language models to support answers with verified quotes. ar Xiv, 2022. Mertikopoulos, P., Papadimitriou, C., and Piliouras, G. Cycles in adversarial regularized learning. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 2018. Mertikopoulos, P., Lecouat, B., Zenati, H., Foo, C., Chandrasekhar, V., and Piliouras, G. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. In Proceedings of the International Conference on Learning Representations, 2019. Mokhtari, A., Ozdaglar, A., and Pattathil, S. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2020. Moskovitz, T., Singh, A. K., Strouse, D., Sandholm, T., Salakhutdinov, R., Dragan, A. D., and Mc Aleer, S. Confronting reward model overoptimization with constrained RLHF. ar Xiv, 2023. Munos, R., Perolat, J., Lespiau, J.-B., Rowland, M., De Vylder, B., Lanctot, M., Timbers, F., Hennes, D., Omidshafiei, S., Gruslys, A., Azar, M. G., Lockhart, E., and Tuyls, K. Fast computation of Nash equilibria in imperfect information games. In Proceedings of the International Conference on Machine Learning, 2020. Nakano, R., Hilton, J., Balaji, S., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., Jiang, X., Cobbe, K., Eloundou, T., Krueger, G., Button, K., Knight, M., Chess, B., and Schulman, J. Web GPT: Browser-assisted question-answering with human feedback. ar Xiv, 2021. Nemirovski, A. and Yudin, D. Problem complexity and method efficiency in optimization. Wiley-Interscience Series in Discrete Mathematics, 1983. Nesterov, Y. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on Optimization, 16(1): 235 249, 2005. Novoseller, E., Wei, Y., Sui, Y., Yue, Y., and Burdick, J. Dueling posterior sampling for preference-based reinforcement learning. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2020. Open AI. Introducing Chat GPT, 2022. URL https:// openai.com/blog/chatgpt. Open AI. GPT-4 technical report. ar Xiv, 2023. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. ar Xiv, 2022. Pacchiano, A., Saha, A., and Lee, J. Dueling RL: Reinforcement learning with trajectory preferences. ar Xiv, 2023. Perolat, J., Munos, R., Lespiau, J.-B., Omidshafiei, S., Rowland, M., Ortega, P., Burch, N., Anthony, T., Balduzzi, D., De Vylder, B., Piliouras, G., Lanctot, M., and Tuyls, K. From Poincar e recurrence to convergence in imperfect Nash Learning from Human Feedback information games: Finding equilibrium via regularization. In Proceedings of the International Conference on Machine Learning, 2021. Perolat, J., Vylder, B. D., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., Connor, J. T., Burch, N., Anthony, T., Mc Aleer, S., Elie, R., Cen, S. H., Wang, Z., Gruslys, A., Malysheva, A., Khan, M., Ozair, S., Timbers, F., Pohlen, T., Eccles, T., Rowland, M., Lanctot, M., Lespiau, J.-B., Piot, B., Omidshafiei, S., Lockhart, E., Sifre, L., Beauguerlange, N., Munos, R., Silver, D., Singh, S., Hassabis, D., and Tuyls, K. Mastering the game of Stratego with model-free multiagent reinforcement learning. 
Science, 378(6623):990 996, 2022. Polyak, B. T. New stochastic approximation type procedures. Automat. i Telemekh, 7(98-107):2, 1990. Polyak, B. T. and Juditsky, A. B. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838 855, 1992. Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C. D., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems, 2023. Rakhlin, S. and Sridharan, K. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, 2013. Rame, A., Couairon, G., Shukor, M., Dancette, C., Gaya, J.-B., Soulier, L., and Cord, M. Rewarded soups: Towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. In Advances in Neural Information Processing Systems, 2023. Roberts, A., Chung, H. W., Levskaya, A., Mishra, G., Bradbury, J., Andor, D., Narang, S., Lester, B., Gaffney, C., Mohiuddin, A., Hawthorne, C., Lewkowycz, A., Salcianu, A., van Zee, M., Austin, J., Goodman, S., Soares, L. B., Hu, H., Tsvyashchenko, S., Chowdhery, A., Bastings, J., Bulian, J., Garcia, X., Ni, J., Chen, A., Kenealy, K., Clark, J. H., Lee, S., Garrette, D., Lee-Thorp, J., Raffel, C., Shazeer, N., Ritter, M., Bosma, M., Passos, A., Maitin Shepard, J., Fiedel, N., Omernick, M., Saeta, B., Sepassi, R., Spiridonov, A., Newlan, J., and Gesmundo, A. Scaling up models and data with t5x and seqio. Journal of Machine Learning Research, 24(377):1 8, 2023. Robinson, J. An iterative method of solving a game. Annals of Mathematics, 54(2):296 301, 1951. Roit, P., Ferret, J., Shani, L., Aharoni, R., Cideron, G., Dadashi, R., Geist, M., Girgin, S., Hussenot, L., Keller, O., Momchev, N., Ramos, S., Stanczyk, P., Vieillard, N., Bachem, O., Elidan, G., Hassidim, A., Pietquin, O., and Szpektor, I. Factually consistent summarization via reinforcement learning with textual entailment feedback. In Proceedings of the Annual Meeting of the Associating for Computational Linguistics, 2023. Rosen, J. B. Existence and uniqueness of equilibrium points for concave n-person games. Econometrica: Journal of the Econometric Society, pp. 520 534, 1965. Ruppert, D. Efficient estimations from a slowly convergent Robbins Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. ar Xiv, 2017. Shani, L., Rosenberg, A., Cassel, A., Lang, O., Calandriello, D., Zipori, A., Noga, H., Keller, O., Piot, B., Szpektor, I., Hassidim, A., Matias, Y., and Munos, R. Multi-turn reinforcement learning from preference human feedback. ar Xiv, 2024. Sion, M. On general minimax theorems. Pacific Journal of mathematics, 8(1):171 176, 1958. Sokota, S., D Orazio, R., Kolter, J. Z., Loizou, N., Lanctot, M., Mitliagkas, I., Brown, N., and Kroer, C. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. In Proceedings of the International Conference on Learning Representations, 2023. Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, 2020. Syrgkanis, V., Agarwal, A., Luo, H., and Schapire, R. E. Fast convergence of regularized learning in games. 
In Advances in Neural Information Processing Systems, 2015.
Tang, Y., Guo, Z. D., Zheng, Z., Calandriello, D., Munos, R., Rowland, M., Richemond, P. H., Valko, M., Avila Pires, B., and Piot, B. Generalized preference optimization: A unified approach to offline alignment, 2024.
Tversky, A. Intransitivity of preferences. Psychological Review, 76(1):31–48, 1969.
Völske, M., Potthast, M., Syed, S., and Stein, B. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization. Association for Computational Linguistics, 2017.
von Neumann, J. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295–320, 1928.
Wang, Y., Liu, Q., and Jin, C. Is RLHF more difficult than standard RL? In Advances in Neural Information Processing Systems, 2023.
Wilson, A., Fern, A., and Tadepalli, P. A Bayesian approach for policy learning from trajectory preference queries. In Advances in Neural Information Processing Systems, 2012.
Wirth, C., Akrour, R., Neumann, G., and Fürnkranz, J. A survey of preference-based reinforcement learning methods. Journal of Machine Learning Research, 18(136):1–46, 2017.
Wortsman, M., Ilharco, G., Gadre, S. Y., Roelofs, R., Gontijo-Lopes, R., Morcos, A. S., Namkoong, H., Farhadi, A., Carmon, Y., Kornblith, S., and Schmidt, L. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In Proceedings of the International Conference on Machine Learning, 2022.
Zhao, Y., Joshi, R., Liu, T., Khalman, M., Saleh, M., and Liu, P. J. SLiC-HF: Sequence likelihood calibration with human feedback. In Proceedings of the International Conference on Learning Representations, 2023.
Zinkevich, M., Johanson, M., Bowling, M., and Piccione, C. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems, 2007.

A. Maximizing expected Elo vs maximizing probability of winning

Consider the following preference model, where the set of actions is Y = {y1, y2, y3} and the preference table between these actions is

P(y ≻ y′)    y = y1    y = y2    y = y3
y′ = y1       1/2       9/10      2/3
y′ = y2       1/10      1/2       2/11
y′ = y3       1/3       9/11      1/2

This preference model can be perfectly captured by a Bradley-Terry reward model in which the Elo scores of the actions would be (up to an additive constant): R(y1) = 0, R(y2) = log 9, and R(y3) = log 2. If we optimize over the simplex Δ(Y), then the policy selecting deterministically y2 is optimal both in terms of rewards and in terms of preference against any policy. However, if we consider a constrained optimization problem where we search for a policy in a subset S ⊂ Δ(Y), then the optima of the expected reward and of the preference may be different. To illustrate, let S be the set of probability distributions π ∈ Δ(Y) such that π(y1) = 2π(y2). In that case, the policy π_R := (2/3, 1/3, 0) is optimal in terms of maximizing expected rewards, whereas the policy π_P := (0, 0, 1) is optimal in terms of maximizing preference against any alternative policy in S. In particular we have

E_{y∼π_R}[R(y)] = 0 · 2/3 + log(9) · 1/3 > log(2) = E_{y∼π_P}[R(y)],

whereas policy π_P is preferred over π_R, since

P(π_P ≻ π_R) = P(y3 ≻ y1) · 2/3 + P(y3 ≻ y2) · 1/3 = 50/99 > 1/2.

Thus, if one searches for a policy in S, then the optima in terms of maximizing expected (Elo) reward and maximizing preference (probability of winning) are different. Note that the constraint π ∈ S may be imposed in a soft way using regularization.
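As a quick numerical check of this example (a sketch using numpy; the matrix below just encodes the preference table above, and the policy names follow the definitions above), the following script verifies the Bradley-Terry scores, the two expected rewards, and the preference P(π_P ≻ π_R) = 50/99. The soft-constraint construction is described next.

```python
import numpy as np

# Preference matrix M[i, j] = P(y_{i+1} > y_{j+1}), read off the table above.
M = np.array([
    [1/2,  1/10, 1/3 ],   # P(y1 > .)
    [9/10, 1/2,  9/11],   # P(y2 > .)
    [2/3,  2/11, 1/2 ],   # P(y3 > .)
])

# Bradley-Terry / Elo scores reproducing M exactly: P(y > y') = sigmoid(R(y) - R(y')).
R = np.array([0.0, np.log(9.0), np.log(2.0)])
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
assert np.allclose(sigmoid(R[:, None] - R[None, :]), M)

# Policies constrained to S = {pi : pi(y1) = 2 pi(y2)}.
pi_R = np.array([2/3, 1/3, 0.0])   # maximizes expected reward within S
pi_P = np.array([0.0, 0.0, 1.0])   # maximizes preference within S

print("E_{pi_R}[R] =", pi_R @ R)             # log(9)/3 ~ 0.732
print("E_{pi_P}[R] =", pi_P @ R)             # log(2)   ~ 0.693 (smaller)
print("P(pi_P beats pi_R) =", pi_P @ M @ pi_R)  # 50/99 ~ 0.505 (> 1/2)
```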
Here, for example, we could implement a two-step decision process where in a first step one would choose the probability mass assigned to y3, and in the second step one would choose how to allocate the remaining mass between y1 and y2. The second step may be constrained in a soft way by penalizing distributions (over y1 and y2) that differ from a reference distribution µ = (2/3, 1/3), using a KL-regularization with a large τ coefficient. In this way the set of effective policies under consideration would be close to S. This example illustrates the fact that in constrained (or regularized) optimization settings, maximizing Elo and maximizing preference are different objectives, even in a setting where preferences can be perfectly expressed by a Bradley-Terry model.

B. Sensitivity of reward models w.r.t. the sampling distribution

Proposition 2. For a given preference model P(y ≻ y′) and a distribution π, let us define the best Bradley-Terry reward model:

r^π := argmax_r E_{y,y′∼π(·), Z∼ν} [ log σ(r(y^Z_w) − r(y^Z_l)) ],   (9)

where y^Z_w and y^Z_l are respectively the preferred (and less preferred) response (among y and y′ sampled from π(·)) according to a randomly sampled human Z ∼ ν. Define the corresponding Bradley-Terry preference model P^π_BT(y ≻ y′) := σ(r^π(y) − r^π(y′)). Then we have that for any y in the support of π, P(y ≻ π) = P^π_BT(y ≻ π).

Proof. First notice that, from the definition of y^Z_w and y^Z_l, we have

r^π = argmax_r E_{y,y′∼π} [ P(y ≻ y′) log σ(r(y) − r(y′)) ].

Write L(r) for the objective being maximized, so that r^π = argmax_r L(r), and

∇L(r) = E_{y,y′∼π} [ P(y ≻ y′) ∇σ(r(y) − r(y′)) / σ(r(y) − r(y′)) ]
      = E_{y,y′∼π} [ P(y ≻ y′) σ(r(y′) − r(y)) (∇r(y) − ∇r(y′)) ].

Thus, looking at the z-th coordinate of ∇L(r^π):

∂_{r(z)} L(r^π) = π(z) E_{y∼π} [ P(z ≻ y) σ(r^π(y) − r^π(z)) − (1 − P(z ≻ y))(1 − σ(r^π(y) − r^π(z))) ]
               = π(z) E_{y∼π} [ P(z ≻ y) + P^π_BT(y ≻ z) − 1 ]
               = π(z) [ P(z ≻ π) − P^π_BT(z ≻ π) ].

Setting the derivative ∂_{r(z)} L(r^π) = 0, we deduce that for z in the support of π we have P(z ≻ π) = P^π_BT(z ≻ π).

Theorem 2 (The optimal BT-reward model depends on the sampling distribution). If a preference model P cannot be perfectly captured by a Bradley-Terry reward model, in the sense that the preference model P^π_BT(y ≻ y′) := σ(r^π(y) − r^π(y′)) corresponding to the best Bradley-Terry reward model r^π, solution to Equation (9) for some policy π, is not identical to P, then the reward model r^π depends explicitly on the sampling distribution π. More precisely, if there exist y, y′ and π such that P^π_BT(y ≻ y′) ≠ P(y ≻ y′), then there exists another policy π′ ≠ π (with the same support as π) such that r^π(y) − r^π(y′) ≠ r^{π′}(y) − r^{π′}(y′). Thus we also have that P^π_BT(y ≻ y′) ≠ P^{π′}_BT(y ≻ y′).

This result shows that the reward model r^π (and thus the corresponding Bradley-Terry preference model P^π_BT) depends on the sampling distribution π (which explains our use of π as a superscript).

Proof. Assume there exist y, y′ and π such that P^π_BT(y ≻ y′) ≠ P(y ≻ y′). Let us define π′ (with the same support as π) as follows: π′(z) = (1/2)π(z) for z ≠ y′, and π′(y′) = c·π(y′) for some constant c > 1 (defined such that π′ is a proper probability distribution). We deduce that

Q(y ≻ π′) = (1/2) Q(y ≻ π) + (c − 1/2) π(y′) Q(y ≻ y′),

for any preference model Q. Applying this equality both with Q = P and Q = P^π_BT, and since P^π_BT(y ≻ y′) ≠ P(y ≻ y′) and P^π_BT(y ≻ π) = P(y ≻ π) (from Proposition 2), we deduce that P^π_BT(y ≻ π′) ≠ P(y ≻ π′). Applying Proposition 2 again, we have that P(y ≻ π′) = P^{π′}_BT(y ≻ π′), thus P^π_BT(y ≻ π′) ≠ P^{π′}_BT(y ≻ π′).
We deduce that

Σ_z π′(z) [ σ(r^π(y) − r^π(z)) − σ(r^{π′}(y) − r^{π′}(z)) ] ≠ 0,

thus there exists (at least one) z such that r^π(y) − r^π(z) ≠ r^{π′}(y) − r^{π′}(z). This concludes the proof that the two BT-reward models r^π and r^{π′} are different, as are the corresponding BT-preference models P^π_BT and P^{π′}_BT.

C. Preference models may be non-transitive

C.1. Example of a non-transitive preference model

Notice that in general a preference model may not be transitive. Here is a simple illustration of a non-transitive preference model where we have 3 policies π1, π2 and π3 such that P(π1 ≻ π2) > 1/2, P(π2 ≻ π3) > 1/2 and P(π3 ≻ π1) > 1/2. We consider the set of outcomes to be the subset of integers Y = {1, 2, . . . , 9} and the 3 policies defined by π1 = U({2, 4, 9}), π2 = U({1, 6, 8}), and π3 = U({3, 5, 7}), where U(S) refers to a uniform distribution over the set S. The preference is defined as P(π ≻ π′) = E_{y∼π, y′∼π′}[I{y ≥ y′}]. Then we have P(π1 ≻ π2) = P(π2 ≻ π3) = P(π3 ≻ π1) = 5/9. This mirrors the classical example of non-transitive dice (Gardner, 1970).

C.2. Non-transitive aggregation of individual transitive preferences

Here we show that even if each human has transitive individual preferences, the resulting average preference model may not be transitive. Let us consider a specific case of a preference model defined as the probability (under some random outcome Z) that f(x, y, Z) ≥ f(x, y′, Z), where f is a (deterministic) absolute scoring function:

P(y ≻ y′|x) = E_{Z∼ν} [ I{f(x, y, Z) ≥ f(x, y′, Z)} ],

where we define the function I{u ≥ v} := (sign(u − v) + 1)/2, which behaves as an indicator of the event u > v and assigns a value of 1/2 in the case where u = v. For example, this could represent the probability that a randomly selected human Z ∼ ν prefers choice y over choice y′ in a context x. Consider the following example, where there are 3 possible responses y1, y2, y3 and 3 possible humans z1, z2, z3 chosen uniformly at random: ν = U({z1, z2, z3}). Define the scoring function f as follows (dropping the context x):

f(y1, z1) = 2, f(y1, z2) = 4, f(y1, z3) = 9,
f(y2, z1) = 1, f(y2, z2) = 6, f(y2, z3) = 8,
f(y3, z1) = 3, f(y3, z2) = 5, f(y3, z3) = 7.

Notice that this defines a transitive preference model for each individual human z ∈ {z1, z2, z3}. However, when aggregated, the preference model satisfies P(y1 ≻ y2) = P(y2 ≻ y3) = P(y3 ≻ y1) = 2/3. This example thus illustrates that even if, for each individual, preferences are totally ordered, when averaged over humans the resulting preference model may be non-transitive.

D. Proof of Theorem 1

We start with a first lemma.

Lemma 1. For any π, and 0 ≤ η_t τ ≤ 1, we have

KL(π, π^µ_t) ≤ η_t τ KL(π, µ) + (1 − η_t τ) KL(π, π_t) − η_t τ KL(π^µ_t, µ).

Proof. From the definition of π^µ_t, we have

log π^µ_t(y) = (1 − η_t τ) log π_t(y) + η_t τ log µ(y) − log Z,

where we define Z = Σ_{y′} (π_t(y′))^{1−η_t τ} (µ(y′))^{η_t τ}. Thus, for any π, we have

KL(π, π^µ_t) = η_t τ KL(π, µ) + (1 − η_t τ) KL(π, π_t) + log Z.

We have that

η_t τ KL(π^µ_t, µ) = η_t τ Σ_y π^µ_t(y) log [ (π_t(y))^{1−η_t τ} (µ(y))^{η_t τ} / (Z µ(y)) ]
  = (1 − η_t τ) Σ_y π^µ_t(y) log [ (π_t(y))^{η_t τ} (µ(y))^{−η_t τ} ] − η_t τ log Z
  ≤ (1 − η_t τ) log Σ_y π^µ_t(y) (π_t(y))^{η_t τ} (µ(y))^{−η_t τ} − η_t τ log Z
  = (1 − η_t τ) log Σ_y [ (π_t(y))^{1−η_t τ} (µ(y))^{η_t τ} / Z ] (π_t(y))^{η_t τ} (µ(y))^{−η_t τ} − η_t τ log Z
  = (1 − η_t τ) log(1/Z) − η_t τ log Z
  = − log Z,

where we used Jensen's inequality applied with the concave logarithmic function. We deduce

KL(π, π^µ_t) ≤ η_t τ KL(π, µ) + (1 − η_t τ) KL(π, π_t) − η_t τ KL(π^µ_t, µ).
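The inequality of Lemma 1 can be checked numerically on random distributions. The snippet below is only a sanity-check sketch (dimensions, τ and step sizes are arbitrary): it forms the geometric mixture π^µ_t and verifies the bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

n, tau = 6, 0.5
for _ in range(1000):
    # Random full-support distributions pi, pi_t, mu and a step size with 0 <= eta*tau <= 1.
    pi, pi_t, mu = (rng.dirichlet(np.ones(n)) for _ in range(3))
    eta = rng.uniform(0.0, 1.0 / tau)

    # Geometric mixture pi_t^mu proportional to pi_t^(1 - eta*tau) * mu^(eta*tau).
    w = pi_t ** (1.0 - eta * tau) * mu ** (eta * tau)
    pi_mu_t = w / w.sum()

    lhs = kl(pi, pi_mu_t)
    rhs = (eta * tau * kl(pi, mu) + (1.0 - eta * tau) * kl(pi, pi_t)
           - eta * tau * kl(pi_mu_t, mu))
    assert lhs <= rhs + 1e-10
print("Lemma 1 inequality verified on 1000 random instances.")
```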
Now we use Lemma 7 of (Munos et al., 2020), restated below with our notation.

Lemma 2. Let p ≥ 1 and q ≥ 1 be such that 1/p + 1/q = 1. Let φ be a strongly convex function with respect to the ℓ_p-norm ‖·‖_p with some modulus σ, i.e., for any π, π′,

φ(π) ≥ φ(π′) + ∇φ(π′)·(π − π′) + (σ/2) ‖π − π′‖²_p.

Write D_φ for the associated Bregman divergence: for π, π′,

D_φ(π, π′) := φ(π) − φ(π′) − ∇φ(π′)·(π − π′).

Let δ be a vector of dimension |Y|. For any π′ ∈ Δ(Y), define π⁺ as

π⁺ = argmax_{π ∈ Δ(Y)} [ Σ_y π(y) δ(y) − D_φ(π, π′) ].

Then for any π ∈ Δ(Y), we have

D_φ(π, π⁺) ≤ D_φ(π, π′) + Σ_y (π′(y) − π(y)) δ(y) + (2/σ) ‖δ‖²_q.

We apply this lemma with π⁺ = π_{t+1} and π′ = π^µ_t, with the vector δ(y) = η_t P(y ≻ π^µ_t), and as Bregman divergence D_φ we choose the KL (which corresponds to the choice of the entropy regularizer φ(π) = Σ_y π(y) log π(y)). For p = 1, q = ∞, the regularizer φ is a strongly convex function with respect to the ℓ_1-norm with modulus σ = 1; this is a consequence of Pinsker's inequality, see (Csiszar & Korner, 1982). We deduce that for any π,

KL(π, π_{t+1}) ≤ KL(π, π^µ_t) + η_t Σ_y (π^µ_t(y) − π(y)) P(y ≻ π^µ_t) + 2η²_t.   (11)

For the choice π = π*_τ, and using the previous lemma, we have

KL(π*_τ, π_{t+1}) ≤ KL(π*_τ, π^µ_t) + η_t Σ_y (π^µ_t(y) − π*_τ(y)) P(y ≻ π^µ_t) + 2η²_t
  ≤ (1 − η_t τ) KL(π*_τ, π_t) + η_t τ (KL(π*_τ, µ) − KL(π^µ_t, µ)) + η_t (P(π^µ_t ≻ π^µ_t) − P(π*_τ ≻ π^µ_t)) + 2η²_t
  = (1 − η_t τ) KL(π*_τ, π_t) + η_t [ 1/2 − P(π*_τ ≻ π^µ_t) + τ KL(π*_τ, µ) − τ KL(π^µ_t, µ) ] + 2η²_t
  = (1 − η_t τ) KL(π*_τ, π_t) + η_t [ 1/2 − P_τ(π*_τ ≻ π^µ_t) ] + 2η²_t
  ≤ (1 − η_t τ) KL(π*_τ, π_t) + 2η²_t,

where the second equality comes from the definition of the regularized preference and the last inequality comes from the fact that π*_τ is the Nash equilibrium of the regularized game P_τ: P_τ(π*_τ ≻ π^µ_t) ≥ P_τ(π*_τ ≻ π*_τ) = 1/2. This inequality with η_t = 2/(τ(t + 2)) applied to t = 0 gives KL(π*_τ, π_1) ≤ 2/τ². Then, by induction, assuming KL(π*_τ, π_t) ≤ 8/(τ²(t + 1)),

KL(π*_τ, π_{t+1}) ≤ (1 − 2/(t + 2)) · 8/(τ²(t + 1)) + 8/(τ²(t + 2)²)
  ≤ (1 − 2/(t + 2) + 1/(t + 2)) · 8/(τ²(t + 1))
  = 8/(τ²(t + 2)).
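To see the guarantee of Theorem 1 in action, here is a minimal tabular sketch assuming the Nash-MD iteration takes exactly the form analysed above: form the geometric mixture π^µ_t, then take a KL mirror-descent step against it with δ(y) = η_t P(y ≻ π^µ_t) and the step size η_t = 2/(τ(t + 2)) used in the proof. The preference table, τ and the problem size are arbitrary; the "regularized exploitability" printed at the end should be small if the last iterate is close to the regularized Nash equilibrium.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 5, 0.5

# Random preference table with P(y > y') + P(y' > y) = 1.
A = rng.uniform(size=(n, n))
P = np.where(np.eye(n, dtype=bool), 0.5, A / (A + A.T))

mu = rng.dirichlet(np.ones(n))          # reference policy
pi = np.ones(n) / n                     # pi_0

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def reg_pref(p, q):                     # P_tau(p > q)
    return p @ P @ q - tau * kl(p, mu) + tau * kl(q, mu)

for t in range(5000):
    eta = 2.0 / (tau * (t + 2))         # step size from Theorem 1
    # Geometric mixture pi_t^mu, then a mirror-descent step against it.
    pi_mu = softmax((1 - eta * tau) * np.log(pi) + eta * tau * np.log(mu))
    pi = softmax(np.log(pi_mu) + eta * (P @ pi_mu))

# Soft best response to pi in the regularized game is proportional to
# mu(y) * exp(P(y > pi) / tau); at the regularized Nash it gains nothing.
br = softmax(np.log(mu) + (P @ pi) / tau)
print("regularized exploitability:", reg_pref(br, pi) - 0.5)  # close to 0
```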
E. Proof of Proposition 1

The mappings π ↦ P(π ≻ π′) and π′ ↦ P(π ≻ π′) are linear in π (respectively in π′), thus π ↦ P_τ(π ≻ π′) is concave and π′ ↦ P_τ(π ≻ π′) is convex. Existence of a Nash equilibrium is derived from the minimax theorem for convex-concave functions (Sion, 1958). The uniqueness of the Nash equilibrium essentially relies on the strict convexity/concavity of these mappings. We now give a proof of existence and uniqueness using variational inequalities. We first note that since P_τ(π′ ≻ π) = 1 − P_τ(π ≻ π′), we can re-express the minimax game of Eq. 2 as an antisymmetric two-player game in which the payoffs of policies π and π′ are defined as R(π; π′) = P(π ≻ π′) − τ KL_ρ(π, µ) and R(π′; π) = P(π′ ≻ π) − τ KL_ρ(π′, µ), respectively. First we notice that since the payoff of this game is concave in π and π′, it possesses a Nash equilibrium (Rosen, 1965, Theorem 1). To show that this game has a unique Nash equilibrium, we need to show that its corresponding variational inequality is strictly monotone (Rosen, 1965, Theorem 2). Let π̄ = [π, π′] and v(π̄) = [∇_π R(π; π′), ∇_{π′} R(π′; π)]. Then every Nash equilibrium π̄* of the game should satisfy the following variational inequality for all π̄:

v^T(π̄*)(π̄* − π̄) ≥ 0.

Furthermore, the variational inequality is strictly monotone if and only if for every π̄_1 and π̄_2 we have that

(v(π̄_1) − v(π̄_2))^T (π̄_1 − π̄_2) ≤ 0,   (12)

with equality only at π̄_1 = π̄_2 (Rosen, 1965, Theorem 2). We can show this inequality holds by expanding the terms on the LHS. For every context x, let v(π̄)(x) denote the block of partial derivatives of v(π̄) corresponding to x. We have:

v(π̄)(x) = ρ(x) [ P(y ≻ π′|x) − τ log(π/µ|x) − 1, P(y ≻ π|x) − τ log(π′/µ|x) − 1 ],

where P(y ≻ π′|x) = [P(y_i ≻ π′|x)]_{i=1:N} and log(π/µ|x) = [log(π(y_i|x)/µ(y_i|x))]_{i=1:N}, in which N is the size of the generation set. Plugging this into the LHS of Eq. 12 and then exploiting the non-negativity of the KL-divergence implies:

(v(π̄_1) − v(π̄_2))^T (π̄_1 − π̄_2)
  = [ P(π_1 ≻ π′_1) + P(π′_1 ≻ π_1) + P(π_2 ≻ π′_2) + P(π′_2 ≻ π_2) ]   (= 2)
   − [ P(π_2 ≻ π′_1) + P(π′_1 ≻ π_2) + P(π_1 ≻ π′_2) + P(π′_2 ≻ π_1) ]   (= 2)
   − τ ( KL_ρ(π_1‖π_2) + KL_ρ(π_2‖π_1) + KL_ρ(π′_1‖π′_2) + KL_ρ(π′_2‖π′_1) )
  = −τ ( KL_ρ(π_1‖π_2) + KL_ρ(π_2‖π_1) + KL_ρ(π′_1‖π′_2) + KL_ρ(π′_2‖π′_1) )
  ≤ 0,

with equality only at π̄_1 = π̄_2.

F. Deep Learning Implementation of NLHF

Now, building upon the insights from Nash-MD, we explore potential gradient-based algorithms for deep-learning architectures designed for the computation of the Nash equilibrium of a preference model, with a specific focus on their applicability in the context of LLMs.

F.1. Generating one token at a time, instead of a full sequence

In LLMs it is usually the case that tokens are generated one at a time in an autoregressive manner. Thus the response y ∼ π(·|x) can be written as y = y_{0:N} (where y_{0:N} := (y_0, . . . , y_N)), where each token y_n is generated from a distribution π(·|x, y_{0:n−1}) conditioned on previous tokens, such that π(y_{0:N}|x) = Π_{n=0}^{N} π(y_n|x, y_{0:n−1}). In practice (see the experiments section for results on LLMs) we implement this token-per-token autoregressive generation of responses y ∼ π(·|x) using next-token distributions (implemented as a softmax over logits).

Now consider a parametric policy π_θ. Nash-MD requires the generation of alternative responses y′ ∼ π^β_θ(·|x) sampled from the regularized policy π^β_θ(y|x) ∝ (π_θ(y|x))^{1−β} (µ(y|x))^β, which is defined, as in Equation (3), as a geometric mixture between the current policy π_θ and the reference policy µ. However, it is not easy to generate a sequence y from this distribution by sampling one token y_n at a time. In particular, since π^β_θ is not a simple (arithmetic) mixture, we cannot select one policy π_θ or µ according to some prior probability (that would depend on the mixing parameter β) and then generate a sequence of tokens following that policy. Additionally, defining the normalization constant c as in Equation (5) for the full mixture π^β_θ is computationally prohibitive given the large number of possible sequences; instead, we would like to proceed by generating one token at a time.

The approach we follow in our experiments consists in generating a token y_n from the marginal (geometric) mixture π^β_θ(·|x, y_{0:n−1}) defined by

log π^β_θ(y_n|x, y_{0:n−1}) = (1 − β) log π_θ(y_n|x, y_{0:n−1}) + β log µ(y_n|x, y_{0:n−1}) + C(x, y_{0:n−1}),

where the normalization constant C depends on x, y_{0:n−1}. In order to sample from this marginal geometric mixture over the n-th token, we evaluate the corresponding logits of both the current policy π_θ and the reference policy µ (conditioned on (x, y_{0:n−1})), we compute their (β-weighted arithmetic) mixture, and we sample a next token y_n from the corresponding softmax distribution. We call the corresponding product of marginal (geometric) mixtures over individual tokens the one-step-at-a-time regularized policy

π̃^β_θ(y|x) := Π_{n=0}^{N} π^β_θ(y_n|x, y_{0:n−1}).

Notice that the one-step-at-a-time regularized policy π̃^β_θ(y|x) is different from the original regularized policy π^β_θ(y|x), because the sequence of normalization constants C(x, y_{0:n−1}) depends on the specific sample path y_{0:n−1} and does not necessarily correspond to the full normalization constant c defined in Equation (5). We leave the analysis of the difference between these two policies for future work.
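The token-level sampling scheme just described amounts to mixing the two models' next-token logits before the softmax. The sketch below is generic (random logits stand in for the two language models; it is not the actual T5X training code) and illustrates a single decoding step.

```python
import numpy as np

def sample_next_token(logits_pi, logits_mu, beta, rng):
    """One decoding step of the one-step-at-a-time regularized policy.

    logits_pi, logits_mu: next-token logits of the current policy pi_theta and of the
    reference policy mu, both conditioned on (x, y_{0:n-1}); shape [vocab_size].
    The geometric mixture over tokens corresponds to an arithmetic mixture of logits.
    """
    mixed = (1.0 - beta) * logits_pi + beta * logits_mu
    mixed = mixed - mixed.max()                   # numerical stability
    probs = np.exp(mixed) / np.exp(mixed).sum()   # softmax = normalized geometric mixture
    return rng.choice(len(probs), p=probs)

# Toy usage with random logits standing in for the two models.
rng = np.random.default_rng(0)
vocab = 8
token = sample_next_token(rng.normal(size=vocab), rng.normal(size=vocab), beta=0.25, rng=rng)
print("sampled token id:", token)
```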
F.2. Computing the Nash equilibrium using regularized policy gradient

Our general algorithm for computing the Nash equilibrium of the preference model consists in repeating the following steps:
- We randomly select a prompt x ∼ ρ.
- We generate two responses y and y′ (in an autoregressive fashion in the case of LLMs): the first one, y ∼ π_θ(·|x), by following the current policy π_θ that is being optimized; the second one, y′ ∼ π′(·|x), by following an alternative policy π′. The choice of the alternative policy π′ used for the second generated sample y′ depends on the specific algorithm we consider (described in the next subsection).
- We update the parameter θ of the policy π_θ in the direction of the gradient ∇_θ P_τ(π_θ ≻ π′) of the regularized preference model P_τ.

We consider two cases, depending on whether a preference model is learnt or not.

P-model-based approach. If we have learnt a preference model P (see Section G.1 for an example of how one can learn a preference model), we query it to get the preference reward P(y ≻ y′|x) and update θ by moving it in the direction of the policy gradient estimate

ĝ(x, y, y′) := ∇_θ log π_θ(y|x) [ P(y ≻ y′|x) − 1/2 − τ log(π_θ(y|x)/µ(y|x)) ].   (13)

Notice that we have subtracted the baseline 1/2 = P(y ≻ y|x) from the preference P(y ≻ y′|x) (which does not change the expectation of the gradient) as a variance-reduction technique that does not require learning a value function as a baseline. In practice, when the response y comprises a sequence of tokens y_{0:N}, a sample-based estimator of the KL based on the sampled response y can be used. Further, this can be decomposed into a sum across token indices of per-token KL estimators, and the standard policy-gradient variance-reduction trick of only multiplying ∇_θ log π_θ(y_n|x, y_{0:n−1}) by the KL estimator terms corresponding to indices at least as great as n can be applied.

P-model-free approach. In the case where the preference P(y ≻ y′|x) comes directly from human preferences,

P(y ≻ y′|x) = P_{Z∼ν}(human Z prefers y over y′ given x),

where ν is a distribution over humans, and if humans are immediately available to express their preference between any two responses, we can directly estimate the gradient by replacing P(y ≻ y′|x) with I{human Z prefers y over y′ given x} in Equation (13). This estimate does not require learning a preference model first and is thus not affected by possible bias coming from an approximate model. Implementation-wise, it requires having access to human preferences immediately after having generated the responses y and y′.

In both model-based and model-free approaches, we have that

∇_θ P_τ(π_θ ≻ π′) = E_{x∼ρ, y∼π_θ(·|x), y′∼π̄′(·|x)} [ ĝ(x, y, y′) ],   (14)

where π̄′ denotes a stop-gradient on π′, in the case where π′ depends on θ.
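To make the update concrete, here is a toy tabular sketch of the estimator in Equation (13) for the P-model-based case: a random preference table stands in for the learned preference model, the policy is a simple softmax over a small set of responses so that ∇_θ log π_θ(y) has a closed form, and the alternative response is drawn from π_θ itself (the self-play choice discussed in the next subsection). This is an illustration of the estimator, not the training code used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 6, 0.1

# Stand-ins for the learned preference model and the reference policy mu.
A = rng.uniform(size=(n, n))
P = np.where(np.eye(n, dtype=bool), 0.5, A / (A + A.T))   # P[y, y'] = P(y > y')
mu = rng.dirichlet(np.ones(n))

theta = np.zeros(n)                                       # softmax policy pi_theta

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pg_estimate(theta, y, y_alt):
    """Single-sample estimate g(x, y, y') of Equation (13):
    grad log pi_theta(y) * (P(y > y') - 1/2 - tau * log(pi_theta(y)/mu(y)))."""
    pi = softmax(theta)
    grad_logp = -pi.copy()
    grad_logp[y] += 1.0                                    # gradient of log softmax(theta)[y]
    coeff = P[y, y_alt] - 0.5 - tau * np.log(pi[y] / mu[y])
    return coeff * grad_logp

# One stochastic gradient-ascent step (alternative policy = current policy).
pi = softmax(theta)
y, y_alt = rng.choice(n, p=pi), rng.choice(n, p=pi)
theta = theta + 0.1 * pg_estimate(theta, y, y_alt)
```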
F.3. Choice of the alternative policy π′

Now, for the choice of the alternative policy π′ used to generate the second sample y′, we consider two different algorithms, Nash-MD-PG and Nash-EMA-PG, inspired respectively by the mirror-ascent algorithm Nash-MD introduced in the previous section, and by a generalization of fictitious play where we consider an exponential moving average.

Nash-MD-PG. We define the alternative policy π′ = π^β_θ as a geometric mixture between π_θ and µ, in a similar way as the regularized policy is defined in Equation (3):

log π^β_θ(y|x) := (1 − β) log π_θ(y|x) + β log µ(y|x) + c(x),   (15)

where β ∈ [0, 1] is the parameter of the mixture, and c(x) is a constant independent of y. This is inspired by the Nash-MD algorithm described in Section 6, which we have proven to be convergent in Theorem 1. In the case of sequential generation of tokens in LLMs, we apply the one-step-at-a-time version π̃^β_θ of this regularized policy π^β_θ, as defined in Subsection F.1.

However, the corresponding PG version outlined in Subsection F.2 differs from Nash-MD as defined in Section 6 in a number of ways. In addition to using a parametric representation of policies instead of a tabular one, it differs in that it does not directly implement a mirror descent algorithm but rather a simple gradient step on the regularized preference model. In a sense, this algorithm only makes a gradient step for the inner optimization problem of Equation (4), whereas a more faithful variant of Nash-MD would use a two-timescale algorithm and perform several gradient steps (while keeping π_θ and π^β_θ fixed) until the inner loop has reached an optimum, before updating π_θ and π^β_θ. Another apparent difference is that Nash-MD uses a KL-regularization w.r.t. the mixture policy π^β_θ, whereas Nash-MD-PG uses a KL w.r.t. the reference policy µ. However, we have that

KL(π_θ, π^β_θ) = (1 − β) KL(π_θ, π_θ) + β KL(π_θ, µ) − E_{x∼ρ}[c(x)] = β KL(π_θ, µ) − E_{x∼ρ}[c(x)],

where c(x) is the normalizing constant in Equation (15). Thus we have ∇_θ KL(π_θ, π^β_θ) = β ∇_θ KL(π_θ, µ), and since we perform a single step of gradient descent before updating π_θ, regularizing with respect to the mixture π^β_θ (in Nash-MD) is equivalent to regularizing w.r.t. µ (in Nash-MD-PG). Further, we use an additional parameter β (to define the mixture) that can be tuned independently of τ. Thus, while it is possible to implement Nash-MD more faithfully, such as by incorporating two-timescale policy gradient versions or exploring variants of regularized policy gradient methods such as PPO (Schulman et al., 2017) or NeuRD (Hennes et al., 2020), we contend that the essence of Nash-MD is encapsulated in Nash-MD-PG for the following reason: the policy gradient algorithm of Equation (14) improves the current policy π_θ by playing against the geometric mixture π^β_θ while preserving regularization with respect to π^β_θ.

Extreme cases for β ∈ [0, 1]. Consider the alternative policy π^β_θ of Nash-MD-PG when β ∈ [0, 1] takes its extreme possible values: β = 0 or 1. When β = 0, then π^{β=0}_θ = π_θ, thus the alternative policy is the current policy, and this algorithm is simply a version of self-play (SP) where one improves the policy by playing against oneself. We do not expect this algorithm (even in its tabular form) to enjoy last-iterate convergence to the Nash equilibrium; see the discussion around the OMD algorithm in Equation (8). Now, when β = 1, the alternative policy is π^{β=1}_θ = µ, thus we are improving the current policy against the (fixed) reference policy µ (i.e., optimizing π ↦ P_τ(π ≻ µ)), so this is a version of best-response (BR) against µ. This will generally not converge to the Nash equilibrium either, because there is no reason that this BR cannot be exploited.
Nash-EMA-PG. As an alternative to Nash-MD-PG, we consider as alternative policy π′ another mixture policy, π′ := π_{θ̄_t}, where θ̄_t is an exponential moving average (EMA) of the past values of the parameters (θ_s)_{s≤t}, defined (recursively) by θ̄_t = (1 − β) θ_t + β θ̄_{t−1}, with θ̄_0 = θ_0. Thus when β = 0, then π_{θ̄_t} = π_{θ_t} and the algorithm is just self-play, and when β = 1, then π_{θ̄_t} = π_{θ_0} and the algorithm is a best response against the fixed initial policy π_{θ_0}. Now, for any other β ∈ (0, 1), the policy uses as parameters a mixture of past parameters. Because of the non-linearity of the policy representation, there is no guarantee that this policy is the mixture of the corresponding past policies. However, prior work in deep learning (Grill et al., 2020; Wortsman et al., 2022; Busbridge et al., 2023; Rame et al., 2023) suggests that it could be a reasonable first-order approximation to it.

G. Experiments

We now report experiments on a summarisation task and compare several algorithms for NLHF (self-play, best-response against µ, Nash-MD-PG and Nash-EMA-PG) as well as an RLHF baseline.

G.1. Preference models versus reward models

In this section, we compare parametric preference models P_θ and reward models r_θ. Preference models assign a score P_θ(y ≻ y′|x) ∈ [0, 1] that can be interpreted as the probability of generation y being preferred to generation y′ given the context x. The preference model P_θ(y ≻ y′|x) is initialised by using an LLM prompted in the following way: "You are an expert summary rater. Given a piece of text and two of its possible summaries, output 1 or 2 to indicate which summary is better. Text - <text>, Summary 1 - <summary1>, Summary 2 - <summary2>. Preferred Summary -", where <text> corresponds to x, <summary1> to y, and <summary2> to y′. We then use the last logit for an arbitrarily chosen token and pass it through a sigmoid function to output a single number in [0, 1]. This number models the preference P_θ(y ≻ y′|x). We train the LLM to fit the underlying human preference probability P(y ≻ y′|x) by minimizing a cross-entropy loss on a dataset D = {(x_k, y^k_w, y^k_l)}_{1≤k≤K}, where y^k_w is the preferred generation, y^k_l is the less preferred generation and K is the number of examples:

L_P(θ) = −E_{(x,y_w,y_l)∼D} [ log P_θ(y_w ≻ y_l|x) ].

Reward models assign a score r_θ(x, y) ∈ R that can be interpreted as the value of a generation y given a context x. The reward r_θ(x, y) is defined by prompting the LLM in the following way: "Context - <text>, Summary - <summary>", where <text> corresponds to x and <summary> to y. We then use the last logit for an arbitrarily chosen token to output a single number. This number models the reward r_θ(x, y). Reward models are trained to fit the underlying human preference probability P(y ≻ y′|x) via a Bradley-Terry model P_BT(y ≻ y′|x) := σ(r_θ(x, y) − r_θ(x, y′)), where σ is the sigmoid function. They use the same preference dataset D and minimize the following cross-entropy loss:

L_r(θ) = −E_{(x,y_w,y_l)∼D} [ log σ(r_θ(x, y_w) − r_θ(x, y_l)) ].

In our experiments, we use the summarization dataset described in (Stiennon et al., 2020) that has been built from the TL;DR dataset (Völske et al., 2017). We train our preference and reward models on the train set DTrain, which contains 92820 examples, and evaluate them on a test set of high-confidence data DTest. To measure the quality of our models we use the expected agreement, also called accuracy, between our models and the human ratings:

A(P_θ) = E_{(x,y_w,y_l)∼D} [ 1{P_θ(y_w ≻ y_l|x) ≥ 0.5} ],
A(r_θ) = E_{(x,y_w,y_l)∼D} [ 1{σ(r_θ(x, y_w) − r_θ(x, y_l)) ≥ 0.5} ].
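The two training objectives and the accuracy metric above can be written in a few lines. The sketch below uses toy arrays in place of the actual model outputs (the last-token logit of the preference LLM and the reward-model scores); it is only meant to make the losses explicit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy batch: pref_logit[k] stands for the last-token logit of the preference LLM on
# (x_k, y_w_k, y_l_k); r_w[k], r_l[k] stand for reward-model scores of the two summaries.
rng = np.random.default_rng(0)
pref_logit = rng.normal(size=16)
r_w, r_l = rng.normal(size=16), rng.normal(size=16)

# Preference-model loss: L_P(theta) = -E[ log P_theta(y_w > y_l | x) ].
p_win = sigmoid(pref_logit)
loss_pref = -np.mean(np.log(p_win))

# Reward-model (Bradley-Terry) loss: L_r(theta) = -E[ log sigma(r(x, y_w) - r(x, y_l)) ].
loss_reward = -np.mean(np.log(sigmoid(r_w - r_l)))

# Accuracy / expected agreement with the human labels.
acc_pref = np.mean(p_win >= 0.5)
acc_reward = np.mean(sigmoid(r_w - r_l) >= 0.5)
print(loss_pref, loss_reward, acc_pref, acc_reward)
```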
Our first experiment (see Figure 1) shows the accuracy of preference models of different sizes. Our models are T5X encoder-decoder models (transformer models) that have been described in detail in (Roberts et al., 2023; Roit et al., 2023). We use different sizes: T5X-small (110M), T5X-XL (3B) and T5X-XXL (11B). We see, on the test set, that the bigger the model, the better the accuracy. However, the gains from 3B to 11B are relatively small on this specific summarization task. In the remainder, we therefore run our experiments on T5X-XL models only.

Figure 1. Learning curves showing the accuracy of preference models of different sizes on the train set (left) and on the test set (right).

Our second experiment compares the accuracy of a T5X-XL reward model with the accuracy of a T5X-XL preference model. We observe that the preference model has a slightly better accuracy than the reward model on the test set (peak accuracy of around 0.78 for the preference model vs 0.76 for the reward model).

Figure 2. Learning curves showing the accuracy of a preference model versus the accuracy of a reward model of the same size on the train set (left) and on the test set (right).

G.2. Supervised fine-tuned (SFT) initial policy

In all our experiments, we initialize our policy with a T5X-L model and fine-tune it by supervised learning using the OpenAI dataset described in (Stiennon et al., 2020), which was built from the TL;DR dataset (Völske et al., 2017). We call this supervised fine-tuned model the SFT. In all our experiments, our policies are initialized with this SFT. For all our policy models, we opted for a T5X-L model, as opposed to T5X-XL, for computational efficiency and to be able to compute the pairwise comparisons across our policies. The primary objective of these experiments is to provide a proof of concept for the NLHF approach introduced in this paper, rather than striving for state-of-the-art performance in text summarization. Therefore, our aim is to conduct a fair and equitable comparison among the various approaches.

G.3. RLHF baseline

We established an RLHF baseline by initializing our model with the SFT and then updating the policy with 10000 steps of a regularized policy gradient update:

E_{x∼ρ, y∼π_θ(·|x)} [ ∇_θ log π_θ(y|x) ( R(x, y) − τ KL(π_θ(·|x), µ(·|x)) ) ],   (16)

where the reward R(x, y) comes from the trained T5X-XL reward model described in Subsection G.1. We conducted a sweep across the set of values {0.01, 0.02, 0.05, 0.1, 0.2} for the parameter τ of the KL-regularization. The value τ = 0.05 has been selected for the pairwise comparison table below.

G.4. NLHF algorithms Nash-MD and Nash-EMA

We initialize our policy with the SFT and update the model by executing the Nash-MD-PG and Nash-EMA-PG algorithms as outlined in Section F. The preference model P used in these algorithms is derived from the trained T5X-XL model, as described in Subsection G.1. We conducted a sweep over the values τ ∈ {0.02, 0.01, 0.008, 0.005} and selected τ = 0.008 for all Nash-MD and Nash-EMA experiments in the pairwise comparison table below. For Nash-MD-PG we conducted a sweep over the mixing coefficient β ∈ {0, 0.125, 0.250, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0} (used in the definition of the alternative policy in Section F.3), and for Nash-EMA-PG we swept over β ∈ {0, 0.999, 0.9995, 0.9999, 1.0}.

G.5. Pairwise preference between all the models

Here is the list of all the models we considered for pairwise preference comparison.

SFT: the supervised fine-tuned model, described in Subsection G.2. All models are initialised with this SFT, and this SFT is also the policy µ we use for the KL-regularization.

RLHF: described in Subsection G.3, with regularization coefficient τ = 0.05.

SP (self-play).
This corresponds to Nash-MD-PG with mixture coefficient β = 0 (or, equivalently, Nash-EMA-PG with β = 0, as both algorithms coincide for β = 0), described in Subsection G.4. The policy improves by playing against itself (the alternative policy π′ = π_θ is the current policy).

MD1 to MD6: Nash-MD-PG with β ∈ {0.125, 0.25, 0.375, 0.5, 0.625, 0.75}.

BR: best-response against SFT. This corresponds to Nash-MD-PG with β = 1 (or, equivalently, Nash-EMA-PG with β = 1). The policy improves by playing against the fixed SFT policy.

EMA1 and EMA2: the last iterate of Nash-EMA-PG (i.e., the last policy), with β ∈ {0.999, 0.9995}.

EMA1* and EMA2*: the EMA policy of Nash-EMA-PG (i.e., the policy with the averaged weights), with β ∈ {0.999, 0.9995}.

All models are trained for 10000 steps. The Nash-MD models (as well as SP and BR) and the Nash-EMA models are trained with a regularization coefficient of τ = 0.008. The pairwise preference comparisons under P_τ are given in Table 2; these figures are estimated based on 1,000 pairwise comparisons, and hence an upper bound on the width of a 95% confidence interval for each is 0.032, based on the exact Clopper-Pearson method for Bernoulli proportions (Clopper & Pearson, 1934) (a short numerical check of this bound is sketched after Table 2). Note that the Clopper-Pearson method can be used to deduce a per-element confidence interval, which may be considerably narrower in cases where the empirically observed preference rate is close to 0 or 1. We will analyse these results after the next section, where we describe an evaluation of our models based on a preference model built from a much larger LLM.

Table 2. The regularized preference P_τ(π_c ≻ π_r) of the column policy π_c against the row policy π_r.

P_τ     SFT    RLHF   SP     MD1    MD2    MD3    MD4    MD5    MD6    BR     EMA1   EMA2   EMA1*  EMA2*
SFT     0.500  0.975  0.981  0.986  0.983  0.982  0.979  0.970  0.967  0.933  0.965  0.970  0.971  0.975
RLHF    0.025  0.500  0.741  0.769  0.752  0.744  0.661  0.450  0.340  0.167  0.640  0.531  0.617  0.671
SP      0.019  0.259  0.500  0.547  0.506  0.509  0.406  0.244  0.185  0.082  0.418  0.338  0.363  0.450
MD1     0.014  0.231  0.453  0.500  0.471  0.469  0.354  0.224  0.165  0.079  0.372  0.308  0.348  0.409
MD2     0.017  0.248  0.494  0.529  0.500  0.492  0.393  0.231  0.182  0.084  0.426  0.315  0.375  0.454
MD3     0.018  0.256  0.491  0.531  0.508  0.500  0.380  0.230  0.153  0.087  0.411  0.328  0.349  0.457
MD4     0.021  0.339  0.594  0.646  0.607  0.620  0.500  0.306  0.224  0.088  0.508  0.416  0.458  0.531
MD5     0.030  0.550  0.756  0.776  0.769  0.770  0.694  0.500  0.380  0.169  0.682  0.554  0.627  0.697
MD6     0.033  0.660  0.815  0.835  0.818  0.847  0.776  0.620  0.500  0.269  0.735  0.644  0.706  0.777
BR      0.067  0.833  0.918  0.921  0.916  0.913  0.912  0.831  0.731  0.500  0.856  0.789  0.830  0.875
EMA1    0.035  0.360  0.582  0.628  0.574  0.589  0.492  0.318  0.265  0.144  0.500  0.407  0.448  0.507
EMA2    0.030  0.469  0.662  0.692  0.685  0.672  0.584  0.446  0.356  0.211  0.593  0.500  0.540  0.627
EMA1*   0.029  0.383  0.637  0.652  0.625  0.651  0.542  0.373  0.294  0.170  0.552  0.460  0.500  0.589
EMA2*   0.025  0.329  0.550  0.591  0.546  0.543  0.469  0.303  0.223  0.125  0.493  0.373  0.411  0.500
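The confidence-interval figures quoted above can be reproduced with the exact Clopper-Pearson interval for a Binomial proportion. The snippet below (using scipy, and interpreting the quoted width as the ± half-width of the interval around the empirical rate, which is an assumption on our part) gives values close to the 0.032 and 0.023 bounds for 1,000 and 2,000 comparisons respectively.

```python
from scipy.stats import beta

def clopper_pearson_halfwidth(k, n, alpha=0.05):
    """Half-width of the exact Clopper-Pearson interval around the empirical rate k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return max(k / n - lo, hi - k / n)

for n in (1000, 2000):
    worst = max(clopper_pearson_halfwidth(k, n) for k in range(n + 1))
    print(n, round(worst, 3))   # close to the 0.032 and 0.023 bounds quoted above
```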
G.6. Evaluation using the PaLM 2 preference model

While the ideal approach for evaluating our models would involve soliciting human preferences between summaries generated by different models, we resort to a proxy method using the highly capable LLM PaLM 2 Large (Anil et al., 2023). We query this model to obtain a preference signal, which we refer to as the PaLM 2 preference model P*(y ≻ y′|x), obtained by prompting the LLM in the following manner: "You are an expert summary rater. Given a piece of text and two of its possible summaries, output 1 or 2 to indicate which summary is better. Text - <text>, Summary 1 - <summary1>, Summary 2 - <summary2>. Preferred Summary -", where <text> corresponds to x, <summary1> to y, and <summary2> to y′. This evaluation approach shares similarities with the method employed by Lee et al. (2023). To obtain an assessment of the preference P*(π ≻ π′), we compute the ratio between the total number of tokens "1" generated and the total number of tokens "1" or "2" generated, across 2000 samples drawn from the distribution (x ∼ ρ, y ∼ π(·|x), y′ ∼ π′(·|x)). This P* serves as an approximate surrogate for human preferences. Notably, it is essential to highlight that the preference model P utilized during the training of our policies is considerably smaller in size than P* and corresponds to a different model. Specifically, P is based on the T5X-XL model, fine-tuned on TL;DR data, whereas P* is derived from the PaLM 2 Large model.

The pairwise preference comparisons under P*, using the PaLM 2 Large model, are given in Table 3. As each element is estimated from 2000 samples, an upper bound on the width of the 95% confidence interval is 0.023, based on the exact Clopper-Pearson method for Bernoulli proportions (Clopper & Pearson, 1934).

Table 3. The PaLM 2 preference P*(π_c ≻ π_r) of the column policy π_c against the row policy π_r.

P*      SFT    RLHF   SP     MD1    MD2    MD3    MD4    MD5    MD6    BR     EMA1   EMA2   EMA1*  EMA2*
SFT     0.500  0.990  0.983  0.982  0.989  0.987  0.985  0.982  0.965  0.943  0.970  0.961  0.977  0.980
RLHF    0.010  0.500  0.489  0.598  0.519  0.561  0.501  0.436  0.284  0.148  0.468  0.320  0.477  0.510
SP      0.017  0.511  0.500  0.592  0.504  0.545  0.499  0.451  0.310  0.211  0.445  0.362  0.464  0.488
MD1     0.018  0.402  0.408  0.500  0.425  0.470  0.369  0.362  0.238  0.163  0.391  0.270  0.400  0.447
MD2     0.011  0.481  0.496  0.575  0.500  0.513  0.491  0.434  0.298  0.196  0.460  0.351  0.430  0.496
MD3     0.013  0.439  0.455  0.530  0.487  0.500  0.484  0.408  0.273  0.187  0.429  0.323  0.413  0.472
MD4     0.015  0.499  0.501  0.631  0.509  0.516  0.500  0.428  0.265  0.161  0.468  0.358  0.437  0.503
MD5     0.018  0.564  0.549  0.638  0.566  0.592  0.572  0.500  0.329  0.210  0.532  0.389  0.518  0.539
MD6     0.035  0.716  0.690  0.762  0.702  0.727  0.735  0.671  0.500  0.342  0.652  0.548  0.651  0.691
BR      0.057  0.852  0.789  0.837  0.804  0.813  0.839  0.790  0.658  0.500  0.743  0.640  0.752  0.774
EMA1    0.030  0.532  0.555  0.609  0.540  0.571  0.532  0.468  0.348  0.257  0.500  0.381  0.480  0.556
EMA2    0.039  0.680  0.638  0.730  0.649  0.677  0.642  0.611  0.452  0.360  0.619  0.500  0.585  0.659
EMA1*   0.023  0.523  0.536  0.600  0.570  0.587  0.563  0.482  0.349  0.248  0.520  0.415  0.500  0.555
EMA2*   0.020  0.490  0.512  0.553  0.504  0.528  0.497  0.461  0.309  0.226  0.444  0.341  0.445  0.500

G.7. Analysis of the results

First, let us mention that the RLHF baseline we have built is a very strong baseline. It beats SFT with a win rate of 99%, marking the highest win rate observed against SFT among all models when using the PaLM 2 preference model P*.

Best-response against SFT (BR) does not exhibit strong performance. Despite being trained explicitly to outperform the SFT policy, its P*-evaluation yields a relatively modest score of 94% against SFT. Furthermore, BR performs poorly against RLHF and all other Nash-based approaches. This suggests the possibility of preference hacking, where BR may be overly adapting to the preference model by overfitting to the specific SFT policy.
Self-play (SP) exhibits strong overall performance, with notable exceptions in the P* evaluation against RLHF and against the Nash-MD models with β ≤ 0.5. This suggests that enhancing one's policy through self-play could be a promising avenue for improving the initial model. However, it's essential to acknowledge that self-play does not guarantee the attainment of a Nash equilibrium, as cyclic patterns are possible, as discussed in the Theory Section. In particular, SP is found to be vulnerable to exploitation by certain Nash-MD models.

The Nash-MD models, especially those with β ≤ 0.5, exhibit very strong performance. Notably, the Nash-MD models with β = 0.125, β = 0.25, and β = 0.375 outperform all other models, including RLHF. Among them, Nash-MD with β = 0.125 (denoted MD1) emerges as the top-performing model, surpassing all others under both the training preference model P_τ and the evaluation model P*.

All Nash-EMA models, including EMA1 and EMA2 (representing the last iterate) as well as EMA1* and EMA2* (representing the average policy), are outperformed by Nash-MD (for β ≤ 0.5) and by RLHF. This observation may suggest that the first-order approximation of the mixture policy as the policy having an average (EMA) weight may not be well suited in this context, potentially contributing to the overall lower performance.

Examining Nash-MD, which emerges as the most efficient method, it is interesting to note that both extreme values of the mixing parameter β ∈ [0, 1], namely β = 0 (self-play) and β = 1 (best-response against SFT), result in suboptimal performance compared to intermediate values of β (particularly β = 0.125, β = 0.25, and β = 0.375). This trend is visible, for instance, in the rows of Tables 2 and 3 that report Nash-MD (for β ∈ {0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 1.0}) against RLHF. It suggests that improving one's policy by playing against a mixture of the initial policy and the current policy yields superior model improvement compared to interactions with either the initial policy or the current policy in isolation.