# The Sample-Communication Complexity Trade-off in Federated Q-Learning

Sudeep Salgia
Carnegie Mellon University
ssalgia@andrew.cmu.edu

Yuejie Chi
Carnegie Mellon University
yuejiechi@cmu.edu

We consider the problem of Federated Q-learning, where $M$ agents aim to collaboratively learn the optimal Q-function of an unknown infinite-horizon Markov Decision Process with finite state and action spaces. We investigate the trade-off between sample and communication complexity for the widely used class of intermittent communication algorithms. We first establish the converse result, where we show that any Federated Q-learning algorithm that offers a linear speedup with respect to the number of agents in sample complexity needs to incur a communication cost of at least $\Omega\big(\frac{1}{1-\gamma}\big)$, where $\gamma$ is the discount factor. We also propose a new Federated Q-learning algorithm, called Fed-DVR-Q, which is the first Federated Q-learning algorithm to simultaneously achieve order-optimal sample and communication complexities. Together, these results provide a complete characterization of the sample-communication complexity trade-off in Federated Q-learning.

1 Introduction

Reinforcement Learning (RL) [Sutton and Barto, 2018] refers to an online sequential decision-making paradigm in which the learning agent aims to learn an optimal policy, i.e., a policy that maximizes the long-term reward, through repeated interactions with an unknown environment. RL finds applications across a diverse array of fields including, but not limited to, autonomous driving, games, recommendation systems, robotics and the Internet of Things (IoT) [Kober et al., 2013, Yurtsever et al., 2020, Silver et al., 2016, Lim et al., 2020]. The primary hurdle in RL applications is often the high-dimensional nature of the decision space, which necessitates that the learning agent have access to an enormous amount of data in order to have any hope of learning the optimal policy. Moreover, the sequential collection of such an enormous amount of data through a single agent is extremely time-consuming and often infeasible in practice. Consequently, practical implementations of RL involve deploying multiple agents to collect data in parallel. This decentralized approach to data collection has fueled the design and development of distributed or federated RL algorithms that can collaboratively learn the optimal policy without actually transferring the collected data to a centralized server. Such a federated approach to RL, which does not require the transfer of local data, is gaining interest due to lower bandwidth requirements and lower security and privacy risks. In this work, we focus on federated variants of Q-learning algorithms, where the agents collaborate to directly learn the optimal Q-function without forming an estimate of the underlying unknown environment.

A particularly important aspect of designing Federated RL algorithms, including Federated Q-learning algorithms, is to address the natural tension between sample and communication complexity. At one end of the spectrum lies the naïve approach of running a centralized algorithm with optimal sample complexity after transferring and combining all the collected data at a central facility/server. Such an approach trivially achieves the optimal sample complexity while suffering from a very high and infeasible communication complexity.
On the other hand, several recently proposed algorithms [Khodadadian et al., 2022, Woo et al., 2023] operate in more practical regimes, offering significantly lower communication complexities than the naïve approach at the cost of sub-optimal sample complexities. These results suggest the existence of an underlying trade-off between the sample and communication complexities of Federated RL algorithms. The primary goal of this work is to better understand this trade-off in the context of Federated Q-learning by investigating the following fundamental questions:

Fundamental limit of communication: What is the minimum amount of communication required by a federated Q-learning algorithm to achieve any statistical benefit of collaboration?

Optimal algorithm design: How does one design a federated Q-learning algorithm that simultaneously offers order-optimal sample and communication complexity guarantees, i.e., operates on the optimal frontier of the sample-communication complexity trade-off?

1.1 Main Results

We consider a setup where $M$ distributed agents collaborate to learn the optimal Q-function of an infinite-horizon Markov Decision Process defined over a finite state space $\mathcal{S}$ and a finite action space $\mathcal{A}$, with a discount factor $\gamma \in (0, 1)$. We consider a common setting in federated learning called the intermittent communication setting, where the clients intermittently share information among themselves with the help of a central server. In this work, we provide a complete characterization of the trade-off between sample and communication complexity under the aforementioned setting by providing answers to both questions. The main result of this work is twofold and is summarized below.

Fundamental bounds on the communication complexity of Federated Q-learning: We establish lower bounds on the communication complexity of Federated Q-learning, both in terms of the number of communication rounds and the overall number of bits that need to be transmitted, in order to achieve any speedup in convergence with respect to the number of agents. Specifically, we show that in order for an intermittent communication algorithm to obtain any benefit of collaboration, i.e., any order of speedup w.r.t. the number of agents, the number of communication rounds must be at least $\Omega\big(\frac{1}{(1-\gamma)\log_2 N}\big)$ and the number of bits sent by each agent to the server must be at least $\Omega\big(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\log_2 N}\big)$, where $N$ denotes the number of samples taken by the algorithm for each state-action pair.

Achieving the optimal sample-communication complexity trade-off: We propose a new Federated Q-learning algorithm called Federated Doubly Variance Reduced Q-learning, Fed-DVR-Q for short, that simultaneously achieves an optimal order of sample complexity and the minimal order of communication dictated by the lower bound. We show that Fed-DVR-Q learns an $\varepsilon$-optimal Q-function in the $\ell_\infty$ sense with $O\big(\frac{|\mathcal{S}||\mathcal{A}|}{M\varepsilon^2(1-\gamma)^3}\big)$ i.i.d. samples from the generative model at each agent, while incurring a total communication cost of $O\big(\frac{|\mathcal{S}||\mathcal{A}|}{1-\gamma}\big)$ bits per agent across $O\big(\frac{1}{1-\gamma}\big)$ rounds of communication. Thus, Fed-DVR-Q not only improves upon both the sample and communication complexities of existing algorithms, but is also the first algorithm to achieve both order-optimal sample and communication complexities (see Table 1 for a comparison).

1.2 Related Work

Single-agent Q-learning.
Q-learning has been extensively studied in the single-agent setting, in terms of both its asymptotic convergence [Jaakkola et al., 1993, Tsitsiklis, 1994, Szepesvári, 1997, Borkar and Meyn, 2000] and its finite-time sample complexity in both the synchronous [Even-Dar and Mansour, 2004, Beck and Srikant, 2012, Wainwright, 2019a, Chen et al., 2020, Li et al., 2023] and asynchronous settings [Chen et al., 2021b, Li et al., 2023, 2021, Qu and Wierman, 2020].

Distributed RL. There has also been considerable effort towards developing distributed and federated RL algorithms. Distributed variants of the classical TD learning algorithm have been investigated in a series of studies [Chen et al., 2021c, Doan et al., 2019, 2021, Sun et al., 2020, Wai, 2020, Wang et al., 2020, Zeng et al., 2021b]. The impact of environmental heterogeneity in federated TD learning was studied in Wang et al. [2023].

Table 1: Comparison of the sample and communication complexity of various single-agent and Federated Q-learning algorithms for learning an $\varepsilon$-optimal Q-function under the synchronous setting. We hide logarithmic factors and burn-in costs for all results for simplicity of presentation. Here $\mathcal{S}$ and $\mathcal{A}$ represent the state and action spaces, respectively, and $\gamma$ denotes the discount factor. We report the communication complexity only in terms of the number of rounds, as other algorithms assume transmission of real numbers and hence do not report bit-level costs. For the lower bound, Azar et al. [2013] and this work establish the bound for sample and communication complexity, respectively.

- Q-learning [Li et al., 2023]: 1 agent; sample complexity $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$; communication complexity N/A.
- Variance Reduced Q-learning [Wainwright, 2019b]: 1 agent; sample complexity $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3\varepsilon^2}$; communication complexity N/A.
- Fed-SynQ [Woo et al., 2023]: $M$ agents; sample complexity $\frac{|\mathcal{S}||\mathcal{A}|}{M(1-\gamma)^5\varepsilon^2}$; communication complexity $\frac{1}{1-\gamma}$ rounds.
- Fed-DVR-Q (this work): $M$ agents; sample complexity $\frac{|\mathcal{S}||\mathcal{A}|}{M(1-\gamma)^3\varepsilon^2}$; communication complexity $\frac{1}{1-\gamma}$ rounds.
- Lower bound ([Azar et al., 2013], this work): $M$ agents; sample complexity $\frac{|\mathcal{S}||\mathcal{A}|}{M(1-\gamma)^3\varepsilon^2}$; communication complexity $\frac{1}{(1-\gamma)\log_2 N}$ rounds.

A distributed version of actor-critic algorithms was studied by Shen et al. [2023], where the authors established convergence of their algorithm and demonstrated a linear speedup in the number of agents in their sample complexity bound. Chen et al. [2022] proposed a new distributed actor-critic algorithm that improves the dependence of the sample complexity on the error $\varepsilon$ and incurs a communication cost of $O(\varepsilon^{-1})$. Chen et al. [2021a] proposed a communication-efficient distributed policy gradient algorithm, analyzed its convergence, and established a communication complexity of $O(1/(M\varepsilon))$. Xie and Song [2023] adopt a distributed policy optimization perspective, which is different from the Q-learning paradigm considered in this work; moreover, the algorithm in Xie and Song [2023] incurs a linear communication cost, which is worse than that obtained in our work. Similarly, Zhang et al. [2024] focuses on on-policy learning and incurs a communication cost that depends polynomially on the required error $\varepsilon$. Several other studies [Yang et al., 2023, Zeng et al., 2021a, Lan et al., 2024] have also developed and analyzed distributed/federated variants of the classical natural policy gradient method [Kakade, 2001]. Assran et al. [2019], Espeholt et al. [2018], Mnih et al. [2016] have developed distributed algorithms to train deep RL networks more efficiently.

Distributed Q-learning. Federated Q-learning has been explored relatively recently. Khodadadian et al.
[2022] proposed and analyzed a federated Q-learning algorithm in the asynchronous setting with a sample complexity of O |S|2 Mµ5 min(1 γ)9ε2 , where µmin is the minimum entry of the stationary state-action occupancy distribution of the sample trajectories over all agents. Jin et al. [2022] study the impact of environmental heterogeneity across clients in Federated Q-learning. They propose an algorithm where the local environments are different at each client but each client knows their local environment. Under this setting, they propose an algorithm that achieves a sample and communication complexity of O( 1 (1 γ)3ε) and O( 1 (1 γ)3ε) rounds respectively. Woo et al. [2023] proposed new algorithms with improved analysis for Federated Q-learning under both synchronous and asynchronous settings. Their proposed algorithm achieves a sample complexity and communication complexity of O( |S||A| M(1 γ)5ε2 ) and O( M|S||A| 1 γ ) real numbers respectively under the synchronous setting and that of O( 1 Mµavg(1 γ)5ε2 ) and O M|S||A| 1 γ real numbers respectively under the asynchronous setting. Here, µavg denotes the minimum entry of the average stationary state-action occupancy distribution of all agents. In a follow up work, Woo et al. [2024] propose a Federated Qlearning for offline RL in finite horizon setting and establish a sample and communication complexity of O( H7|S|Cavg Mε2 ) and O(H), where H denotes the length of the horizon and Cavg denotes the average single-policy concentrability coefficient of all agents. Accuracy-Communication Trade-off in Federated Learning. The trade-off between communication complexity and accuracy (equivalently, sample complexity) has been studied in various federated and distributed learning problems, including stochastic approximation algorithms for convex optimization. Duchi et al. [2014], Braverman et al. [2016] establish the celebrated inverse linear relationship between the error and the communication cost the problem of distributed mean estimation. Similar trade-off for distributed stochastic optimization, multi-armed bandits and linear bandits has been studied and established across numerous studies [Woodworth et al., 2018, 2021, Tsitsiklis and Luo, 1987, Shi and Shen, 2021, Salgia and Zhao, 2023]. 2 Problem Formulation and Preliminaries In this section, we provide a brief background of Markov Decision Processes, outline the performance measures for Federated Q-learning algorithms and describe the class of intermittent communication algorithms considered in this work. 2.1 Markov Decision Processes In this work, we focus on an infinite-horizon Markov Decision Process (MDP), denoted by M, over a state space S and an action space A and with a discount factor γ (0, 1). Both the state and action spaces are assumed to be finite sets. In an MDP, the state s evolves dynamically under the influence of actions based on a probability transition kernel, P : (S A) S [0, 1]. The entry P(s |s, a) denotes the probability of moving to state s when an action a is taken in the state s. An MDP is also associated with a deterministic reward function r : S A [0, 1], where r(s, a) denotes the immediate reward obtained for taking the action a in the state s. Thus, the transition kernel P along with the reward function r completely characterize an MDP. 
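To make the tabular setup concrete, the following is a minimal illustrative sketch (not part of the paper) of how an MDP with finite state and action spaces can be represented in code, together with the Bellman optimality operator and its one-sample counterpart that are introduced below; the function and variable names are assumptions made purely for illustration.

```python
import numpy as np

def bellman_optimality_operator(P, r, gamma, Q):
    """Exact operator: (T Q)(s, a) = r(s, a) + gamma * E_{s' ~ P(.|s, a)}[max_a' Q(s', a')].

    P: transition kernel of shape (S, A, S); r: reward table of shape (S, A);
    Q: current Q-table of shape (S, A).
    """
    V = Q.max(axis=1)            # V(s') = max_a' Q(s', a')
    return r + gamma * (P @ V)   # contracts the last axis of P against V -> shape (S, A)

def sampled_bellman_operator(P, r, gamma, Q, rng):
    """Empirical operator T_Z: for every (s, a), draw one next state from the
    generative model and plug in max_a' Q(s', a') in place of the expectation."""
    S, A = r.shape
    V = Q.max(axis=1)
    TQ = np.empty_like(Q)
    for s in range(S):
        for a in range(A):
            s_next = rng.choice(S, p=P[s, a])
            TQ[s, a] = r[s, a] + gamma * V[s_next]
    return TQ
```

Since the exact operator is a $\gamma$-contraction in the $\ell_\infty$ norm, repeatedly applying `bellman_optimality_operator` from any bounded initialization converges geometrically to the optimal Q-function; Q-learning replaces the exact operator with noisy applications of `sampled_bellman_operator`.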
In this work, we consider the synchronous setting, where each agent has access to an independent generative model or simulator from which they can draw independent samples from the unknown underlying distribution P( |s, a) for each state-action pair (s, a) [Kearns and Singh, 1998]. A policy π : S (A) is a rule for selecting actions across different states, where (A) denotes the simplex over A and π(a|s) denotes the probability of choosing action a in a state s. Each policy π is associated with a state value function and a state-action value function, or the Q-function, denoted by V π and Qπ respectively. V π and Qπ measure the expected discounted cumulative reward attained by π starting from a particular state s and state-action pair (s, a) respectively. Mathematically, V π and Qπ are given as V π(s) := E t=0 γtr(st, at) s0 = s ; Qπ(s, a) := E t=0 γtr(st, at) s0 = s, a0 = a where at π( |st) and st+1 P( |st, at) for all t 0. The expectation is taken w.r.t. the randomness in the trajectory {st, at} t=1. Since the rewards lie in [0, 1], it follows immediately that both the value function and Q-function lie in the range [0, 1 1 γ ]. An optimal policy π is a policy that maximizes the value function uniformly over all the states and it has been shown that such an optimal policy π always exists [Puterman, 2014]. The optimal value and Q-functions are those corresponding to that of an optimal policy π are denoted as V := V π and Q := Qπ respectively. The optimal Q-function, Q , is also the unique fixed point of the Bellman operator T : S A S A, given by (T Q)(s, a) = r(s, a) + γ Es P ( |s,a) max a A Q(s , a ) . (2) Q-learning [Watkins and Dayan, 1992] aims to learn the optimal policy by first learning Q as the solution to the fixed point equation T Q = Q and then obtain a deterministic optimal policy via the maximization π (s) = arg maxa Q (s, a). Let Z S|S||A| be a random vector whose (s, a)th coordinate is drawn from the distribution P( |s, a), independently of all other coordinates. We define the random operator TZ : (S A) (S A) as (TZQ)(s, a) = r(s, a) + γV (Z(s, a)), (3) where V (s ) = maxa A Q(s , a ). The operator TZ can be interpreted as the sample Bellman Operator, where we have the relation T Q = EZ[TZQ] for all Q-functions. Lastly, the federated learning setup considered in this work consists of M agents, where all the agents face a common, unknown MDP, i.e., the transition kernel and the reward functions are the same across agents, which is popularly known as the homogeneous setting. For a given value of ε (0, 1 1 γ ), the objective of agents is to collaboratively learn an ε-optimal estimate (in the ℓ sense) of the optimal Q-function of the unknown MDP. 2.2 Performance Measures We measure the performance of a Federated Q-learning algorithm A using two metrics sample complexity and communication complexity. For a given MDP M, let b QM(A , N, M) denote the estimate of Q M, the optimal Q-function of the MDP M, returned by an algorithm A , when given access to N i.i.d. samples from the generative model for each (s, a) pair at all the M agents. The minimax error rate of the algorithm A , denoted by ER(A ; N, M), is defined as ER(A ; N, M) := sup M=(P,r) E h b QM(A , N, M) Q M i , (4) where the expectation is taken over the samples and any randomness in the algorithm. Given a value of ε > 0, the sample complexity of A , denoted by SC(A ; ε, M) is given as SC(A ; ε, M) := |S||A| min{N N : ER(A ; N, M) ε}. 
(5)

Similarly, we can also define a high-probability version for any $\delta \in (0, 1)$ as follows:
$$\mathrm{SC}(\mathcal{A}; \varepsilon, M, \delta) := |\mathcal{S}||\mathcal{A}| \cdot \min\Big\{N \in \mathbb{N} : \Pr\Big(\sup_{\mathcal{M}} \big\|\widehat{Q}_{\mathcal{M}}(\mathcal{A}, N, M) - Q^{\star}_{\mathcal{M}}\big\|_{\infty} \leq \varepsilon\Big) \geq 1 - \delta\Big\}.$$
We measure the communication complexity of any federated learning algorithm both in terms of the frequency of information exchange and the total number of bits uploaded by the agents. For each agent $m$, let $C^m_{\mathrm{round}}(\mathcal{A}; N)$ and $C^m_{\mathrm{bit}}(\mathcal{A}; N)$ respectively denote the number of times agent $m$ sends a message to the server and the total number of bits uploaded by agent $m$ to the server when an algorithm $\mathcal{A}$ is run with $N$ i.i.d. samples from the generative model for each $(s, a)$ pair at each agent. The communication complexity of $\mathcal{A}$, when measured in terms of the frequency of communication and the total number of bits exchanged, is given by
$$\mathrm{CC}_{\mathrm{round}}(\mathcal{A}; N) := \frac{1}{M} \sum_{m=1}^{M} C^m_{\mathrm{round}}(\mathcal{A}; N); \qquad \mathrm{CC}_{\mathrm{bit}}(\mathcal{A}; N) := \frac{1}{M} \sum_{m=1}^{M} C^m_{\mathrm{bit}}(\mathcal{A}; N), \qquad (6)$$
respectively. Similarly, for a given value of $\varepsilon \in (0, \frac{1}{1-\gamma})$, we can also define $\mathrm{CC}_{\mathrm{round}}(\mathcal{A}; \varepsilon)$ and $\mathrm{CC}_{\mathrm{bit}}(\mathcal{A}; \varepsilon)$ based on when $\mathcal{A}$ is run to guarantee a minimax error of at most $\varepsilon$.

2.3 Intermittent Communication Algorithms

Algorithm 1: A generic algorithm $\mathcal{A}$
1: Input: $T$, $R$, $\{\eta_t\}_{t=1}^{T}$, $\mathcal{C} = \{t_r\}_{r=1}^{R}$, $B$
2: Set $Q^m_0 \leftarrow 0$ for all agents $m$
3: for $t = 1, 2, \ldots, T$ do
4:   for $m = 1, 2, \ldots, M$ do
5:     Compute $Q^m_{t-1/2}$ according to Eqn. (7)
6:     Compute $Q^m_t$ according to Eqn. (8)
7:   end for
8: end for
9: return $Q_T$

In this work, we consider a popular class of federated learning algorithms referred to as algorithms with intermittent communication. The intermittent communication setting provides a natural framework to extend single-agent Q-learning algorithms to the distributed setting. As the name suggests, under this setting, the agents intermittently communicate with each other, sharing their updated beliefs about $Q^{\star}$. Between two communication rounds, each agent updates its belief about $Q^{\star}$ using stochastic fixed-point iterations based on the locally available data, similar to the single-agent setup. Such intermittent communication algorithms have been extensively studied and used to establish lower bounds on the communication complexity of distributed stochastic convex optimization [Woodworth et al., 2018, 2021].

A generic Federated Q-learning algorithm with intermittent communication is outlined in Algorithm 1. It is characterized by the following five parameters: (i) the total number of updates $T$; (ii) the number of communication rounds $R$; (iii) a step size schedule $\{\eta_t\}_{t=1}^{T}$; (iv) a communication schedule $\{t_r\}_{r=1}^{R}$; and (v) the batch size $B$. During the $t$-th iteration, each agent $m$ computes $\{T_{Z_b}(Q^m_{t-1})\}_{b=1}^{B}$, a minibatch of sample Bellman operators at the current estimate $Q^m_{t-1}$, using $B$ samples from the generative model for each $(s, a)$ pair, and obtains an intermediate local estimate using the Q-learning update
$$Q^m_{t-1/2} = (1 - \eta_t) Q^m_{t-1} + \frac{\eta_t}{B} \sum_{b=1}^{B} T_{Z_b}(Q^m_{t-1}). \qquad (7)$$
Here $\eta_t \in (0, 1]$ is the step size chosen for the $t$-th time step. The intermediate estimates are averaged based on a communication schedule $\mathcal{C} = \{t_r\}_{r=1}^{R}$ consisting of $R$ rounds, i.e.,
$$Q^m_t = \begin{cases} \frac{1}{M} \sum_{j=1}^{M} Q^j_{t-1/2} & \text{if } t \in \mathcal{C}, \\ Q^m_{t-1/2} & \text{otherwise}. \end{cases} \qquad (8)$$
In the above equation, the averaging step can also be replaced with any distributed mean estimation routine that includes compression to control the bit-level costs. Without loss of generality, we assume that $Q^m_0 = 0$ for all agents $m$ and that $t_R = T$, i.e., the last iterates are always averaged. It is straightforward to note that the number of samples taken by an intermittent communication algorithm is $BT$ per state-action pair at each agent, i.e., $N = BT$, and the communication complexity is $\mathrm{CC}_{\mathrm{round}} = R$.
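For concreteness, the following is a minimal, self-contained simulation sketch of the generic intermittent-communication scheme of Algorithm 1 and the updates (7)-(8), assuming exact (uncompressed) averaging at the communication rounds; the function name, its signature, and the internal variable names are illustrative assumptions and not part of the paper.

```python
import numpy as np

def federated_q_learning(P, r, gamma, M, T, B, step_sizes, comm_schedule, rng):
    """Generic intermittent-communication federated Q-learning (cf. Algorithm 1).

    P: transition kernel of shape (S, A, S); r: reward table of shape (S, A);
    step_sizes: length-T sequence {eta_t}; comm_schedule: set of averaging rounds {t_r}.
    """
    S, A = r.shape
    Q = np.zeros((M, S, A))                     # Q^m_0 = 0 for every agent m
    for t in range(1, T + 1):
        eta = step_sizes[t - 1]
        for m in range(M):
            # Minibatch of B sampled Bellman operators at the current local iterate.
            V = Q[m].max(axis=1)
            TQ = np.zeros((S, A))
            for s in range(S):
                for a in range(A):
                    next_states = rng.choice(S, size=B, p=P[s, a])
                    TQ[s, a] = r[s, a] + gamma * V[next_states].mean()
            # Local update, Eq. (7): Q^m_{t-1/2} = (1 - eta_t) Q^m_{t-1} + (eta_t / B) sum_b T_{Z_b} Q^m_{t-1}.
            Q[m] = (1 - eta) * Q[m] + eta * TQ
        if t in comm_schedule:                  # Eq. (8): replace local iterates by their average
            Q[:] = Q.mean(axis=0)
    return Q[0]                                 # all agents agree after the final averaging round
```

With a communication schedule of $R$ rounds that includes the final step $t_R = T$, this routine uses $N = BT$ generative-model samples per $(s, a)$ pair at each agent and incurs $\mathrm{CC}_{\mathrm{round}} = R$, matching the accounting above.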
3 Lower Bound

In this section, we investigate the first of the two questions, regarding the lower bound on communication complexity. The following theorem establishes a lower bound on the communication complexity of a Federated Q-learning algorithm with intermittent communication.

Theorem 1. Assume that $\gamma \in [5/6, 1)$ and that the state and action spaces satisfy $|\mathcal{S}| \geq 4$ and $|\mathcal{A}| \geq 2$. Let $\mathcal{A}$ be a Federated Q-learning algorithm with intermittent communication that is run for $T \geq \max\{16, \frac{1}{1-\gamma}\}$ steps with a step size schedule of either $\eta_t := \frac{1}{1 + c_\eta(1-\gamma)t}$ or $\eta_t := \eta$ for all $1 \leq t \leq T$. If
$$R = \mathrm{CC}_{\mathrm{round}}(\mathcal{A}; N) \leq \frac{c_0}{(1-\gamma)\log_2 N} \quad \text{or} \quad \mathrm{CC}_{\mathrm{bit}}(\mathcal{A}; N) \leq \frac{c_1 |\mathcal{S}||\mathcal{A}|}{(1-\gamma)\log_2 N}$$
for some universal constants $c_0, c_1 > 0$, then, for all choices of the communication schedule, batch size $B$, $c_\eta > 0$ and $\eta \in (0, 1)$, the minimax error of $\mathcal{A}$ satisfies
$$\mathrm{ER}(\mathcal{A}; N, M) \geq \frac{C_\gamma}{\sqrt{N}\log^3 N}$$
for all $M \geq 2$ and $N = BT$. Here $C_\gamma > 0$ is a constant that depends only on $\gamma$.

The above theorem states that in order for an intermittent communication algorithm to obtain any benefit of collaboration, i.e., for the error rate $\mathrm{ER}(\mathcal{A}; N, M)$ to decrease with the number of agents, the number of communication rounds must be at least $\Omega\big(\frac{1}{(1-\gamma)\log_2 N}\big)$. This implies that any Federated Q-learning algorithm that offers order-optimal sample complexity, and thereby also a linear speedup with respect to the number of agents, must have at least $\Omega\big(\frac{1}{(1-\gamma)\log_2 N}\big)$ rounds of communication and transmit at least $\Omega\big(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\log_2 N}\big)$ bits of information per agent. This characterizes the converse side of the sample-communication trade-off in Federated Q-learning. We would like to point out that our lower bound extends to the asynchronous setting, as the assumption of i.i.d. noise corresponding to a generative model is a special case of the Markovian noise observed in the asynchronous setting.

The lower bound on the communication complexity of Federated Q-learning is a consequence of the bias-variance trade-off that governs the convergence of the algorithm. While a careful choice of step sizes alone is sufficient to balance this trade-off in the centralized setting, the choice of communication schedule also plays an important role in balancing it in the federated setting. The local steps between two communication rounds induce a positive estimation bias that depends on the standard deviation of the iterates, related to the well-documented over-estimation issue in Q-learning [Hasselt, 2010]. Since such a bias is driven by local updates, it does not reflect any benefit of collaboration. During a communication round, the averaging of iterates across agents gives the algorithm an opportunity to counter this bias by reducing the effective variance of the updates. In our analysis, we show that if communication is infrequent, the local bias becomes the dominant term and the averaging of iterates is insufficient to counter the impact of the positive bias induced by the local steps. As a result, we do not observe any statistical gains when communication is infrequent. The analysis is inspired by the analysis of Q-learning in Li et al. [2023] and is based on analyzing the convergence of an intermittent communication algorithm on a specifically chosen hard instance of an MDP. Please refer to Appendix B for a detailed proof.

Remark 1 (Communication complexity of policy evaluation).
Several recent studies [Liu and Olshevsky, 2023, Tian et al., 2024] established that a single round of communication is sufficient to achieve linear speedup of TD learning for policy evaluation, which do not contradict with our results focusing on Q-learning for policy learning. The latter is more involved due to the nonlinearity of the Bellman optimality operator. Specifically, if the operator whose fixed point is to be found is linear in the decision variable (e.g., the value function in TD learning) then the fixed point update only induces a variance term corresponding to the noise. However, if the operator is non-linear, then in addition to the variance term, we also obtain a bias term in the fixed point update. While the variance term can be controlled with one-shot averaging, more frequent communication is necessary to ensure that the bias term is small enough. Remark 2 (Extension to asynchronous Q-learning). We would like to point out that our lower bound extends to the asynchronous setting [Li et al., 2023] as the assumption of i.i.d. noise corresponding to a generative model is a special case of Markovian noise observed in the asynchronous setting. 4 The Fed-DVR-Q algorithm Having characterized the lower bound on the communication complexity of Federated Q-learning, we explore our second question of interest designing a federated Q-learning algorithm that achieves this lower bound while simultaneously offering an optimal order of sample complexity. We propose a new Federated Q-learning algorithm, Fed-DVR-Q, that achieves not only a communication complexity of CCround = O( 1 1 γ ) and CCbit = O( |S||A| 1 γ ) but also the optimal order of sample complexity (upto logarithmic factors), thereby providing a tight characterization of the achievability frontier that matches with the converse result derived in the previous section. 4.1 Algorithm Description Algorithm 2: Fed-DVR-Q 1: Input : Error bound ε > 0, failure probability δ > 0 2: k 1, Q(0) 0 3: // Set parameters as described in Sec. 4.1.3 4: for k = 1, 2, . . . , K do 5: Q(k) REFINEESTIMATE(Q(k 1), B, I, Lk, Dk, J) 6: k k + 1 7: end for 8: return Q(K) Fed-DVR-Q proceeds in epochs. During an epoch k 1, the agents collaboratively update Q(k 1), the estimate of Q obtained at the end of previous epoch, to a new estimate Q(k), with the aid of the sub-routine called REFINEESTIMATE. The sub-routine REFINEESTIMATE is designed to ensure that the suboptimality gap, Q(k) Q , reduces by a factor of 2 at the end of every epoch. Thus, at the end of K = O(log(1/ε)) epochs, Fed-DVR-Q obtains a ε-optimal estimate of Q , which is then set to be the output of the algorithm. Please refer to Alg. 2 for a pseudocode. 4.1.1 The REFINEESTIMATE sub-routine REFINEESTIMATE, starting from Q, an initial estimate of Q , uses variance reduced Q-learning updates to obtain an improved estimate of Q . It is characterized by four parameters the initial estimate Q, the number of local iterations I, the recentering sample size L and the batch size B, which can be appropriately tuned to control the quality of the returned estimate. Additionally, it also takes input two parameters D and J required by the compressor. The first step in REFINEESTIMATE is to collaboratively approximate T Q for the variance reduced updates. To this effect, each agent m builds an approximation of T Q as follows: e T (m) L (Q) := 1 L/M l=1 TZ(m) l (Q), (9) where {Z(m) 1 , Z(m) 2 , . . . , Z(m) L/M } are L/M i.i.d. samples with Z(m) 1 Z. 
Each agent sends C e T (m) L (Q) Q; D, J , a compressed version of the difference e T (m) L (Q) Q, to the server, which collects all the estimates from the agents and constructs the estimate e TL(Q) = Q + 1 m=1 C e T (m) L (Q) Q; D, J (10) and sends it back to the agents for the variance reduced updates. We defer the description of the compression routine to the end of this section. Equipped with the estimate e TL(Q), REFINEESTIMATE constructs a sequence {Qi}I i=1 using the following iterative update scheme initialized with Q0 = Q. During the ith iteration, each agent m carries out the following update: 2 = (1 η)Qi 1 + η h b T (m) i Qi 1 b T (m) i Q + e TL(Q) i . (11) In the above equation, η (0, 1) is the step size and b T (m) i Q := 1 z Z(m) i Tz Q, where Z(m) i is the minibatch of B i.i.d. random variables drawn according to Z, independently at each agent m for all iterations i. Each agent then sends a compressed version of the update, i.e., C Qm i 1 2 Qi 1; D, J , to the server, which uses them to compute the next iterate Qi = Qi 1 + 1 m=1 C Qm i 1 2 Qi 1; D, J , (12) and broadcast it to the clients. After I such updates, the obtained iterate QI is returned by the routine. A pseudocode of the REFINEESTIMATE routine is given in Algorithm 3 in Appendix A. 4.1.2 The Compression Operator The compressor, C ( ; D, J), used in the proposed algorithm Fed-DVR-Q is based on the popular stochastic quantization scheme. In addition to the input vector Q to be quantized, the quantizer C takes two input parameters D and J. D corresponds to an upper bound on ℓ norm of Q, i.e., Q D. J corresponds to the resolution of the compressor, i.e., number of bits used by the compressor to represent each coordinate of the output vector. The compressor first splits the interval [0, D] into 2J 1 intervals of equal length where 0 = d1 < d2, < d2J = D correspond to end points of the intervals. Each coordinate of Q is then separately quantized as follows. The value of the nth coordinate, C (Q)[n], is set to be djn 1 with probability djn Q[n] djn djn 1 and to djn with the remaining probability, where jn := min{j : dj < Q[i] dj+1}. It is straightforward to note that each coordinate of C (Q) can be represented using J bits. 4.1.3 Setting the parameters The desired convergence of the iterates {Q(k)} is obtained by carefully choosing the parameters of the sub-routine REFINEESTIMATE and the compression operator C . For all epochs k 1, we set the number of iterations I and the batch size B of REFINEESTIMATE and the number of bits J of the compressor C to be 2 η(1 γ) , 2 (1 γ))2 log( 8KI|S||A| δ ) and log2( 70 η(1 γ) 2 M log( 8KI|S||A| δ )) respectively. The total number of epochs is set to K = 1 2 log2( 1 1 γ ) + 1 2 log2( 1 (1 γ)ε2 ) . The recentering sample sizes Lk and bounds Dk are set to be the following functions of epoch index k: Lk := 19600 (1 γ)2 log 8KI|S||A| 4k if k K0 4k K0 if k > K0 ; Dk := 16 2 k where K0 = 1 2 log2( 1 1 γ ) . The piecewise definition of Lk is crucial to obtain the optimal dependence with respect to 1 1 γ , similar to the two-step procedure outlined in Wainwright [2019b]. 4.2 Performance Guarantees The following theorem characterizes the sample and communication complexity of Fed-DVR-Q. Theorem 2. Consider any δ (0, 1) and ε (0, 1]. Under the federated learning setup described in Section 2.1, the sample and communication complexities of the Fed-DVR-Q algorithm, when run with the choice of parameters described in Sec. 
4.1.3 and a learning rate η (0, 1), satisfy the following relations for some universal constant C1 > 0: SC(Fed-DVR-Q; ε, M, δ) C1 ηM(1 γ)3ε2 log2 log 8KI|S||A| CCround(Fed-DVR-Q; ε, δ) 16 η(1 γ) log2 CCbit(Fed-DVR-Q; ε, δ) 32|S|A| η(1 γ) log2 2 M log 8KI|S||A| A proof of Theorem 2 can be found in Appendix C. A few implications of the theorem are in order. Optimal Sample-Communication complexity trade-off. As shown by the above theorem, Fed DVR-Q offers a linear speed up in the sample complexity with respect to the number of agents while simultaneously achieving the same order of communication complexity as dictated by the lower bound derived in Theorem 1, both in terms of frequency and bit level complexity. Moreover, Fed-DVR-Q is the first Federated Q-Learning algorithm that achieves a sample complexity with optimal dependence on all the salient parameters, i.e., |S|, |A| and 1 1 γ , in addition to linear speedup w.r.t. to number of agents and thereby bridges the existing gap between upper and lower bounds on sample complexity for Federated Q-learning. Thus, Theorem 1 and 2 together provide a characterization of optimal operating point of the sample-communication complexity trade-off in Federated Q-learning. Role of Minibatching. The commonly adopted approach in intermittent communication algorithm is to use a local update scheme that takes multiple small (i.e., B = O(1)), noisy updates between communication rounds, as evident from the algorithm design in Khodadadian et al. [2022], Woo et al. [2023] and even numerous FL algorithms for stochastic optimization Mc Mahan et al. [2017], Haddadpour et al. [2019], Khaled et al. [2020]. In Fed-DVR-Q, we replace the local update scheme of taking multiple small, noisy updates by a single, large update with smaller variance, obtained by averaging the noisy updates over a minibatch of samples. The use of updates with smaller variance in variance reduced Q-learning yields the algorithm its name. While both the approaches result in similar sample complexity guarantees, the local update scheme requires more frequent averaging across clients to ensure that the bias of the estimate, also commonly referred to as client drift , is not too large. On the other hand, the minibatching approach does not encounter the problem of bias accumulation from local updates and hence can afford more infrequent averaging allowing Fed-DVR-Q to achieve optimal order of communication complexity. Compression. Fed-DVR-Q is the first algorithm in Federated Q-Learning to analyze and establish communication complexity at the bit level. All existing studies on Federated RL focus only on the frequency of communication and assume transmission of real numbers with infinite bit precision. On the other hand, the our analysis provides a more holistic view point of communication complexity and provides bounds at the bit level, which is of great practical significance. While some recent other studies [Wang et al., 2023] also consider quantization in Federated RL, their objective is to understand the impact of message size on convergence with no constraint on the frequency of communication, unlike the holistic viewpoint adopted in this work. 5 Conclusion and Future Directions We presented a complete characterization of the sample-communication trade-off for Federated Q-learning algorithms with intermittent communication. 
We showed that no Federated Q-learning algorithm with intermittent communication can achieve a linear speedup with respect to the number of agents if its number of communication rounds are sublinear in 1 1 γ . We also proposed a new Federated Q-learning algorithm called Fed-DVR-Q that uses variance reduction along with minibatching to achieve optimal-order sample and communication complexities. In particular, we showed that Fed DVR-Q has a sample complexity of O( |S||A| M(1 γ)3ε2 ), which is order-optimal in all salient problem parameters, and a communication complexity of O( 1 1 γ ) rounds and O( |S||A| 1 γ ) bits. The results in this work raise several interesting questions that are worth exploring. While we focus on the tabular setting in this work, it is of great interest to investigate to the trade-off in other settings where we use function approximation to model the Q and V functions. Moreover, it is interesting to explore the trade-off in the finite horizon setting, where there is no discount factor. Furthermore, it is also worthwhile to explore if the communication complexity can be further reduced by going beyond the class of intermittent communication algorithms. Acknowledgement We would like to thank the anonymous reviewers for their constructive feedback. This work is supported in part by the grants NSF CCF-2007911, CCF-2106778, CNS-2148212, ECCS-2318441, ONR N00014-19-1-2404 and AFRL FA8750-20-2-0504, and in part by funds from federal agency and industry partners as specified in the Resilient & Intelligent Next G Systems (RINGS) program. M. Assran, J. Romoff, N. Ballas, J. Pineau, and M. Rabbat. Gossip-based actor-learner architectures for deep reinforcement learning. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, volume 32, 2019. M. G. Azar, R. Munos, and H. J. Kappen. Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model. Machine Learning, 91(3):325 349, 2013. C. Beck and R. Srikant. Error bounds for constant step-size q-learning. Systems & Control Letters, 61(12):1203 1208, 2012. ISSN 0167-6911. V. S. Borkar and S. P. Meyn. The o.d.e. method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447 469, 2000. doi: 10.1137/S0363012997331639. M. Braverman, A. Garg, T. Ma, H. L. Nguyen, and D. P. Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, pages 1011 1020, 2016. T. Chen, K. Zhang, G. B. Giannakis, and T. Ba sar. Communication-efficient policy gradient methods for distributed reinforcement learning. IEEE Transactions on Control of Network Systems, 9(2): 917 929, 2021a. Z. Chen, S. T. Maguluri, S. Shakkottai, and K. Shanmugam. Finite-sample analysis of contractive stochastic approximation using smooth convex envelopes. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems, volume 33, pages 8223 8234, 2020. Z. Chen, S. T. Maguluri, S. Shakkottai, and K. Shanmugam. A lyapunov theory for finite-sample guarantees of asynchronous q-learning and td-learning variants, 2021b. Z. Chen, Y. Zhou, and R. Chen. Multi-agent off-policy tdc with near-optimal sample and communication complexity. In Proceedings of the 55th Asilomar Conference on Signals, Systems, and Computers, pages 504 508, 2021c. Z. Chen, Y. Zhou, R.-R. Chen, and S. Zou. 
Sample and communication-efficient decentralized actorcritic algorithms with finite-time analysis. In Proceedings of the 39th International Conference on Machine Learning, pages 3794 3834. PMLR, 2022. T. Doan, S. Maguluri, and J. Romberg. Finite-time analysis of distributed td (0) with linear function approximation on multi-agent reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, pages 1626 1635. PMLR, 2019. T. T. Doan, S. T. Maguluri, and J. Romberg. Finite-time performance of distributed temporaldifference learning with linear function approximation. SIAM Journal on Mathematics of Data Science, 3(1):298 320, 2021. J. C. Duchi, M. I. Jordan, M. J. Wainwright, and Y. Zhang. Optimality guarantees for distributed statistical estimation, 2014. URL http://arxiv.org/abs/1405.0782. L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In Proceedings of the 35th International conference on machine learning, pages 1407 1416. PMLR, 2018. E. Even-Dar and Y. Mansour. Learning rates for q-learning. Journal of Machine Learning Research, 5, 2004. ISSN 1532-4435. D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3(1):100 118, 1975. F. Haddadpour, M. M. Kamani, M. Mahdavi, and V. R. Cadambe. Local SGD with periodic averaging: Tighter analysis and adaptive synchronization. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, volume 32, 2019. H. v. Hasselt. Double q-learning. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, page 2613 2621. Curran Associates Inc., 2010. T. Jaakkola, M. Jordan, and S. Singh. Convergence of stochastic iterative dynamic programming algorithms. In Proceedings of the 7th Annual Conference on Neural Information Processing Systems, volume 6, 1993. H. Jin, Y. Peng, W. Yang, S. Wang, and Z. Zhang. Federated reinforcement learning with environment heterogeneity. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, pages 18 37. PMLR, 2022. S. M. Kakade. A natural policy gradient. Proceedings of the 15th Annual Conference on Neural Information Processing Systems, 14, 2001. M. Kearns and S. Singh. Finite-sample convergence rates for q-learning and indirect algorithms. In Proceedings of the 12th Annual Conference on Neural Information Processing Systems, 1998. A. Khaled, K. Mishchenko, and P. Richtárik. Tighter Theory for Local SGD on Identical and Heterogeneous Data. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, AISTATS, pages 4519 4529. PMLR, 2020. URL http://arxiv.org/abs/1909. 04746. S. Khodadadian, P. Sharma, G. Joshi, and S. T. Maguluri. Federated reinforcement learning: Linear speedup under markovian sampling. In Proceedings of the 39th International Conference on Machine Learning, pages 10997 11057. PMLR, 2022. J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238 1274, 2013. G. Lan, D.-J. Han, A. Hashemi, V. Aggarwal, and C. G. Brinton. Asynchronous federated reinforcement learning with policy gradient updates: Algorithm design and convergence analysis, 2024. G. Li, Y. Wei, Y. Chi, Y. Gu, and Y. Chen. Sample complexity of asynchronous q-learning: Sharper analysis and variance reduction. 
IEEE Transactions on Information Theory, 68(1):448 473, 2021. G. Li, C. Cai, Y. Chen, Y. Wei, and Y. Chi. Is q-learning minimax optimal? a tight sample complexity analysis. Operations Research, 2023. H.-K. Lim, J.-B. Kim, J.-S. Heo, and Y.-H. Han. Federated reinforcement learning for training control policies on multiple iot devices. Sensors, 20(5), 2020. ISSN 1424-8220. doi: 10.3390/s20051359. R. Liu and A. Olshevsky. Distributed TD(0) with almost no communication. IEEE Control Systems Letters, 7:2892 2897, 2023. B. Mc Mahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS, pages 1273 1282. PMLR, 2017. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pages 1928 1937. PMLR, 2016. M. Puterman. Markov decision processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014. G. Qu and A. Wierman. Finite-time analysis of asynchronous stochastic approximation and q-learning. In Proceedings of the 33rd Conference on Learning Theory, pages 3185 3205. PMLR, 2020. S. Salgia and Q. Zhao. Distributed linear bandits under communication constraints. In Proceedings of the 40th International Conference on Machine Learning, ICML, pages 29845 29875. PMLR, 2023. H. Shen, K. Zhang, M. Hong, and T. Chen. Towards understanding asynchronous advantage actorcritic: Convergence and linear speedup. IEEE Transactions on Signal Processing, 2023. C. Shi and C. Shen. Federated Multi-Armed Bandits. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pages 9603 9611, 2021. URL http://arxiv.org/abs/2101.12204. D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershalvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484 489, 2016. J. Sun, G. Wang, G. B. Giannakis, Q. Yang, and Z. Yang. Finite-time analysis of decentralized temporal-difference learning with linear function approximation. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, pages 4485 4495. PMLR, 2020. R. Sutton and A. Barton. Reinforcement learning: An introduction. MIT Press, 2018. C. Szepesvári. The asymptotic convergence-rate of q-learning. Proceedings of the 11th Annual Conference on Neural Information Processing Systems, 10, 1997. H. Tian, I. C. Paschalidis, and A. Olshevsky. One-shot averaging for distributed TD (λ) under Markov sampling. IEEE Control Systems Letters, 2024. J. N. Tsitsiklis. Asynchronous stochastic approximation and q-learning. Machine learning, 16: 185 202, 1994. J. N. Tsitsiklis and Z. Q. Luo. Communication complexity of convex optimization. Journal of Complexity, 3(3):231 243, 1987. ISSN 10902708. doi: 10.1016/0885-064X(87)90013-6. R. Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018. H.-T. Wai. On the convergence of consensus algorithms with markovian noise and gradient bias. In Proceedings of 59th IEEE Conference on Decision and Control, pages 4897 4902. IEEE, 2020. M. Wainwright. 
Stochastic approximation with cone-contractive operators: Sharp l-infinity-bounds for q -learning, 2019a. M. Wainwright. Variance-reduced q-learning is minimax optimal, 2019b. G. Wang, S. Lu, G. Giannakis, G. Tesauro, and J. Sun. Decentralized td tracking with linear function approximation and its finite-time analysis. Proceedings of the 34th Annual Conference on Neural Information Processing Systems, 33:13762 13772, 2020. H. Wang, A. Mitra, H. Hassani, G. J. Pappas, and J. Anderson. Federated temporal difference learning with linear function approximation under environmental heterogeneity, 2023. C. J. Watkins and P. Dayan. Q-learning. Machine learning, 8:279 292, 1992. J. Woo, G. Joshi, and Y. Chi. The blessing of heterogeneity in federated q-learning: Linear speedup and beyond. In Proceedings of the 40th International Conference on Machine Learning, page 37157 37216, 2023. J. Woo, L. Shi, G. Joshi, and Y. Chi. Federated offline reinforcement learning: Collaborative single-policy coverage suffices, 2024. B. Woodworth, J. Wang, A. Smith, B. Mc Mahan, and N. Srebro. Graph oracle models, lower bounds, and gaps for parallel stochastic optimization. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems, volume 31, 2018. B. Woodworth, B. Bullins, O. Shamir, and N. Srebro. The min-max complexity of distributed stochastic convex optimization with intermittent communication. In Proceedings of the 34th Conference on Learning Theory, COLT, pages 4386 4437. PMLR, 2021. Z. Xie and S. Song. Fedkl: Tackling data heterogeneity in federated reinforcement learning by penalizing kl divergence. IEEE Journal on Selected Areas in Communications, 41(4):1227 1242, 2023. T. Yang, S. Cen, Y. Wei, Y. Chen, and Y. Chi. Federated natural policy gradient methods for multi-task reinforcement learning, 2023. E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda. A survey of autonomous driving: Common practices and emerging technologies. IEEE access, 8:58443 58469, 2020. S. Zeng, M. A. Anwar, T. T. Doan, A. Raychowdhury, and J. Romberg. A decentralized policy gradient approach to multi-task reinforcement learning. In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, UAI, pages 1002 1012. PMLR, 2021a. S. Zeng, T. T. Doan, and J. Romberg. Finite-time analysis of decentralized stochastic approximation with applications in multi-agent and multi-task learning. In Proceedings of the 60th IEEE Conference on Decision and Control, pages 2641 2646. IEEE, 2021b. C. Zhang, H. Wang, A. Mitra, and J. Anderson. Finite-time analysis of on-policy heterogeneous federated reinforcement learning, 2024. A Additional details about REFINEESTIMATE We outline below the pseudocode of the REFINEESTIMATE routine described in Sec. 4.1.1. Algorithm 3: REFINEESTIMATE(Q, B, I, L, D, J) 1: Input: Initial estimate Q, batch size B, Number of iterations I, recentering sample size L, quantization bound D, message size J 2: // Build an approximation for T Q which is to be used for variance reduced updates 3: for m = 1, 2, . . . , M do 4: Draw L/M i.i.d. samples from the generative model for each (s, a) pair and evaluate e T (m) L (Q) according to Eqn. (9) 5: Send C (e T (m) L (Q) Q; D, J) to the server 6: Receive 1 M PM m=1 C (e T (m) L (Q) Q; D, J) from the server and compute e TL(Q) according to Eqn. (10) 7: end for 8: Q0 Q 9: // Variance reduced updates with minibatching 10: for i = 1, 2, . . . , I do 11: for m = 1, 2, . . . , M do 12: Draw B i.i.d. 
samples from the from the generative model for each (s, a) pair 13: Compute Qm i 1 2 according to Eqn. (11) 14: Send C (Qm i 1 2 Qi 1; D, J) to the server 15: Receive 1 M PM m=1 C (Qm i Qi 1; D, J) from the server and compute Qi according to Eqn. (12) 16: end for 17: end for 18: return QI B Proof of Theorem 1 In this section, we prove the main result of the paper, the lower bound on the communication complexity of federated Q-learning algorithms. At a high level, the proof consists of the following three steps. Introducing the hard MDP instance. The proof builds upon analyzing the behavior of a generic algorithm A outlined in Algorithm 1 over a particular instance of MDP. The particular choice of MDP is inspired by, and borrowed from, other lower bound proofs in the single-agent setting [Li et al., 2023] and helps highlight core issues that lie at the heart of the sample-communication complexity trade-off. Following Li et al. [2023], the construction is first over a small state-action space that allows us to focus on a simpler problem before generalizing it to larger state-action spaces. Establishing the performance of intermittent communication algorithms. In the second step, we analyze the error of the iterates generated by an intermittent communication algorithm A . The analysis is inspired by the single-agent analysis in Li et al. [2023], which highlights the underlying bias-variance trade-off. Through careful analysis of the algorithm dynamics in the federated setting, we uncover the impact of communication on the bias-variance trade-off and the resulting error of the iterates to obtain the lower bound on the communication complexity. Generalization to larger MDPs. As the last step, we generalize our construction of the hard instance to more general state-action space and extend our insights to obtain the statement of the theorem. B.1 Introducing the hard instance We first introduce an MDP instance Mh that we will use throughout the proof to establish the result. Note that this MDP is identical to the one considered in Li et al. [2023] to establish the lower bounds on the performance of single-agent Q-learning algorithm. It consists of four states S = {0, 1, 2, 3}. Let As denote the action set associated with the state s. The probability transition kernel and the reward function of Mh is given as follows: A0 = {1} P(0|0, 1) = 1 r(0, 1) = 0, (14a) A1 = {1, 2} P(1|1, 1) = p P(0|1, 1) = 1 p r(1, 1) = 1, (14b) P(1|1, 2) = p P(0|1, 2) = 1 p r(1, 2) = 1, (14c) A2 = {1} P(2|2, 1) = p P(0|2, 1) = 1 p r(2, 1) = 1, (14d) A3 = {1} P(3|3, 1) = 1 r(3, 1) = 1, (14e) where the parameter p = 4γ 1 3γ . We have the following results about the optimal Q and V functions of this hard MDP instance. Lemma 1 ([Li et al., 2023, Lemma 3]). Consider the MDP Mh constructed in Eqn. (14). We have, V (0) = Q (0, 1) = 0 V (1) = Q (1, 1) = Q (1, 2) = V (2) = Q (2, 1) = 1 1 γp = 3 4(1 γ) V (3) = Q (3, 1) = 1 1 γ . Throughout the next section of the proof, we focus on this MDP with four states and two actions. In Appendix B.4, we generalize the proof to larger state-action spaces. B.2 Notation and preliminary results For convenience, we first define some notation that will be used throughout the proof. Useful relations of the learning rates. We consider two kinds of step size sequences that are commonly used in Q-learning the constant step size schedule, i.e., ηt = η for all t {1, 2, . . . 
, T} and the rescaled linear step size schedule, i.e., ηt = 1 1+cη(1 γ)t, where cη > 0 is a universal constant that is independent of the problem parameters. We define the following quantities: η(t) k = ηk i=k+1 (1 ηi(1 γp)) for all 0 k t, (15) where we take η0 = 1 and use the convention throughout the proof that if a product operation does not have a valid index, we take the value of that product to be 1. For any integer 0 τ < t, we have the following relation, which will be proved at the end of this subsection for completeness: k=τ+1 (1 ηk(1 γp)) + (1 γp) k=τ+1 η(t) k = 1. (16) Similarly, we also define, eη(t) k = ηk i=k+1 (1 ηi) for all 0 k t, (17) which satisfies the relation k=τ+1 (1 ηk) + k=τ+1 eη(t) k = 1. (18) for any integer 0 τ < t. The claim follows immediately by plugging p = 0 in (16). Note that for constant step size, the sequence eη(t) k is clearly increasing. For the rescaled linear step size, we have, eη(t) k 1 eη(t) k = ηk ηk 1(1 ηk) = 1 (1 cη(1 γ))ηk 1 cη(1 γ)ηk 1 (19) whenever cη 1 1 γ . Thus, eη(t) k is an increasing sequence as long as cη 1 1 γ . Similarly, η(t) k is also clearly increasing for the constant step size schedule. For the rescaled linear step size schedule, we have, η(t) k 1 η(t) k = ηk ηk 1(1 ηk(1 γp)) ηk ηk 1(1 ηk) 1, whenever cη 1 1 γ . The last bound follows from Eqn. (19). Proof of (16). We can show the claim using backward induction. For the base case, note that, (1 γp)η(t) t + (1 γp)η(t) t 1 = (1 γp)ηt + (1 γp)ηt 1(1 (1 γp)ηt) = 1 (1 ηt(1 γp))(1 ηt 1(1 γp)) = 1 k=t 1 (1 ηk(1 γp)), as required. Assume (16) is true for some τ. We have, k=τ η(t) k = (1 γp)ηt τ + (1 γp) k=τ+1 η(t) k k=τ+1 (1 ηk(1 γp)) + 1 k=τ+1 (1 ηk(1 γp)) k=τ (1 ηk(1 γp)), thus completing the induction step. Sample transition matrix. Recall Z S|S||A| is a random vector whose (s, a)-th coordinate is drawn from the distribution P( |s, a). We use b P m t to denote the sample transition at time t and agent m obtained by averaging B i.i.d. samples from the generative model. Specifically let {Zm t,b}B b=1 denote a collection of B i.i.d. copies of Z collected at time t at agent m. Then, for all s, a, s , b P m t (s |s, a) = 1 b=1 P m t,b(s |s, a), (20) where P m t,b(s |s, a) = 1{Zm t,b(s, a) = s } for s S. Preliminary relations of the iterates. We state some preliminary relations regarding the evolution of the Q-function and the value function across different agents that will be helpful for the analysis later. We begin with the state 0, where we have Qm t (0, 1) = V m t (0) = 0 for all agents m [M] and t [T]. This follows almost immediately from the fact that state 0 is an absorbing state with zero reward. Note that Qm 0 (0, 1) = V m 0 (0) = 0 holds for all clients m [M]. Assuming that Qm t 1(0, 1) = V m t 1(0) = 0 for all clients for some time instant t 1, by induction, we have, Qm t 1/2(0, 1) = (1 ηt)Qm t 1(0, 1) + ηt(γV m t 1(0)) = 0. Consequently, Qm t (0, 1) = 0 and V m t (0) = 0, for all agents m, irrespective of whether there is averaging. For state 3, the iterates satisfy the following relation: Qm t 1/2(3, 1) = (1 ηt)Qm t 1(3, 1) + ηt(1 + γV m t 1(3)) = (1 ηt)Qm t 1(3, 1) + ηt(1 + γQm t 1(3, 1)) = (1 ηt(1 γ))Qm t 1(3, 1) + ηt, where the second step follows by noting V m t (3) = Qm t (3, 1). Once again, one can note that averaging step does not affect the update rule implying that the following holds for all m [M] and t [T]: V m t (3) = Qm t (3, 1) = i=k+1 (1 ηi(1 γ)) i=1 (1 ηi(1 γ)) where the last step follows from Eqn. (16) with p = 1. 
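As a quick numerical sanity check of the closed form above (a sketch we add for illustration, not part of the original proof), the snippet below verifies that unrolling the recursion $Q^m_t(3,1) = (1-\eta_t(1-\gamma))Q^m_{t-1}(3,1) + \eta_t$ from $Q^m_0(3,1)=0$ matches $\frac{1}{1-\gamma}\big(1 - \prod_{i=1}^{t}(1-\eta_i(1-\gamma))\big)$, as implied by Eqn. (16) with $p = 1$, under the rescaled linear step sizes; the specific parameter values are arbitrary.

```python
import numpy as np

gamma, c_eta, T = 0.9, 2.0, 200
eta = 1.0 / (1.0 + c_eta * (1.0 - gamma) * np.arange(1, T + 1))  # rescaled linear step sizes

Q = 0.0
for t in range(T):
    Q = (1.0 - eta[t] * (1.0 - gamma)) * Q + eta[t]               # recursion for Q_t(3, 1)

closed_form = (1.0 - np.prod(1.0 - eta * (1.0 - gamma))) / (1.0 - gamma)
assert np.isclose(Q, closed_form)                                 # the two expressions coincide
```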
Similarly, for state 1 and 2, we have, Qm t 1/2(1, 1) = (1 ηt)Qm t 1(1, 1) + ηt(1 + γ b P m t (1|1, 1)V m t 1(1)), (22) Qm t 1/2(1, 2) = (1 ηt)Qm t 1(1, 2) + ηt(1 + γ b P m t (1|1, 2)V m t 1(1)), (23) Qm t 1/2(2, 1) = (1 ηt)Qm t 1(2, 1) + ηt(1 + γ b P m t (2|2, 1)V m t 1(2)). (24) Since the averaging makes a difference in the update rule, we further analyze the update as required in later proofs. B.3 Main analysis We first focus on establishing a bound on the number of communication rounds, i.e., CCround(A ) (where we drop the dependency with other parameters for notational simplicity), and then use this lower bound to establish the bound on the bit level communication complexity CCbit(A ). To establish the lower bound on CCround(A ) for any intermittent communication algorithm A , we analyze the convergence behavior of A on the MDP Mh. We assume that the averaging step in line 6 of Algorithm 1 is carried out exactly. Since the use of compression only makes the problem harder, it is sufficient for us to consider the case where there is no loss of information in the averaging step for establishing a lower bound. Lastly, throughout the proof, without loss of generality we assume that log N 1 1 γ , (25) otherwise, the lower bound in Theorem 1 reduces to the trivial lower bound. We divide the proof into following three parts based on the choice of learning rates and batch sizes: 1. Small learning rates: For constant learning rates, 0 η < 1 (1 γ)T and for rescaled linear learning rates, the constant cη satisfies cη log T. 2. Large learning rates with small ηT /(BM): For constant learning rates, η 1 (1 γ)T and for rescaled linear learning rates, the constant cη satisfies 0 cη log T 1 1 γ (c.f. (25)). Additionally, the ratio ηT BM satisfies ηT BM 1 γ 3. Large learning rates with large ηT /(BM): We have the same condition on the learning rates as above. However, in this case the ratio ηT BM satisfies ηT BM > 1 γ We consider each of the cases separately in the following three subsections. B.3.1 Small learning rates In this subsection, we prove the lower bound for small learning rates, which follow from similar arguments in Li et al. [2023]. For this case, we focus on the dynamics of state 2. We claim that the same relation established in Li et al. [2023] continues to hold, which will be established momentarily: E[V m T (2)] = j=1 E[V j T (2)] k=1 η(t) k = 1 η(T ) 0 1 γp . (26) Consequently, for all m [M], we have V (2) E[V m T (2)] = η(T ) 0 1 γp. (27) To obtain lower bound on V (2) E[V m T (2)], we need to obtain a lower bound on η(T ) 0 , which from [Li et al., 2023, Eqn. (120)] we have log(η(T ) 0 ) 1.5 t=1 η(1 γp) 2 1 t log T 2 = η(T ) 0 e 2 when T 16 for both choices of learning rates. On plugging this bound in (27), we obtain, E[ Qm T Q ] E[|Q (2) Qm T (2)|] V (2) E[V m T (2)] 3 4e2(1 γ) holds for all m [M], N 1 and M 2. Thus, it can be noted that the error rate ER(A ; N, M) is bounded away from a constant value irrespective of the number of agents and the number of communication rounds. Thus, even with CCround = Ω(T), we will not observe any collaborative gain if the step size is too small. Proof of (26). Recall that from (24), we have, Qm t 1/2(2, 1) = (1 ηt)V m t 1(2) + ηt(1 + γ b P m t (2|2, 1)V m t 1(2)). Here, Qm t 1(2, 1) = V m t 1(2) as the second state has only a single action. When t is not an averaging instant, we have, V m t (2) = Qm t (2, 1) = (1 ηt)V m t 1(2) + ηt(1 + γ b P m t (2|2, 1)V m t 1(2)). 
(29) On taking expectation on both sides of the equation, we obtain, E[V m t (2)] = (1 ηt)E[V m t 1(2)] + ηt(1 + γE[ b P m t (2|2, 1)V m t 1(2)]) = (1 ηt)E[V m t 1(2)] + ηt 1 + γE[ b P m t (2|2, 1)]E[V m t 1(2)] = (1 ηt)E[V m t 1(2)] + ηt 1 + γp E[V m t 1(2)] = (1 ηt(1 γp))E[V m t 1(2)] + ηt. (30) In the second step, we used the fact that b P m t (2|2, 1) is independent of V m t 1(2). Similarly, if t is an averaging instant, we have, V m t (2) = Qm t (2, 1) = 1 j=1 Qj t 1/2(2, 1) j=1 V j t 1(2) + 1 j=1 ηt(1 + γ b P j t (2|2, 1)V j t 1(2)). (31) Once again, upon taking expectation we obtain, E[V m t (2)] = (1 ηt) 1 j=1 E[V j t 1(2)] + 1 j=1 ηt(1 + γE[ b P j t (2|2, 1)V j t 1(2)]) j=1 E[V j t 1(2)] + 1 j=1 ηt(1 + γp E[V j t 1(2)]) = (1 ηt(1 γp)) j=1 E[V j t 1(2)] Eqns. (30) and (32) together imply that for all t [T], 1 M m=1 E[V m t (2)] = (1 ηt(1 γp)) m=1 E[V m t 1(2)] On unrolling the above recursion with V m 0 = 0 for all m [M], we obtain the desired claim (26). B.3.2 Large learning rates with small ηT BM In this subsection, we prove the lower bound for case of large learning rates when the ratio ηT BM is small. For the analysis in this part, we focus on the dynamics of state 1. Unless otherwise specified, throughout the section we implicitly assume that the state is 1. We further define a key parameter that will play a key role in the analysis: τ := min{k N : t k, ηt ηk 3ηt}. (34) It can be noted that for constant step size sequence τ = 1 and for rescaled linear stepsize τ = T/3. Step 1: introducing an auxiliary sequence. We define an auxiliary sequence b Qm t (a) for a {1, 2} and all t = 1, 2, . . . , T to aid our analysis, where we drop the dependency with state s = 1 for simplicity. The evolution of the sequence b Qm t is defined in Algorithm 4, where b V m t = maxa {1,2} b Qm t (a). In other words, the iterates { b Qm t } evolve exactly as the iterates of Algorithm 1 except for the fact that sequence { b Qm t } is initialized at the optimal Q-function of the MDP. We would like to point out that we assume that the underlying stochasticity is also identical in the sense that the evolution of both Qm t and b Qm t is governed by the same b P m t matrices. The following lemma controls the error between the iterates Qm t and b Qm t , allowing us to focus only on b Qm t . Algorithm 4: Evolution of b Q 1: Input : T, R, {ηt}T t=1, C = {tr}R r=1, B 2: Set b Qm 0 (a) Q (1, a) for a {1, 2} and all agents m // Different initialization 3: for t = 1, 2, . . . , T do 4: for m = 1, 2, . . . , M do 5: Compute b Qm t 1 2 according to Eqn. (7) 6: Compute b Qm t according to Eqn. (8) 7: end for 8: end for Lemma 2. The following relation holds for all agents m [M], all t [T] and a {1, 2}: Qm t (1, a) b Qm t (a) 1 1 γ i=1 (1 ηi(1 γ)). By Lemma 2, bounding the error of the sequence b Qm t allows us to obtain a bound on the error of Qm t . To that effect, we define the following terms for any t T and all m [M]: m t (a) := b Qm t (a) Q (1, a); a = 1, 2; m t,max = max a {1,2} m t (a). In addition, we use t = 1 M PM m=1 m t to denote the error of the averaged iterate1, and similarly, t,max := max a {1,2} t(a). (35) We first derive a basic recursion regarding m t (a). From the iterative update rule in Algorithm 4, we have, m t (a) = (1 ηt) m t 1(a) + ηt(1 + γ b P m t (1|1, a)b V m t 1 Q (1, a)) = (1 ηt) m t 1(a) + ηtγ( b P m t (1|1, a)b V m t 1 p V (1)) = (1 ηt) m t 1(a) + ηtγ(p(b V m t 1 V (1)) + ( b P m t (1|1, a) p)b Vt 1) = (1 ηt) m t 1(a) + ηtγ(p m t 1,max + ( b P m t (1|1, a) p)b V m t 1). 
Here in the last line, we used the following relation: m t,max = max a {1,2}( b Qm t (a) Q (1, a)) = max a {1,2} b Qm t (a) V (1) = b V m t 1 V (1), as Q (1, 1) = Q (1, 2) = V (1). Recursively unrolling the above expression, and using the expression (17), we obtain the following relation: for any t < t when there is no averaging during the interval (t , t) k=t +1 (1 ηk) k=t +1 eη(t) k γ(p m k 1,max + ( b P m k (1|1, a) p)b V m k 1). (36) For any t , t with t < t, we define the notation k=t +1 (1 ηk), (37) ξm t ,t(a) := k=t +1 eη(t) k γ( b P m k (1|1, a) p)b V m k 1, a = 1, 2; (38) ξm t ,t,max := max a {1,2} ξm t ,t(a). (39) Note that by definition, E[ξm t ,t(a)] = 0 for a {1, 2} and all m, t and t. Plugging them into the previous expression leads to the simplified expression m t (a) = φt ,t m t (a) + k=t +1 eη(t) k γp m k 1,max + ξm t ,t(a). We specifically choose t and t to be the consecutive averaging instants to analyze the behaviour of m t across two averaging instants. Consequently, we can rewrite the above equation as m t (a) = φt ,t t (a) + k=t +1 eη(t) k γp m k 1,max + ξm t ,t(a). (40) Furthermore, after averaging, we obtain, t(a) = φt ,t t (a) + 1 k=t +1 eη(t) k γp m k 1,max m=1 ξm t ,t(a). (41) 1We use this different notation in appendix as opposed to the half-time indices used in the main text to improve readability of the proof. Step 2: deriving a recursive bound for E[ t,max]. Bounding (40), we obtain, m t,max φt ,t t ,max + k=t +1 eη(t) k γp m k 1,max + ξm t ,t,max φt ,t| t (1) t (2)|, (42a) m t,max φt ,t t ,max + k=t +1 eη(t) k γp m k 1,max + ξm t ,t,max, (42b) where in the first step we used the fact that max{a1 + b1, a2 + b2} min{a1, a2} + max{b1, b2} = max{a1, a2} + max{b1, b2} |a1 a2|. (43) On taking expectation, we obtain, E[ m t,max] φt ,t E[ t ,max] + k=t +1 eη(t) k γp E[ m k 1,max] + E[ξm t ,t,max] φt ,t E[| t (1) t (2)|], E[ m t,max] φt ,t E[ t ,max] + k=t +1 eη(t) k γp E[ m k 1,max] + E[ξm t ,t,max]. (44b) Similarly, using (41) and (43) we can write, t,max φt ,t t ,max + 1 k=t +1 eη(t) k γp m k 1,max φt ,t| t (1) t (2)| m=1 ξm t ,t(1), 1 m=1 ξm t ,t(2) = E[ t,max] φt ,t E[ t ,max] + 1 k=t +1 eη(t) k γp E[ m k 1,max] φt ,t E[| t (1) t (2)|] m=1 ξm t ,t(1), 1 m=1 ξm t ,t(2) On combining (44b) and (45b), we obtain, E[ t,max] 1 E[ m t,max] E[ξm t ,t,max] φt ,t E[| t (1) t (2)|] m=1 ξm t ,t(1), 1 m=1 ξm t ,t(2) In order to simplify (46), we make use of the following lemmas. Lemma 3. Let t < t be two consecutive averaging instants. Then for all m [M], E[ m t,max] E[ξm t ,t,max] k=t +1 (1 ηk(1 γp)) E[ t ,max] + E[ξm t ,t,max] k=t +1 η(t) k 1 + φt ,t E[| t (1) t (2)|], where [x]+ = max{x, 0}. Lemma 4. For all consecutive averaging instants t , t satisfying t max{t , τ} 1/ητ and all m [M], we have, E[ξm t ,t,max] 1 240 log 180B ηT (1 γ) ν ν + 1, m=1 ξm t ,t(1), 1 m=1 ξm t ,t(2) 240 log 180BM ηT (1 γ) ν ν + where ν := r 20ηT B(1 γ). Lemma 5. For all t {tr}R r=1, we have E[| t(1) t(2)|] 8ηT 3BM(1 γ). 
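Before combining the above lemmas, the following Monte-Carlo sketch may help build intuition for the scale of the quantities they control: it simulates the coupled iterates of Algorithm 4 on state 1 of the hard instance and compares the empirical value of $\mathbb{E}[|\bar\Delta_T(1) - \bar\Delta_T(2)|]$ with the $\sqrt{\eta_T/(BM(1-\gamma))}$ scale appearing in Lemma 5. The step size, batch size, horizon and averaging period below are illustrative choices and not the ones prescribed in the proof.

```python
import numpy as np

# Monte-Carlo sketch of the coupled iterates of Algorithm 4 on state 1 of the
# hard instance: hatQ is initialized at Q*(1, .), updated with B-sample
# estimates hatP^m_t(1|1, a) of the self-transition probability p, and
# averaged across agents at every averaging instant.
rng = np.random.default_rng(0)
gamma = 0.9
p = (4 * gamma - 1) / (3 * gamma)                 # hard-instance transition parameter
M, B, T, eta, avg_every = 10, 4, 1000, 0.01, 50   # illustrative choices
q_star = 1.0 / (1.0 - gamma * p)                  # Q*(1, 1) = Q*(1, 2) = V*(1)

gaps = []
for _ in range(100):                              # independent trials
    Q = np.full((M, 2), q_star)                   # hatQ^m_0(a) = Q*(1, a)
    for t in range(1, T + 1):
        V = Q.max(axis=1)                         # hatV^m_{t-1}
        P_hat = rng.binomial(B, p, size=(M, 2)) / B   # averaged B-sample estimate
        Q = (1 - eta) * Q + eta * (1.0 + gamma * P_hat * V[:, None])
        if t % avg_every == 0:
            Q[:] = Q.mean(axis=0)                 # exact averaging round
    delta_bar = Q.mean(axis=0) - q_star           # bar Delta_T(a)
    gaps.append(abs(delta_bar[0] - delta_bar[1]))

# empirical mean gap vs. the sqrt(eta / (B M (1 - gamma))) scale of Lemma 5
print(np.mean(gaps), np.sqrt(eta / (B * M * (1 - gamma))))
```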
Thus, on combining the results from Lemmas 3, 4, and 5 and plugging them into (46), we obtain the following relation for t, t τ: k=t +1 (1 ηk(1 γp)) E[ t ,max] + E[ξm t ,t,max] k=t +1 η(t) k 1 2φt ,t E[| t (1) t (2)|] + E m=1 ξm t ,t(1), 1 m=1 ξm t ,t(2) (1 ητ(1 γp))t t E[ t ,max] + 1 (1 ητ(1 γp))t t 5760 log 180B ηT (1 γ) (1 γp) ν ν + 1 1 t t 8 2(1 ηT )t t s 8ηT 3BM(1 γ) + 1 240 log 180BM ηT (1 γ) ν ν + where we used the relation φt ,t (1 ηT )t t , as well as the value of ν as defined in Lemma 4 along with the fact k=t +1 η(t) k 1 1 (1 ητ(1 γp))t t 24(1 γp) (48) for all t, t τ such that t t 8/ητ. Proof of (48). We have, k=t +1 η(t) k 1 = i=k+1 (1 ηi(1 γp)) i=k+1 (1 ητ(1 γp)) k=t +1 (1 ητ(1 γp))t k 1 1 (1 ητ(1 γp))t t 1 (1 ητ(1 γp))t t 3(1 γp) 1. (49) To show (48), it is sufficient to show that 1 (1 ητ(1 γp))t t 7 for t t 8/ητ. Thus, for t t 8/ητ we have, 1 (1 ητ(1 γp))t t 3(1 γp) 1 exp( ητ(1 γp) (t t )) 1 exp( 8(1 γp)) 3(1 γp) . (50) Since γ 5/6, 1 γp 2/9. For x 2/9, the function f(x) = 1 e 8x 3x 8/7, proving the claim. Step 3: lower bounding E[ T,max]. We are now interested in evaluating E[ T,max] based on the recursion (47). To this effect, we introduce some notation to simplify the presentation. Let Rτ := min{r : tr τ}. (51) For r = Rτ, . . . , R, we define the following terms: xr := E[ tr,max], αr := (1 ητ(1 γp))tr tr 1, βr := (1 ηT )tr tr 1, Ir := {r r > Rτ : tr tr 1 8/ητ}, 5760 log 180B ηT (1 γ) (1 γp) ν ν + 1, 32ηT 3BM(1 γ), 240 log 180BM ηT (1 γ) ν ν + With these notations in place, the recursion in (47) can be rewritten as xr αrxr 1 βr C2 + C31{r Ir} + (1 αr)C11{r Ir}, (52) for all r Rτ. We claim that xr satisfies the following relation for all r Rτ + 1 (whose proof is deferred to the end of this step): where we recall that if there is no valid index for a product, its value is taken to be 1. Invoking (53) for r = R and using the relation x Rτ 1 0, we obtain, C31{k Ik} + C1 32ηT 3BM(1 γ) + 5760 log 180B ηT (1 γ) (1 γp) ν ν + 1, where we used the fact βk QR i=k+1 αi 1 and that C3 0. Consider the expression Y i/ IR αi = Y i/ IR (1 ητ(1 γp))ti ti 1 1 ητ(1 γp) X i/ IR (ti ti 1) | {z } =:T1 Consequently, = 1 (1 ητ(1 γp))T τ T1 1 exp ( ητ(1 γp) (T τ T1)) . (56) Note that T1 satisfies the following bound i/ IR (ti ti 1) (R |IR|) 8 We split the remainder of the analysis based on the step size schedule. For the constant step size schedule, i.e., ηt = η 1 (1 γ)T , we have, Rτ = 0, with τ = 0 and t0 = 0 (as all agents start at the same point). If R 1 96000(1 γ) log( 180B η(1 γ)), then, (55), (56) and (57) yield the following relations: η T 12000 log(180N), i/ IR αi 1 η(1 γp) T1 1 32R(1 γ) 3 1 1 9000 log(180N), 1 exp ( η(1 γp) (T T1)) 1 exp 4 1 1 9000 log(180N) On plugging the above relations into (54), we obtain 96000 log 180B η(1 γ) (1 γ) ν ν + 1 ν 5 where recall that ν := r 20η 3B(1 γ). Consider the function f(x) = x x+1 x 5 claim that for x [0, M] and all M 2, 20 min{x, 1}. (59) The proof of the above claim is deferred to the end of the section. In light of the above claim, we have, 96000 log 180B η(1 γ) (1 γ) 7 20η 3B(1 γ) 40 96000 log (180N) 7 20 3(1 γ)4N where we used the fact that M 2, x log(1/x) is an increasing function and the relation ν M = 20η 3BM(1 γ) 1 Next, we consider the rescaled linear step size schedule, where τ = T/3 (cf. (34)). To begin, we assume t Rτ max{ 3T 4 , T 1 6ητ (1 γp)}. It is straightforward to note that 4 , T 1 6ητ(1 γp) 4 if cη 3 T 1 6ητ (1 γp) if cη < 3. 
If R 1 384000(1 γ) log 180B ηT (1 γ) (5+cη) then, (55), (56) and (57) yield the following rela- i/ IR αi 1 ητ(1 γp) T1 1 32R(1 γ) 3 1 1 36000. For cη 3, we have, 1 exp ( ητ(1 γp) (T t Rτ T1)) 1 exp (1 γ)T (3 + cη(1 γ)T) + 32R(1 γ) 1 2(3 + cη), where we used T 1 1 γ in the second step. Similarly, for cη < 3, we have, 1 exp ( ητ(1 γp) (T t Rτ T1)) 6 + 32R(1 γ) On plugging the above relations into (54), we obtain 384000 log 180B ηT (1 γ) (1 γ)(5 + cη) ν ν + 1 ν 18 384000 log 180B ηT (1 γ) (1 γ)(5 + cη) 7 20ηT 3B(1 γ) 384000 log 180B ηT (1 γ) (5 + cη) 7 20ηT 3B(1 γ)3 1.6 384000 log (180N(1 + log N)) (5 + log N) 7 20 3B(1 + log N)(1 γ)4N where we again used the facts that M 2, cη log N, x log(1/x) is an increasing function and the relation ν M = 20ηT 3BM(1 γ) 1. Last but not least, let us consider the rescaled linear step size schedule case when t Rτ > max{ 3T 4 , T 1 6ητ (1 γp)}. The condition implies that the time between the communication rounds Rτ 1 and Rτ is at least T0 := max{ 5T 3 1 6ητ (1 γp)}. Thus, (47) yields that 1 (1 ητ(1 γp))T0 5760 log 180 BηT (1 γ) (1 γp) ν ν + 1 2(1 ηT )T0 s 8ηT 3BM(1 γ). Using the above relation along with (53), we can conclude that x R (1 ητ(1 γp))T t Rτ 1 (1 ητ(1 γp))T0 5760 log 180 BηT (1 γ) (1 γp) 2(1 ηT )T0 (1 ητ(1 γp))T t Rτ 8ηT 3BM(1 γ) RC2. (63) In the above relation, we used the trivial bounds C1, C3 0 and a crude bound on the term corresponding to C2, similar to (54). Let us first consider the case of cη 3. We have, 1 (1 ητ(1 γp))T0 1 exp ( ητ(1 γp)5T/12) 1 exp 5(1 γ)T 3(3 + cη(1 γ)T) (1 ητ(1 γp))T t Rτ 1 ητ(1 γp)T 4 1 (1 γ)T (3 + cη(1 γ)T) 1 1 Similarly, for cη < 3, we have, 1 (1 ητ(1 γp))T0 1 exp ητ(1 γp)2T 1 exp 8(1 γ)T 3(3 + cη(1 γ)T) + 1 (1 ητ(1 γp))T t Rτ 1 ητ(1 γp) 6ητ(1 γp) 5 The above relations implies that (1 ητ(1 γp))T t Rτ (1 (1 ητ(1 γp))T0) c for some constant c, which only depends on cη. On plugging this into (63), we obtain a relation that is identical to that in (54) up to leading constants. Thus, by using a similar sequence of argument as used to obtain (61), we arrive at the same conclusion as for the case of t Rτ max{ 3T 4 , T 1 6ητ (1 γp)}. Step 4: finishing up the proof. Thus, (60), (61) along with the above conclusion together imply that there exists a numerical constant c0 > 0 such that E[|b V m T (1) V (1)|] E[ T,max] c0 log3 N min The above equation along with Lemma 2 implies E[|V m T V (1)|] c0 log3 N min i=1 (1 ηi(1 γ)). (65) On the other hand, from (21) we know that E[|V m T (3) V (3)|] 1 1 γ i=1 (1 ηi(1 γ)). (66) E[ Qm T Q ] E [max {|V m T (3) V (3)|, |V m T (1) V (1)|}] max {E [|V m T (3) V (3)|] , E [|V m T (1) V (1)|]} i=1 (1 ηi(1 γ)), min i=1 (1 ηi(1 γ)) where the third step follows from (65) and (66) and the fourth step uses max{a, b} (a + b)/2. Thus, from (28) and (67) we can conclude that whenever CCround = O 1 (1 γ) log2 N , ER(A ; N, M) = Ω 1 log3 N for all values of M 2. In other words, for any algorithm to achieve any collaborative gain, its communication complexity should satisfy CCround = Ω 1 (1 γ) log2 N , as required. Proof of (53). We now return to establish (53) using induction. For the base case, (52) yields x Rτ +1 αRτ +1x Rτ βRτ +1C2 + C31{Rτ + 1 IRτ +1} + (1 αRτ +1)C11{Rτ + 1 IRτ +1}. (68) Note that this is identical to the expression in (53) for r = Rτ + 1 as i/ IRτ +1 αi i IRτ +1 αi = (1 αRτ +1)1{Rτ + 1 IRτ +1} based on the adopted convention for products with no valid indices. For the induction step, assume (53) holds for some r Rτ + 1. 
On combining (52) and (53), we obtain, xr+1 αr+1xr βr+1C2 + C31{(r + 1) Ir+1} + (1 αr+1)C11{r + 1 Ir+1} βr+1C2 + C31{(r + 1) Ir+1} + (1 αr+1)C11{(r + 1) Ir+1} + (1 αr+1)C11{(r + 1) Ir+1}. (69) If (r + 1) / Ir+1, then 1 Q i Ir αi = 1 Q i Ir+1 αi and αr+1 Q i/ Ir αi = Q i/ Ir+1 αi . Consequently, + (1 αr+1)C11{(r + 1) Ir+1} = C1 On the other hand, if (r + 1) Ir+1, then Q i/ Ir αi = Q i/ Ir+1 αi . Consequently, we have, + (1 αr+1)C11{(r + 1) Ir+1} + (1 αr+1)C1 Combining (69), (70) and (71) proves the claim. Proof of (59). To establish this result, we separately consider the cases x 1 and x 1. When x 1, we have f(x) = x x + 1 1 5 where in the last step, we used the relation M 2. Let us now consider the case x 1. The second derivative of f is given by f (x) = 1 2(x+1)3 . Clearly, for all x 1, f < 0 implying that f is a concave function. It is well-known that a continuous, bounded, concave function achieves its minimum values over a compact interval at the end points of the interval (Bauer s minimum principle). For all M 2, we have, Consequently, we can conclude that for all x [1, Combining (72) and (73) proves the claim. B.3.3 Large learning rates with large ηT BM In order to bound the error in this scenario, note that ηT BM controls the variance of the stochastic updates in the fixed point iteration. Thus, when ηT BM is large, the variance of the iterates is large, resulting in a large error. To demonstrate this effect, we focus on the dynamics of state 2. This part of the proof is similar to the large learning rate case of Li et al. [2023]. For all t [T], define: V t(2) := 1 m=1 V m t (2). (74) Thus, from (33), we know that E[V t(2)] obeys the following recursion: E[V t(2)] = (1 ηt(1 γp))E[V t 1(2)] + ηt. Upon unrolling the recursion, we obtain, E[V T (2)] = k=t+1 (1 ηk(1 γp)) E[V t(2)] + k=t+1 η(T ) k . Thus, the above relation along with (16) and the value of V (2) yields us, V (2) E[V T (2)] = k=t+1 (1 ηk(1 γp)) 1 1 γp E[V t(2)] . (75) Similar to Li et al. [2023], we define τ := min 0 t T 2 E[(V t)2] 1 4(1 γ)2 for all t + 1 t T . If such a τ does not exist, it implies that either E[(V T )2] < 1 4(1 γ)2 or E[(V T 1)2] < 1 4(1 γ)2 . If the former is true, then, V (2) E[V T (2)] = 3 4(1 γ) q E[(V T )2] > 1 4(1 γ). (76) Similarly, if E[(V T 1)2] < 1 4(1 γ)2 , it implies E[V T 1] < 1 2(1 γ). Using (33), we have, E[V T (2)] = (1 ηT (1 γp))E[V T 1(2)] + ηT E[V T 1(2)] + 1 < 1 2(1 γ) + 1 6(1 γ) = 2 3(1 γ). Consequently, V (2) E[V T (2)] > 3 4(1 γ) 2 3(1 γ) > 1 12(1 γ). (77) For the case when τ exists, we divide the proof into two cases. We first consider the case when the learning rates satisfy: k=τ +1 (1 ηk(1 γp)) 1 The analysis for this case is identical to that considered in Li et al. [2023]. We explicitly write the steps for completeness. Specifically, V (2) E[V T (2)] = k=τ +1 (1 ηk(1 γp)) ! 1 1 γp E[V τ (2)] 2 3 4(1 γ) q E[(V τ (2))2] 2 3 4(1 γ) 1 2(1 γ) 1 8(1 γ), (79) where the first line follows from (75), the second line from the condition on step sizes and the third line from the definition of τ . We now consider the other case where, k=τ +1 (1 ηk(1 γp)) < 1 Using [Li et al., 2023, Eqn.(134)], for any t < t and all agents m, we have the relation V m t (2) = 1 1 γp k=t +1 (1 ηk(1 γp)) 1 1 γp V m t (2) k=t +1 η(t) k γ( ˆP m k (2|2) p)V m k 1(2). The above equation is directly obtained by unrolling the recursion in (24) along with noting that Qt(2, 1) = Vt(2) for all t. Consequently, we have, V T (2) = 1 1 γp k=t +1 (1 ηk(1 γp)) 1 1 γp V t (2) k=t +1 η(T ) k γ( ˆP m k (2|2) p)V m k 1(2). 
(81) Let {Ft}T t=0 be a filtration such that Ft is the σ-algebra corresponding to {{ ˆP m s (2|2)}M m=1}t s=1. It is straightforward to note that n 1 M PM m=1 η(T ) k γ( ˆP m k (2|2) p)V m k 1(2) o k is a martingale sequence adapted to the filtration Fk. Thus, using the result from [Li et al., 2023, Eqn.(139)], we can conclude that Var(V T (2)) E m=1 η(T ) k γ( ˆP m k (2|2) p)V m k 1(2) Fk 1 m=1 η(T ) k γ( ˆP m k (2|2) p)V m k 1(2) Fk 1 m=1 Var η(T ) k γ( ˆP m k (2|2) p)V m k 1(2) Fk 1 = (η(T ) k )2 BM γ2p(1 p) m=1 (V m k 1(2))2 ! (1 γ)(4γ 1) 9BM (η(T ) k )2 (V k 1(2))2, (83) where the first line follows from that fact that variance of sum of i.i.d. random variables is the sum of their variances, the second line from variance of Binomial random variable and the third line from Jensen s inequality. Thus, (82) and (83) together yield, Var(V T (2)) (1 γ)(4γ 1) k=τ +2 (η(T ) k )2 E[(V k 1(2))2] (1 γ)(4γ 1) 9BM 1 4(1 γ)2 k=max{τ,τ }+2 (η(T ) k )2, (84) where the second line follows from the definition of τ . We focus on bounding the third term in the above relation. We have, k=max{τ ,τ}+2 k=max{τ ,τ}+2 i=k+1 (1 ηi(1 γp) k=max{τ ,τ}+2 i=k+1 (1 ητ(1 γp)) k=max{τ ,τ}+2 (1 ητ(1 γp))2(t k) η2 T 1 (1 ητ(1 γp))2(T max{τ ,τ} 1) ητ(1 γp)(2 ητ(1 γp)) ηT 1 4(1 γ) c , (85) where the second line follows from monotonicity of ηt and the numerical constant c in the fifth step is given by the following claim whose proof is deferred to the end of the section: 1 (1 ητ(1 γp))2(T max{τ ,τ} 1) ( 1 e 8/9 for constant step sizes, 1 exp 8 3 max{1,cη} for linearly rescaled step sizes . Thus, (84) and (85) together imply Var(V T (2)) (4γ 1) 36BM(1 γ) k=τ +2 (η(T ) k )2 144(1 γ) ηT BM(1 γ) c (4γ 1) 144(1 γ) 1 100, (87) where the last inequality follows from the bound on ηT BM . Thus, for all N 1, we have, E[(V (2) V T (2))2] = E[(V (2) E[V T (2)])2] + Var(V T (2)) c for some numerical constant c . Similar to the small learning rate case, the error rate is bounded away from a constant value irrespective of the number of agents and the number of communication rounds. Thus, even with CCround = Ω(T), we will not observe any collaborative gain in this scenario. Proof of (86). To establish the claim, we consider two cases: τ τ: Under this case, we have, (1 ητ(1 γp))2(T max{τ ,τ} 1) = (1 ητ(1 γp))2(T τ 1) (1 ητ(1 γp))T τ k=τ +1 (1 ηk(1 γp)) 1 where the last inequality follows from (80). τ τ : For this case, we have (1 ητ(1 γp))2(T max{τ ,τ} 1) = (1 ητ(1 γp))2(T τ 1) (1 ητ(1 γp))T τ exp 2Tητ(1 γp) For the constant stepsize schedule, we have, exp 2Tητ(1 γp) 3 1 (1 γ)T 4(1 γ) For linearly rescaled stepsize schedule, we have, exp 2Tητ(1 γp) 3 1 1 + cη(1 γ)T/3 4(1 γ) = exp 8 3 max{1, cη} On combining (88), (89), (90) and (91), we arrive at the claim. B.4 Generalizing to larger state action spaces We now elaborate on how we can extend the result to general state-action spaces along with the obtaining the lower bound on the bit level communication complexity. For the general case, we instead consider the following MDP. For the first four states {0, 1, 2, 3}, the probability transition kernel and reward function are given as follows. A0 = {1} P(0|0, 1) = 1 r(0, 1) = 0, (92a) A1 = {1, 2, . . . , |A|} P(1|1, a) = p P(0|1, a) = 1 p r(1, a) = 1, a A (92b) A2 = {1} P(2|2, 1) = p P(0|2, 1) = 1 p r(2, 1) = 1, (92c) A3 = {1} P(3|3, 1) = 1 r(3, 1) = 1, (92d) where the parameter p = 4γ 1 3γ . The overall MDP is obtained by creating |S|/4 copies of the above MDP for all sets of the form {4r, 4r+1, 4r+2, 4r+3} for r |S|/4 1. 
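To make the construction concrete, the following sketch assembles the transition kernel and reward table of (92a)-(92d) and replicates them over the $|S|/4$ blocks of states. Padding the unused actions of states 2 and 3 as zero-reward self-loops is an implementation convenience assumed here only to keep the arrays rectangular; it is not part of the construction above.

```python
import numpy as np

# Sketch of the hard-instance MDP in (92a)-(92d), replicated over |S|/4
# blocks of states {4r, 4r+1, 4r+2, 4r+3}.
def build_hard_mdp(num_states, num_actions, gamma):
    assert num_states % 4 == 0
    p = (4 * gamma - 1) / (3 * gamma)
    P = np.zeros((num_states, num_actions, num_states))  # P[s, a, s']
    r = np.zeros((num_states, num_actions))
    for blk in range(num_states // 4):
        s0, s1, s2, s3 = 4 * blk, 4 * blk + 1, 4 * blk + 2, 4 * blk + 3
        P[s0, :, s0] = 1.0                      # (92a): absorbing, zero reward
        for a in range(num_actions):            # (92b): all actions behave identically
            P[s1, a, s1], P[s1, a, s0] = p, 1 - p
            r[s1, a] = 1.0
        P[s2, 0, s2], P[s2, 0, s0] = p, 1 - p   # (92c): single action
        r[s2, 0] = 1.0
        P[s2, 1:, s2] = 1.0                     # padding of unused actions (assumption)
        P[s3, 0, s3] = 1.0                      # (92d): absorbing, reward 1
        r[s3, 0] = 1.0
        P[s3, 1:, s3] = 1.0                     # padding of unused actions (assumption)
    return P, r

P, r = build_hard_mdp(num_states=8, num_actions=2, gamma=0.9)
```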
It is straightforward to note that the lower bound on the number of communication rounds immediately transfers to the general case as well. Moreover, note that the bound on CCround implies the bound CCbit = Ω 1 (1 γ) log2 N as every communication entails sending Ω(1) bits. To obtain the general lower bound on bit level communication complexity, note that we can carry out the analysis in the previous section for all |A|/2 pairs of actions in state 1 corresponding to the set of states {0, 1, 2, 3}. Moreover, the algorithm A , needs to ensure that the error is low across all the |A|/2 pairs. Since the errors are independent across all these pairs, each of them require Ω 1 (1 γ) log2 N bits of information to be transmitted during the learning horizon leading to a lower bound of Ω |A| (1 γ) log2 N . Note that since we require a low ℓ error, A needs to ensure that the error is low across all the pairs, resulting in a communication cost linearly growing with |A|. Upon repeating the argument across all |S|/4 copies of the MDP, we arrive at the lower bound of CCbit = Ω |S||A| (1 γ) log2 N . B.5 Proofs of auxiliary lemmas B.5.1 Proof of Lemma 2 Note that a similar relationship is also derived in Li et al. [2023], but needing to take care of the averaging over multiple agents, we present the entire arguments for completeness. We prove the claim using an induction over t. It is straightforward to note that the claim is true for t = 0 and all agents m {1, 2, . . . , M}. For the inductive step, we assume that the claim holds for t 1 for all clients. Using the induction hypothesis, we have the following relation between V m t 1(1) and b V m t 1: V m t 1(1) = max a {1,2} Qm t 1(1, a) max a {1,2} b Qm t 1(a) 1 1 γ i=1 (1 ηi(1 γ)) = b V m t 1 1 1 γ i=1 (1 ηi(1 γ)). For t / {tr}R r=1 and a {1, 2}, we have, Qm t (1, a) b Qm t (a) = Qm t 1/2(1, a) b Qm t 1/2(a) = (1 ηt)Qm t 1(1, a) + ηt(1 + γ b P m t (1|1, a)V m t 1(1)) h (1 ηt) b Qm t 1(a) + ηt(1 + γ b P m t (1|1, a)b V m t 1) i = (1 ηt)(Qm t 1(1|1, a) b Qm t 1(a)) + ηtγ b P m t (1|1, a)(V m t 1(1) b V m t 1) i=1 (1 ηi(1 γ)) b P m t (1|1, a) ηtγ i=1 (1 ηi(1 γ)) i=1 (1 ηi(1 γ)) ηtγ i=1 (1 ηi(1 γ)) i=1 (1 ηi(1 γ)). (94) For t {tr}R r=1 and a {1, 2}, we have, Qm t (1, a) b Qm t (a) = 1 m=1 Qm t 1/2(1, a) 1 m=1 b Qm t 1/2(a) h (1 ηt)Qm t 1(1, a) + ηt(1 + γ b P m t (1|1, a)V m t 1(1)) i h (1 ηt) b Qm t 1(a) + ηt(1 + γ b P m t (1|1, a)b V m t 1) i h (1 ηt)(Qm t 1(1, a) b Qm t 1(a)) + ηtγ b P m t (1|1, a)(V m t 1(1) b V m t 1) i i=1 (1 ηi(1 γ)), (95) where the last step follows using the same set of arguments as used in (94). The inductive step follows from (94) and (95). B.5.2 Proof of Lemma 3 In order to bound the term E[ m t,max] E[ξm t ,t,max], we make use of the relation in (44a), which we recall E[ m t,max] φt ,t E[ t ,max] + k=t +1 eη(t) k γp E[ m k 1,max] + E[ξm t ,t,max] φt ,t E[| t (1) t (2)|]. To aid the analysis, we consider the following recursive relation for any fixed agent m: yt = (1 ηt)yt 1 + ηt(γpyt 1 + E[ξm t ,t,max]). (96) Upon unrolling the recursion, we obtain, k=t +1 (1 ηk) i=k+1 (1 ηi) i=k+1 (1 ηi) E[ξm t ,t,max] = φt ,tyt + k=t +1 eη(t) k γpyk 1 + k=t +1 eη(t) k E[ξm t ,t,max]. (97) Initializing yt = E[ t ,max] in (97) and plugging this into (44a), we have E[ m t,max] yt φt ,t E[| t (1) t (2)|], where we used Pt k=t +1 eη(t) k 1 (cf. (18)). We now further simply the expression of yt. 
By rewriting (96) as yt = (1 ηt(1 γp))yt 1 + ηt E[ξm t ,t,max], it is straight forward to note that yt is given as k=t +1 (1 ηk(1 γp)) yt + E[ξm t ,t,max] k=t +1 η(t) k Consequently, we have, E[ m t,max] E[ξm t ,t,max] k=t +1 (1 ηk(1 γp)) + E[ξm t ,t,max] k=t +1 η(t) k 1 φt ,t E[| t (1) t (2)|]. We can consider a slightly different recursive sequence defined as wt = (1 ηt)wt 1 + ηt(γpwt 1). (100) Using a similar sequence of arguments as outlined in (96)-(98), we can conclude that if wt = E[ t ,max], then E[ m t,max] wt + E[ξm t ,t,max] φt ,t E[| t (1) t (2)|] and consequently, E[ m t,max] k=t +1 (1 ηk(1 γp)) E[ t ,max] + E[ξm t ,t,max] φt ,t E[| t (1) t (2)|]. On combining (99) and (101), we arrive at the claim. B.5.3 Proof of Lemma 4 We begin with bounding the first term E[ξm t ,t,max]; the second bound follows in an almost identical derivation. Step 1: applying Freedman s inequality. Using the relation max{a, b} = a+b+|a b| 2 , we can rewrite E[ξm t ,t,max] as E[ξm t ,t,max] = E ξm t ,t(1) + ξm t ,t(2) 2 + ξm t ,t(1) ξm t ,t(2) 2 2E ξm t ,t(1) ξm t ,t(2) 2 k=t +1 eη(t) k γ( b P m k (1|1, 1) b P m k (1|1, 2))b V m k 1 | {z } =:ζm t ,t where we used the definition in (38) and the fact that E[ξm t ,t(1)] = E[ξm t ,t(2)] = 0. Decompose ζm t ,t as b=1 eη(t) k γ B (P m k,b(1|1, 1) P m k,b(1|1, 2))b V m k 1 =: l=1 zl, (103) where for all 1 l L B (P m k(l),b(l)(1|1, 1) P m k(l),b(l)(1|1, 2))b V m k(l) 1 k(l) := l/B + t + 1; b(l) = ((l 1) mod B) + 1; L = (t t )B. Let {Fl}L l=1 be a filtration such that Fl is the σ-algebra corresponding to {P m k(j),b(j)(1|1, 1), P m k(j),b(j)(1|1, 2)}l j=1. It is straightforward to note that {zl}L l=1 is a martingale sequence adapted to the filtration {F}L l=1. We will use the Freedman s inequality [Freedman, 1975, Li et al., 2023] to obtain a high probability bound on |ζm t ,t|. To that effect, note that sup l |zl| sup l eη(t) k(l) γ B (P m k(l),b(l)(1|1, 1) P m k(l),b(l)(1|1, 2)) b V m k(l) 1 eη(t) k(l) γ B(1 γ) ηt B(1 γ), (104) where the second step follows from the bounds |(P m k(l),b(l)(1|1, 1) P m k(l),b(l)(1|1, 2))| 1 and b V m k(l) 1 1 1 γ and the third step uses cη 1 1 γ and the fact that eη(T ) k is increasing in k in this regime. (cf. (19)). Similarly, Var(zl|Fl 1) eη(t) k(l) 2 γ2 B2 b V m k(l) 1 2 Var(P m k(l),b(l)(1|1, 1) P m k(l),b(l)(1|1, 2)) eη(t) k(l) 2 γ2 B2(1 γ)2 2p(1 p) 2 eη(t) k(l) 2 3B2(1 γ). (105) Using the above bounds (104) and (105) along with Freedman s inequality yield that v u u t 8 log(2/δ) eη(t) k(l) 2 + 4ηt log(2/δ) Setting δ0 = (1 γ)2 2 E[|ζm t ,t|2], with probability at least 1 δ0, it holds v u u t8 log(2/δ0) eη(t) k 2 + 4ηt log(2/δ0) 3B(1 γ) =: D. (107) Consequently, plugging this back to (102), we obtain E[ξm t ,t,max] = 1 2E[|ζm t ,t|] 2E[|ζm t ,t|1{|ζm t ,t| D}] 1 2DE[|ζm t ,t|21{|ζm t ,t| D}] 1 2D E[|ζm t ,t|2] E[|ζm t ,t|21{|ζm t ,t| > D}] E[|ζm t ,t|2] Pr(|ζm t ,t| > D) (1 γ)2 1 4D E[|ζm t ,t|2]. (108) Here, the penultimate step used the fact that |ζm t ,t| eη(t) k (1 γ) 1 (1 γ), and the last step used the definition of δ0. Thus, it is sufficient to obtain a lower bound on E[|ζm t ,t|2] in order obtain a lower bound for E[ξm t ,t,max]. Step 2: lower bounding E[|ζm t ,t|2]. To proceed, we introduce the following lemma pertaining to lower bounding b V m t that will be useful later. Lemma 6. For all time instants t [T] and all agent m [M]: E b V m t 2 1 2(1 γ)2 . 
E[|ζm t ,t|2] = E l=1 Var (zl|Fl 1) l=1 E z2 l |Fl 1 # eη(t) k(l) 2 γ2 B2 2p(1 p) E ˆV m k(l) 1 2 eη(t) k(l) 2 γ2 B2 2p(1 p) 1 2(1 γ)2 k=max{t ,τ}+1 eη(t) k 2 , (109) where the third line follows from Lemma 6 and the fourth line uses γ 5/6. Step 3: finishing up. We finish up the proof by bounding Pt k=max{t ,τ}+1 eη(t) k 2 for t max{t , τ} 1/ητ. We have k=max{t ,τ}+1 k=max{t ,τ}+1 i=k+1 (1 ηi) k=max{t ,τ}+1 i=k+1 (1 ητ) k=max{t ,τ}+1 (1 ητ)2(t k) η2 t 1 (1 ητ)2(t max{t ,τ}) ηt 1 exp( 2) where (i) follows from the monotonicity of ηk. Plugging (110) into the expressions of D (cf. (107)) we have v u u t8 log(2/δ0) eη(t) k 2 + 4ηt log(2/δ0) 2E[|ζm t ,t|2] 8 log(2/δ0) eη(t) k 2 ! 1/2 + 60 E[|ζm t ,t|2] log(2/δ0) 3E[|ζm t ,t|2] log(2/δ0) 60E[|ζm t ,t|2] log(2/δ0) where the second line follows from (109) and (110), and the third line follows from (110). On combining the above bound with (108), we obtain, E[ξm t ,t,max] 1 240 log(2/δ0) ν ν + 1, (111) where ν := r 20ηT 3B(1 γ). Note that we have, δ0 = (1 γ)2 2 E[|ζm t ,t|2] (1 γ) eη(t) k 2 ηT (1 γ) Combining the above bound with (111) yields us the required bound. Step 4: repeating the argument for the second claim. We note that second claim in the theorem, i.e., the lower bound on E h max n 1 M PM m=1 ξm t ,t(1), 1 M PM m=1 ξm t ,t(2) oi follows through an identical series of arguments where the bounds in Eqns. (104) and (105) contain an additional factor of M in the denominator (effectively replacing B with BM), which is carried through in all the following steps. B.5.4 Proof of Lemma 5 Using Eqns. (41) and (38), we can write t(1) t(2) = k=t +1 (1 ηk) ( t (1) t (2)) i=k+1 (1 ηi) γ( b P m k (1|1, 1) b P m k (1|1, 2))b V m k 1. Upon unrolling the recursion, we obtain, t(1) t(2) = i=k+1 (1 ηi) ! γ M ( b P m k (1|1, 1) b P m k (1|1, 2))b V m k 1. If we define a filtration Fk as the σ-algebra corresponding to { b P 1 l (1|1, 1), b P 1 l (1|1, 2), . . . , b P M l (1|1, 1), b P M l (1|1, 2)}k l=1, then it is straightforward to note that { t(1) t(2)}t is a martingale sequence adapted to the filtration {Ft}t. Using Jensen s inequality, we know that if {Zt}t is a martingale adapted to a filtration {Gt}t, then for a convex function f such that f(Zt) is integrable for all t, {f(Zt)}t is a sub-martingale adapted to {Gt}t. Since f(x) = |x| is a convex function, {| t(1) t(2)|}t is a submartingale adapted to the filtration {Ft}t. As a result, sup 1 t T E[| t(1) t(2)|] E[| T (1) T (2)|] E[( T (1) T (2))2] 1/2 . (112) We use the following observation about a martingale sequence {Xi}t i=1 adapted to a filtration {Gi}t i=1 to evaluate the above expression. We have, = E X2 t + E i=1 E X2 i , (113) where the third step uses the facts that Pt 1 i=1 Xi is Gt 1 measure and E[Xt|Gt 1] = 0 and fourth step is obtained by recursively applying second and third steps. Using the relation in Eqn. (113) in Eqn. (112), we obtain, sup 1 t T E[| t(1) t(2)|] E[( T (1) T (2))2] 1/2 m=1 eη(T ) k γ M ( b P m k (1|1, 1) b P m k (1|1, 2)) ˆV m k 1 eη(T ) k 2 2γ2p(1 p) m=1 E ˆV m k 1 2 !1/2 eη(T ) k 2 2γ2p(1 p) Let us focus on the term involving the step sizes. We separately consider the scenario for constant step sizes and linearly rescaled step sizes. For constant step sizes, we have, eη(T ) k 2 = i=k+1 (1 ηi) k=1 η2(1 η)2(T k) η2 1 (1 η)2 η. 
(115) Similarly, for linearly rescaled step sizes, we have, eη(T ) k 2 = eη(T ) k 2 + i=k+1 (1 ηi) eη(T ) τ 2 + k=τ+1 η2 k(1 ηT )2(T k) η2 τ(1 ηT )2(T τ) τ + η2 τ 1 ηT (2 ηT ) 3ηT ηT T exp 4TηT 4ηT , (116) where the second step uses cη log N 1 1 γ and the fact that eη(T ) k is increasing in k in this regime. (See Eqn. (19)) and fifth step uses xe 4x/3 3/4e. On plugging results from Eqns. (115) and (116) into Eqn. (114) along with the value of p, we obtain, sup 1 t T E[| t(1) t(2)|] 8ηT 3BM(1 γ), (117) as required. B.5.5 Proof of Lemma 6 For the proof, we fix an agent m. In order to obtain the required lower bound on b V m t , we define an auxiliary sequence Q m t that evolves as described in Algorithm 5. Essentially, Q m t evolves in a manner almost identical to b Qm t except for the fact that there is only one action and hence there is no maximization step in the update rule. Algorithm 5: Evolution of Q 1: r 1, Q m 0 = Q (1, 1) for all m {1, 2, . . . , M} 2: for t = 1, 2, . . . , T do 3: for m = 1, 2, . . . , M do 4: Q m t 1/2 (1 ηt)Q m t 1(a) + ηt(1 + b P m t (1|1, 1)Q m t 1) 5: Compute Q m t according to Eqn. (8) 6: end for 7: end for It is straightforward to note that b Qm t (1) Q m t , which can be shown using induction. From the initialization, it follows that b Qm 0 (1) Q m 0 . Assuming the relation holds for t 1, we have, b Qm t 1/2(1) = (1 ηt) b Qm t 1(1) + ηt(1 + γ b P m t (1|1, 1)b V m t 1) (1 ηt) b Qm t 1(1) + ηt(1 + γ b P m t (1|1, 1) b Qm t 1(1)) (1 ηt)Q m t 1 + ηt(1 + γ b P m t (1|1, 1)Q m t 1) = Q m t 1/2. Since b Qm t and Q m t follow the same averaging schedule, it immediately follows from the above relation that b Qm t (1) Q m t . Since b V m t b Qm t (1) Q m t , we will use the sequence Q m t to establish the required lower bound on b V m t . We claim that for all time instants t and all agents m, E[Q m t ] = 1 1 γp. (118) Assuming (118) holds, we have E[(b V m t )2] E[b V m t ] 2 E[Q m t ] 2 1 1 γp 2 1 2(1 γ)2 , as required. In the above expression, the first inequality follows from Jensen s inequality, the second from the relation b V m t Q m t 0 and the third from (118). We now move now to prove the claim (118) using induction. For the base case, E[Q m 0 ] = 1 1 γp holds by choice of initialization. Assume that E[Q m t 1] = 1 1 γp holds for some t 1 for all m. If t is not an averaging instant, then for any client m, Q m t = (1 ηt)Q m t 1 + ηt(1 + γ b P m t (1|1, 1)Q m t 1) = E[Q m t ] = (1 ηt)E[Q m t 1] + ηt(1 + γE[ b P m t (1|1, 1)Q m t 1]) = (1 ηt)E[Q m t 1] + ηt(1 + γp E[Q m t 1]) 1 + γp 1 γp = 1 1 γp. (119) The third line follows from the independence of b P m t (1|1, 1) and Q m t 1 and the fourth line uses the inductive hypothesis. If t is an averaging instant, then for all clients m, Q m t = (1 ηt) Q j t 1 + ηt 1 M j=1 (1 + γ b P j t (1|1, 1)Q j t 1) = E[Q m t ] = (1 ηt) j=1 E[Q j t 1] + ηt 1 M j=1 (1 + γE[ b P j t (1|1, 1)Q j t 1]) 1 1 γp + ηt 1 M 1 + γp 1 γp = 1 1 γp, (120) where we again make use of independence and the inductive hypothesis. Thus, (119) and (120) taken together complete the inductive step. C Analysis of Fed-DVR-Q In this section, we prove Theorem 2 that outlines the performance guarantees of Fed-DVR-Q. There are two main parts of the proof. The first part deals with establishing that for the given choice of parameters described in Section 4.1.3, the output of the algorithm is an ε-optimal estimate of Q with probability 1 δ. 
The second part deals with deriving the bounds on the sample and communication complexity based on the choice of prescribed parameters. We begin with the second part, which is easier of the two. C.1 Establishing the sample and communication complexity bounds Establishing the communication complexity. We begin with bounding CCround. From the description of Fed-DVR-Q, it is straightforward to note that each epoch, i.e., each call to the REFINEES- TIMATE routine, involves I + 1 rounds of communication, one for estimating T Q and the remaining ones during the iterative updates of the Q-function. Since there are a total of K epochs, CCround(Fed-DVR-Q; ε, M, δ) (I + 1)K 16 η(1 γ) log2 where the second bound follows from the prescribed choice of parameters in Sec. 4.1.3. Similarly, since the quantization step is designed to compress each coordinate into J bits, each message transmitted by an agent has a size of no more than J |S||A| bits. Consequently, CCbit(Fed-DVR-Q; ε, M, δ) J |S||A| CCround(Fed-DVR-Q; ε, M, δ) η(1 γ) log2 4 M log 8KI|S||A| where once again in the second step we plugged in the choice of J from Sec. 4.1.3. Establishing the sample complexity. In order to establish the bound on the sample complexity, note that during epoch k, each agent takes a total of Lk/M + I B samples, where the first term corresponds to approximating e TL(Q(k 1)) and the second term corresponds to the samples taken during the iterative update scheme. Thus, the total sample complexity is obtained by summing up over all the K epochs. We have, SC(Fed-DVR-Q; ε, M, δ) + I B I B K + 1 k=1 Lk + K. To continue, notice that k=1 Lk 39200 M(1 γ)2 log 8KI|S||A| k=K0+1 4k K0 ! 39200 3M(1 γ)2 log 8KI|S||A| 4K0 + 4K K0 156800 3M(1 γ)2 log 8KI|S||A| 1 1 γ + 1 (1 γ)ε2 where the first line follows from the choice of Lk in Sec. 4.1.3 and the last line follows from K0 = 1 2 log2( 1 1 γ ) . Plugging this relation and the choices of I and B (cf. Sec. 4.1.3) into the previous bound yields SC(Fed-DVR-Q; ε, M, δ) 4608 ηM(1 γ)3 log2 log 8KI|S||A| + 156800 3M(1 γ)2 log 8KI|S||A| 1 1 γ + 1 (1 γ)ε2 313600 ηM(1 γ)3ε2 log2 log 8KI|S||A| Plugging in the choice of K finishes the proof. C.2 Establishing the error guarantees In this section, we show that the Q-function estimate returned by the Fed-DVR-Q algorithm is ε-optimal with probability at least 1 δ. We claim that the estimates of the Q-function generated by the algorithm across different epochs satisfy the following relation for all k K with probability 1 δ: 1 γ . (121) The required bound on Q(K) Q immediately follows by plugging in the value of K. Thus, for the remainder of the section, we focus on establishing the above claim. Step 1: fixed-point contraction of REFINEESTIMATE. Firstly, note that the variance-reduced update scheme carried out during the REFINEESTIMATE routine resembles that of the classic Qlearning scheme, i.e., fixed-point iteration, with a different operator defined as follows: H(Q) := T (Q) T (Q) + e TL(Q), for some fixed Q. (122) Thus, the update scheme at step i 1 in (11) can then be written as 2 = (1 η)Qi 1 + η b H(m) i (Qi 1), (123) where b H(m) i (Q) := b T (m) i (Q) b T (m) i (Q) + e TL(Q) is a stochastic, unbiased estimate of the operator H, similar to b T (m) i (Q). Let Q H denote the fixed point of H. Then the update scheme in (123) drives the sequence {Qm i }i 0 to Q H; further, as long as Q Q H is small, the required error Qi Q can also be controlled. The following lemmas formalize these ideas and pave the path to establish the claim in (121). 
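Before stating these lemmas, the following single-agent, uncompressed sketch of the variance-reduced update (122)-(123) may help fix ideas. The function names, sampling routine and parameter handling are illustrative; the actual routine additionally compresses and averages the updates across agents and uses the parameter choices of Section 4.1.3.

```python
import numpy as np

# Minimal single-agent sketch of the variance-reduced update (122)-(123):
#   H(Q)  = T(Q) - T(Qbar) + T_L(Qbar),
#   Q_i   = (1 - eta) Q_{i-1} + eta * Hhat_i(Q_{i-1}),
# where Hhat_i uses a fresh minibatch of B generative-model samples and
# T_L(Qbar) is a more accurate estimate built from L samples per (s, a).
def empirical_bellman(P_hat, r, Q, gamma):
    # (T Q)(s, a) = r(s, a) + gamma * sum_{s'} P_hat(s'|s, a) max_a' Q(s', a')
    return r + gamma * P_hat @ Q.max(axis=1)

def sample_model(P, rng, n):
    # Empirical transition kernel from n generative-model samples per (s, a).
    S, A, _ = P.shape
    P_hat = np.zeros_like(P)
    for s in range(S):
        for a in range(A):
            nxt = rng.choice(S, size=n, p=P[s, a])
            P_hat[s, a] = np.bincount(nxt, minlength=S) / n
    return P_hat

def refine_estimate(P, r, Qbar, gamma, eta, I, B, L, rng):
    T_L_Qbar = empirical_bellman(sample_model(P, rng, L), r, Qbar, gamma)
    Q = Qbar.copy()
    for _ in range(I):
        P_hat = sample_model(P, rng, B)          # fresh minibatch per iteration
        H_hat = (empirical_bellman(P_hat, r, Q, gamma)
                 - empirical_bellman(P_hat, r, Qbar, gamma) + T_L_Qbar)
        Q = (1 - eta) * Q + eta * H_hat
    return Q
```

In the full algorithm, each agent forms its own estimate $\widehat{H}^{(m)}_i$, the differences $Q^m_{i-1/2} - Q_{i-1}$ are compressed and averaged at the server, and the epoch output becomes the reference point $\bar{Q}$ of the next epoch (with $\bar{Q} := Q^{(k-1)}$ and $L := L_k$ in epoch $k$).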
The proofs are deferred to Appendix C.3. Lemma 7. Let δ (0, 1). Consider the REFINEESTIMATE routine described in Algorithm 3 and let Q H denote the fixed point of the operator H defined in (122) for some fixed Q. Then the iterates generated by REFINEESTIMATE QI satisfy 6 Q Q + Q Q H + D with probability 1 δ 2K . Lemma 8. Consider the REFINEESTIMATE routine described in Alg. 3 and let Q H denote the fixed point of the operator H defined in Eqn. (122) for a fixed Q. The following relation holds with probability 1 δ 2K : L(1 γ)3 + 2κ 2 3L(1 γ)2 + D whenever L 32κ , where κ = log 12K|S||A| Step 2: establishing the linear contraction. We now leverage the above lemmas to establish the desired contraction in (121). Instantiating the operator (122) at each k-th epoch by setting Q := Q(k 1) and L := Lk, we define Hk(Q) := T (Q) T (Q(k 1)) + e TLk(Q(k 1)), (124) whose fixed point is denoted as Q Hk. Using the results from Lemmas 7 and 8 with D := Dk and H = Hk, we obtain Q(k) Q Q(k) Q Hk + Q H Q Hk Q(k 1) Q + Q Q Hk + Dk 70 + Q Hk Q Q(k 1) Q + 7 Q Q Hk + Dk Lk(1 γ)3 + 2 Lk(1 γ)3 + 13Dk holds with probability 1 δ K . Here, we invoke Lemma 7 in the second step and Lemma 8 in the fourth step corresponding to the REFINEESTIMATE routine during the k-th epoch. In the last step, we used the fact that Lk(1 γ)2 We now use induction along with the recursive relation in (125) to establish the required claim (121). Let us first consider the case 0 k K0. The base case, Q(0) Q 1 1 γ , holds by definition. Let us assume the relation holds for k 1. Then, from (125) and choice of Lk (Sec. 4.1.3), we have Q(k) Q Q(k 1) Q Lk(1 γ)3 + 13Dk 1 6 + 2 k 7 50 19600(1 γ) + 104 420 2 (k 1) 91 39200 + 1 1 γ . (126) Now we move to the second case, for k > K0. From (125) and choice of Lk (Sec. 4.1.3), we have Q(k) Q Q(k 1) Q Lk(1 γ)3 + 13Dk 1 6 + 2 (k K0) 7 + 2 (k K0) 7 50 19600(1 γ) + 104 420 2 (k 1) 1 γ . (127) By a union bound argument, we can conclude that the relation Q(k) Q 2 k 1 γ holds for all k K with probability at least 1 δ. Step 3: confirm the compressor bound. The only thing left to verify is that the inputs to the compressor are always bounded by Dk during the k-th epoch, for all 1 k K. The following lemma provides a bound on the input to the compressor during any run of the REFINEESTIMATE routine. Lemma 9. Consider the REFINEESTIMATE routine described in Algorithm 3 with some for some fixed Q. For all i I and all agents m, the following bound holds with probability 1 δ 2K : 2 Qi 1 η Q Q H 6 (1 + γ) + 2γ + ηD(1 + γ) For the k-th epoch, it follows that η Q(k 1) Q Hk 6 (1 + γ) + 2γ + ηDk(1 + γ) Q(k 1) Q + Q Q Hk + Dk(1 + γ) 14 Q(k 1) Q + 2Dk In the third step, we used the same sequence of arguments as used in (126) and (127) and, in the fourth step, we used the bound on Q(k 1) Q from (121) and the prescribed value of Dk. C.3 Proof of auxiliary lemmas C.3.1 Proof of Lemma 7 Let us begin with analyzing the evolution of the sequence {Qi}I i=1 during a run of the REFINEESTI- MATE routine. The sequence {Qi}I i=1 satisfies the following recursion: Qi = Qi 1 + 1 m=1 C Qm i 1 2 Qi 1; D, J 2 Qi 1 + ζm i 2 + ζm i = (1 η)Qi 1 + η m=1 b H(m) i (Qi 1) + 1 m=1 ζm i | {z } =:ζi In the above expression, ζm i denotes the quantization noise introduced at agent m in the i-th update. Subtracting Q H from both sides of (128), we obtain Qi Q H = (1 η)(Qi 1 Q H) + η b H(m) i (Qi 1) Q H + ζi = (1 η)(Qi 1 Q H) + η b H(m) i (Qi 1) b H(m) i (Q H) b H(m) i (Q H) H(Q H) + ζi. 
(129) Consequently, Qi Q H (1 η) Qi 1 Q H + η b H(m) i (Qi 1) b H(m) i (Q H) b H(m) i (Q H) H(Q H) + ζi , which we shall proceed to bound each term separately. Regarding the second term, it follows that b H(m) i (Q) b H(m) i (Q H) = b T (m) i (Q) b T (m) i (Q H) γ Q Q H , (131) which holds for all Q since b T (m) i is a γ-contractive operator. Regarding the third term, notice that b H(m) i (Q H) H(Q H) = 1 MB Tz(Q H) Tz(Q) T (Q H) + T (Q) . Note that Tz(Q H) Tz(Q) T (Q H) + T (Q) is a zero-mean random vector satisfying Tz(Q H) Tz(Q) T (Q H) + T (Q) 2γ Q Q H . (132) Thus, each of its coordinate is a (2γ Q Q H )2-sub-Gaussian vector. Applying the tail bounds for a maximum of sub-Gaussian random variables [Vershynin, 2018], we obtain that 1 M b H(m) i (Q H) H(Q H) 2γ Q Q H 2 MB log 8KI|S||A| holds with probability at least 1 δ 4KI . Turning to the last term, by the construction of the compression routine described in Section 4.1.2, it is straightforward to note that ζm i is a zero-mean random vector whose coordinates are independent, D2 4 J-sub-Gaussian random variables. Thus, ζi is also a zero-mean random vector whose coordinates are independent, D2 M 4J -sub-Gaussian random variables. Hence, we can similarly conclude that 2 M log 8KI|S||A| holds with probability at least 1 δ 4KI . Combining the above bounds into (130), and introducing the short-hand notation κ := log 8KI|S||A| δ , we obtain with probability at least 1 δ 2KI , Qi Q H (1 η(1 γ)) Qi 1 Q H + 2ηγ Q Q H 2κ MB + D 2 J Unrolling the above recursion over i = 1, . . . , I yields the following relation, which holds with probability at least 1 δ 2K : QI Q H (1 η(1 γ))I Q0 Q H + B Q Q H + D 2 J i=1 (1 η(1 γ))I i (1 η(1 γ))I Q Q H + 1 η(1 γ) B Q Q H + D 2 J (1 η(1 γ))I + 2γ (1 γ) Q Q H 6 + D 6 Q Q + Q Q H + D Here, the fourth step is obtained by plugging in the prescribed values of B, I and J in Sec. 4.1.3. C.3.2 Proof of Lemma 8 Intuitively, the error Q H Q depends on the error term e TL(Q) T (Q). If the latter is small, then H(Q) is close to T (Q) and consequently so are Q H and Q . Thus, we begin with bounding the term e TL(Q) T (Q). We have, e TL(Q) T (Q) m=1 C e T (m) L (Q) Q T (Q) e T (m) L (Q) + ζ(m) L T (Q) e T (m) L (Q) e T (m) L (Q ) T (Q) + T (Q ) + 1 m=1 ζ(m) L + 1 e T (m) L (Q ) T (Q ) , where once again ζ(m) L := e T (m) L (Q) Q C e T (m) L (Q) Q denotes the quantization error at agent m. Similar to the arguments of (133) and (134), we can conclude that each of the following relations hold with probability at least 1 δ 6K : 1 M e T (m) L (Q) e T (m) L (Q ) T (Q) + T (Q ) 2γ Q Q 2 L log 12K|S||A| 2 M log 12K|S||A| For the third term, we can rewrite it as e T (m) L (Q ) T (Q ) = 1 M L/M TZ(m) l (Q ) T (Q ) . We will use Bernstein inequality element wise to bound the above term. Let σ R|S| |A| be such that [σ (s, a)]2 = Var(TZ(Q )(s, a)), i.e., (s, a)-th element of σ denotes the standard deviation of the random variable TZ(Q )(s, a). Since TZ(Q ) T (Q ) 1 1 γ a.s., Bernstein inequality gives us that 1 M e T (m) L (Q )(s, a) T (Q )(s, a) σ (s, a) 2 L log 6K|S||A| + 2 3L(1 γ) log 6K|S||A| holds simultaneously for all (s, a) S A with probability at least 1 δ 6K . On combining (137), (138), (139) and (140), we obtain that e TL(Q)(s, a) T (Q)(s, a) = Q Q L + σ (s, a) 3L(1 γ) + D 2 J holds simultaneously for all (s, a) S A with probability at least 1 δ 2K , where κ = log 12K|S||A| δ . We use this bound in (141) to obtain a bound on Q H Q using the following lemma. Lemma 10 (Wainwright [2019b]). 
Let $\pi^\star$ and $\pi_H$ respectively denote the optimal policies w.r.t. $Q^\star$ and $Q^\star_H$. Then,
$$\big\|Q^\star_H - Q^\star\big\|_\infty \le \max\Big\{ \big\|(I - \gamma P^{\pi^\star})^{-1}\big(\widetilde{\mathcal{T}}_L(\bar{Q}) - \mathcal{T}(\bar{Q})\big)\big\|_\infty,\ \big\|(I - \gamma P^{\pi_H})^{-1}\big(\widetilde{\mathcal{T}}_L(\bar{Q}) - \mathcal{T}(\bar{Q})\big)\big\|_\infty \Big\}.$$
Here, for any deterministic policy $\pi$, $P^{\pi} \in \mathbb{R}^{|S||A| \times |S||A|}$ is given by $(P^{\pi}Q)(s, a) = \sum_{s' \in S} P(s'|s, a)\, Q(s', \pi(s'))$. Furthermore, it was shown in Wainwright [2019b, Proof of Lemma 4] that if the error $|\widetilde{\mathcal{T}}_L(\bar{Q})(s, a) - \mathcal{T}(\bar{Q})(s, a)|$ satisfies
$$\big|\widetilde{\mathcal{T}}_L(\bar{Q})(s, a) - \mathcal{T}(\bar{Q})(s, a)\big| \le z_0 \|\bar{Q} - Q^\star\|_\infty + z_1 \sigma^\star(s, a) + z_2 \qquad (142)$$
for some $z_0, z_1, z_2 \ge 0$ with $z_1 < 1$, then the bound in Lemma 10 can be simplified to
$$\big\|Q^\star_H - Q^\star\big\|_\infty \le \frac{1}{1 - z_1}\left( \frac{z_0}{1 - \gamma}\|\bar{Q} - Q^\star\|_\infty + \frac{z_1}{(1 - \gamma)^{3/2}} + \frac{z_2}{1 - \gamma} \right). \qquad (143)$$
On comparing (141) with (142), we can read off the corresponding values of $z_0$, $z_1$ and $z_2$; in particular, $z_1 = \sqrt{2\kappa/L}$, while $z_2$ collects the $\frac{2\kappa}{3L(1-\gamma)}$ term from the Bernstein bound and the $D\,2^{-J}$ quantization term of (141). Moreover, the condition $L \ge 32\kappa$ implies that $z_1 < 1$ and $\frac{1}{1 - z_1} \le 2$. Thus, on plugging the above values into (143), and using the prescribed value of $J$ in the last step, we arrive at the bound claimed in Lemma 8.

C.3.3 Proof of Lemma 9

From the iterative update rule in (123), for any agent $m$ we have,
$$Q^{m}_{i-\frac{1}{2}} - Q_{i-1} = \eta\big(\widehat{H}^{(m)}_{i}(Q_{i-1}) - Q_{i-1}\big) = \eta\big(\widehat{H}^{(m)}_{i}(Q_{i-1}) - \widehat{H}^{(m)}_{i}(Q^\star_H) + \widehat{H}^{(m)}_{i}(Q^\star_H) - H(Q^\star_H) + Q^\star_H - Q_{i-1}\big).$$
Consequently,
$$\big\|Q^{m}_{i-\frac{1}{2}} - Q_{i-1}\big\|_\infty \le \eta\Big(\big\|\widehat{H}^{(m)}_{i}(Q_{i-1}) - \widehat{H}^{(m)}_{i}(Q^\star_H)\big\|_\infty + \big\|\widehat{H}^{(m)}_{i}(Q^\star_H) - H(Q^\star_H)\big\|_\infty + \big\|Q^\star_H - Q_{i-1}\big\|_\infty\Big)$$
$$\le \eta\Big(\gamma\big\|Q_{i-1} - Q^\star_H\big\|_\infty + 2\gamma\big\|\bar{Q} - Q^\star_H\big\|_\infty + \big\|Q^\star_H - Q_{i-1}\big\|_\infty\Big) = \eta\Big((1+\gamma)\big\|Q_{i-1} - Q^\star_H\big\|_\infty + 2\gamma\big\|\bar{Q} - Q^\star_H\big\|_\infty\Big),$$
where the second inequality follows from (131) and (132). The bound stated in Lemma 9 then holds with probability $1 - \frac{\delta}{2KI}$ by controlling $\|Q_{i-1} - Q^\star_H\|_\infty$ via (135) evaluated at a general value of $i$, together with the prescribed value of $J$. By a union bound argument, the above relation holds for all $i$ with probability at least $1 - \frac{\delta}{2K}$.

D Numerical Experiments

In this section, we corroborate our theoretical results through simulations. For the simulations, we consider an MDP with 3 states and two actions, i.e., S = {0, 1, 2} and A = {0, 1}. The discount parameter is set to γ = 0.9. The reward and transition kernel of the MDP are based on the hard instance constructed in Appendix B. Specifically, the reward and transition kernel of state 0 are given by the expression in Eqn. (14a). Similarly, the reward and transition kernels corresponding to states 1 and 2 are identical and given by Eqns. (14b) and (14c) with p = 0.8.

Figure 1: Comparison between sample and communication complexities of Fed-DVR-Q and the algorithm Fed-SynQ from Woo et al. [2023]. (a) Error vs. samples taken. (b) Communication cost vs. samples taken.

We perform three empirical studies. In the first study, we compare the proposed algorithm Fed-DVR-Q to the Fed-SynQ algorithm proposed in Woo et al. [2023]. We consider a Federated Q-learning setting with 5 agents. The parameters for both algorithms were set to the values suggested in the respective papers. Both algorithms were run with $10^7$ samples at each agent. For the communication cost of Fed-SynQ, we assume that each real number is expressed using 32 bits. In Fig. 1a, we plot the error rate of each algorithm as a function of the number of samples used, and in Fig. 1b we plot the corresponding communication complexities. As evident from Fig. 1a, Fed-DVR-Q achieves a smaller error than Fed-SynQ under the same sample budget. Similarly, as suggested by Fig. 1b, Fed-DVR-Q also requires much less communication (measured in terms of the number of bits transmitted) than Fed-SynQ, demonstrating the effectiveness of the proposed approach and corroborating our theoretical results.
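For reference, the following sketch constructs the simulation MDP described above, assuming that both actions in states 1 and 2 follow the same hard-instance dynamics (stay with probability p and collect reward 1, otherwise fall into the absorbing zero-reward state 0); the exact kernels used in the experiments are those of Eqns. (14a)-(14c). The optimal Q-function is computed by value iteration for reference.

```python
import numpy as np

# Sketch of the 3-state, 2-action simulation MDP with gamma = 0.9 and p = 0.8.
gamma, p = 0.9, 0.8
S, A = 3, 2
P = np.zeros((S, A, S))                 # P[s, a, s']
r = np.zeros((S, A))
P[0, :, 0] = 1.0                        # state 0: absorbing, zero reward
for s in (1, 2):
    P[s, :, s], P[s, :, 0] = p, 1 - p   # states 1, 2: stay w.p. p, reward 1
    r[s, :] = 1.0

Q = np.zeros((S, A))
for _ in range(2000):                    # value iteration on the Q-function
    Q = r + gamma * P @ Q.max(axis=1)
print(Q)  # Q*(s, a) = 1 / (1 - gamma * p) = 3.571... for s in {1, 2}, 0 for s = 0
```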
In the second study, we examine the effect of the number of agents on the sample and communication complexity of Fed-DVR-Q. We vary the number of agents from 5 to 25 in multiples of 5 and record the sample and communication complexity required to achieve an error rate of ε = 0.03. The sample and communication complexities as a function of the number of agents are plotted in Figs. 2a and 2b, respectively. The sample complexity decreases as 1/M while the communication complexity is independent of the number of agents. This corroborates the linear speedup phenomenon suggested by our theoretical results and the independence between communication complexity and the number of agents.

Figure 2: Dependence of sample and communication complexities of Fed-DVR-Q on the number of agents. (a) Sample complexity vs. number of agents. (b) Communication complexity vs. number of agents.

In the last study, we examine the communication complexity of Fed-DVR-Q as a function of the discount parameter γ. We consider the same setup as in the first study and vary the value of γ from 0.7 to 0.9 in steps of 0.05. We run the algorithm to achieve an accuracy of ε = 0.1 with the parameter choices prescribed in Sec. 4.1.3. We plot the communication cost of Fed-DVR-Q against the effective horizon, i.e., $\frac{1}{1-\gamma}$, in Fig. 3. As evident from the figure, the communication cost scales linearly with the effective horizon, which matches the theoretical claim in Theorem 2.

Figure 3: Communication complexity of Fed-DVR-Q as a function of the effective horizon $\frac{1}{1-\gamma}$.

NeurIPS Paper Checklist

1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: In the abstract and introduction, we describe that we study the sample-communication complexity trade-off in Federated Q-learning and derive both converse and achievability results. In Sec. 3 we derive the lower bound on communication complexity and in Sec. 4 we outline the algorithm that matches the lower bound derived earlier. Guidelines: The answer NA means that the abstract and introduction do not include the claims made in the paper. The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We consider an infinite-horizon MDP in the tabular setting and derive the results for the class of intermittent communication algorithms. We acknowledge that these assumptions might be restrictive for a certain class of applications, and extension to more general settings is discussed as a future direction in Sec. 5. Guidelines: The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. The authors are encouraged to create a separate "Limitations" section in their paper.
The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [Yes] Justification: Both Theorem 1 and 2 clearly state all assumptions used in the statement of main result. The proofs for both the theorems can be found in the appendix. Guidelines: The answer NA means that the paper does not include theoretical results. All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. All assumptions should be clearly stated or referenced in the statement of any theorems. The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. Theorems and Lemmas that the proof relies upon should be properly referenced. 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We have a section with numerical experiments in Appendix D. The section contains all relevant details of our implementation to reproduce the results. Guidelines: The answer NA means that the paper does not include experiments. If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. 
If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. While Neur IPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [NA] Justification: The paper does not have associated code or data. Guidelines: The answer NA means that paper does not include experiments requiring code. Please see the Neur IPS code and data submission guidelines (https://nips.cc/ public/guides/Code Submission Policy) for more details. While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). The instructions should contain the exact command and environment needed to run to reproduce the results. See the Neur IPS code and data submission guidelines (https: //nips.cc/public/guides/Code Submission Policy) for more details. The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). 
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: The relevant details can be found in Appendix D.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [NA]
Justification: The error bars associated with the plots are small and hence we omit them.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: The empirical studies require no specific compute resources and can be easily completed on a regular laptop.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
9. Code Of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: We have read the NeurIPS Code of Ethics and the paper conforms to it.
Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: The paper is concerned with foundational research and is theoretical in nature with no direct societal impact.
Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: The paper is theoretical in nature and does not involve the release of data or code, and hence poses no such risks.
Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks.
- The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [NA]
Justification: The paper does not use any existing assets.
Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: The paper does not release any new assets.
Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: The paper does not involve any crowdsourcing.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: The paper does not involve research with human subjects.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.