# Adaptive Federated Learning with Auto-Tuned Clients

Published as a conference paper at ICLR 2024

ADAPTIVE FEDERATED LEARNING WITH AUTO-TUNED CLIENTS

Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe & Anastasios Kyrillidis
Department of Computer Science, Department of Electrical and Computer Engineering
Rice University, Houston, TX 77005, USA
{jlylekim, mttoghani, cauribe, anastasios}@rice.edu

ABSTRACT

Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While being a flexible framework, where the distribution of local data, participation rate, and computing power of each client can greatly vary, such flexibility gives rise to many new challenges, especially in the hyperparameter tuning on the client side. We propose ∆-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. We provide theoretical and empirical results where the benefit of the client adaptivity is shown in various FL scenarios.

1 INTRODUCTION

Federated Learning (FL) is a machine learning framework that enables multiple clients to collaboratively learn a global model in a distributed manner. Each client trains the model on their local data, and then sends only the updated model parameters to a central server for aggregation. Mathematically, FL aims to solve the following optimization problem:

$$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x), \qquad (1)$$

where $f_i(x) := \mathbb{E}_{z \sim \mathcal{D}_i}[F_i(x, z)]$ is the loss function of the $i$-th client, and $m$ is the number of clients.
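To make the objective in (1) concrete, the following is a minimal sketch (our own illustration, not the paper's code) of evaluating $f$ as the average of client losses computed on data that never leaves the clients; the least-squares loss is just a stand-in for an arbitrary $F_i$.

```python
import numpy as np

def client_loss(x, features, labels):
    """Local empirical loss f_i(x): a least-squares stand-in for E_z[F_i(x, z)]."""
    return 0.5 * np.mean((features @ x - labels) ** 2)

def global_objective(x, clients):
    """f(x) = (1/m) * sum_i f_i(x), evaluated without any client sharing its raw data."""
    return np.mean([client_loss(x, feats, labs) for feats, labs in clients])

# toy federation: m = 4 clients, each holding its own (never-shared) local dataset
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 10)), rng.normal(size=50)) for _ in range(4)]
print(global_objective(np.zeros(10), clients))
```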
A key property of FL is its flexibility in how various clients participate in the overall training procedure. The number of clients, their participation rates, and the computing power available to each client can vary and change at any time during training. Additionally, the local data of each client is not shared with others, resulting in better data privacy (McMahan et al., 2017; Agarwal et al., 2018).

While advantageous, such flexibility also introduces a plethora of new challenges, notably: i) how the server aggregates the local information coming from each client, and ii) how to make sure each client meaningfully learns using their local data and computing device. The first challenge was partially addressed in Reddi et al. (2021), where adaptive optimization methods such as Adam (Kingma & Ba, 2014) were utilized in the aggregation step. Yet, the second challenge remains largely unaddressed.

Local data of each client is not shared, which intrinsically introduces heterogeneity in terms of the size and the distribution of local datasets. That is, $\mathcal{D}_i$ differs for each client $i$, as well as the number of samples $z \sim \mathcal{D}_i$. Consequently, $f_i(x)$ can vastly differ from client to client, making the problem in (1) hard to optimize. Moreover, the sheer amount of local updates is far larger than the number of aggregation steps, due to the high communication cost, typically 3-4 orders of magnitude more expensive than local computation, in distributed settings (Lan et al., 2020). As a result, extensive fine-tuning of the client optimizers is often required to achieve good performance in FL scenarios.

For instance, experimental results of the well-known FedAvg algorithm were obtained after performing a grid-search over typically 11-13 step sizes for the clients' SGD (McMahan et al., 2017, Section 3), as SGD (and its variants) are highly sensitive to the step size (Toulis & Airoldi, 2017; Assran & Rabbat, 2020; Kim et al., 2022b). Similarly, in Reddi et al. (2021), 6 different client step sizes were grid-searched for different tasks, and not surprisingly, each task requires a different client step size to obtain the best result, regardless of the server-side adaptivity (Reddi et al., 2021, Table 8). Importantly, even these fine-tunings are done under the setting that all clients use the same step size, which is sub-optimal, given that $f_i$ can be vastly different per client; we analyze this further in Section 2.

Figure 1: Illustration of the effect of not properly tuning the client step sizes. In (A), each client optimizer uses the best step size from grid-search. Then, the same step size from (A) is intentionally used in settings (B) and (C). Only ∆-SGD works well across all settings without additional tuning.

Initial examination as motivation. The implication of not properly tuning the client optimizer is highlighted in Figure 1. We plot the progress of test accuracies for different client optimizers, where, for all the other test cases, we intentionally use the same step size rules that were fine-tuned for the task in Figure 1(A). There, we train a ResNet-18 for CIFAR-10 dataset classification within an FL setting, where the best step sizes for each (client) optimizer were used after grid-search; we defer the experimental details to Section 4. Hence, all methods perform reasonably well, although ∆-SGD, our proposed method, achieves noticeably better test accuracy when compared to non-adaptive SGD variants (e.g., a 5% difference in final classification accuracy from SGDM), and comparable final accuracy only with adaptive SGD variants, like Adam and Adagrad.

In Figure 1(B), we train a shallow CNN for MNIST classification, using the same step sizes from (A). MNIST classification is now considered an easy task, and therefore SGD with the same constant and decaying step sizes from Figure 1(A) works well. However, with momentum, SGDM exhibits highly oscillating behavior, which results in slow progress and poor final accuracy, especially without decaying the step size. Adaptive optimizers, like Adam and Adagrad, show similar behavior, falling short of achieving good final accuracy, compared to their performance in the case of Figure 1(A).

Similarly, in Figure 1(C), we plot the test accuracy progress for CIFAR-100 classification trained on a ResNet-50, again using the same step size rules as before. Contrary to Figure 1(B), SGD with momentum (SGDM) works better than SGD, both with the constant and the decaying step sizes. Adam becomes a good optimizer again, but its sibling, Adagrad, performs worse than SGDM. On the contrary, our proposed method, ∆-SGD, which we introduce in Section 3, achieves superior performance in all cases without any additional tuning.

The above empirical observations beg answers to important and non-trivial questions in training FL tasks using variants of SGD methods as the client optimizer: Should momentum be used? Should the step size be decayed? If so, when?
Unfortunately, Figure 1 indicates that the answers to these questions highly vary depending on the setting; once the dataset itself or how the dataset is distributed among different clients changes, or once the model architecture changes, the client optimizers have to be properly re-tuned to ensure good performance. Perhaps surprisingly, the same holds for adaptive methods like Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014).

Our hypothesis and contributions. Our paper takes a stab in this direction: we propose DELTA-SGD (DistributEd LocaliTy Adaptive SGD), a simple adaptive distributed SGD scheme that can automatically tune its step size based on the available local data. We will refer to our algorithm as ∆-SGD in the rest of the text. Our contributions can be summarized as follows:

- We propose ∆-SGD, which has two implications in FL settings: i) each client can use its own step size, and ii) each client's step size adapts to the local smoothness of $f_i$ (hence LocaliTy Adaptive), and can even increase during local iterations. Moreover, due to the simplicity of the proposed step size, ∆-SGD is agnostic to the loss function and the server optimizer, and thus can be combined with methods that use different loss functions such as FedProx (Li et al., 2020) or MOON (Li et al., 2021), or adaptive server methods such as FedAdam (Reddi et al., 2021).

- We provide a convergence analysis of ∆-SGD in a general nonconvex setting (Theorem 1). We also prove convergence in the convex setting; due to space constraints, we only state the result of the general nonconvex case in Theorem 1, and defer other theorems and proofs to Appendix A.

- We evaluate our approach on several benchmark datasets and demonstrate that ∆-SGD achieves superior performance compared to other state-of-the-art FL methods. Our experiments show that ∆-SGD can effectively adapt the client step size to the underlying local data distribution, and achieve convergence of the global model without any additional tuning. Our approach can help overcome the client step size tuning challenge in FL and enable more efficient and effective collaborative learning in distributed systems.

2 PRELIMINARIES AND RELATED WORK

A bit of background on optimization theory. Arguably the most fundamental optimization algorithm for minimizing a function $f(x): \mathbb{R}^d \to \mathbb{R}$ is gradient descent (GD), which iterates with the step size $\eta_t$ as: $x_{t+1} = x_t - \eta_t \nabla f(x_t)$. Under the assumption that $f$ is globally $L$-smooth, that is:

$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \forall x, y \in \mathbb{R}^d, \qquad (2)$$

the step size $\eta_t = \frac{1}{L}$ is the optimal step size for GD, which converges for convex $f$ at the rate:

$$f(x_{t+1}) - f(x^\star) \le \frac{L \|x_0 - x^\star\|^2}{2(2t+1)}. \qquad (3)$$
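As a small numerical illustration of (2)-(3) (our own, not from the paper), the snippet below runs GD with the step size $1/L$ on a convex quadratic, where $L$ is the largest eigenvalue of the Hessian.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20))
A = M.T @ M                       # f(x) = 0.5 x^T A x is convex and L-smooth
L = np.linalg.eigvalsh(A).max()   # global smoothness constant: largest eigenvalue of A

x = rng.normal(size=20)
for t in range(200):
    x = x - (1.0 / L) * (A @ x)   # x_{t+1} = x_t - (1/L) * grad f(x_t)

# f(x_T) - f(x*) shrinks like O(L ||x_0 - x*||^2 / T), cf. (3), since f(x*) = 0 here
print("f(x_T) =", 0.5 * x @ A @ x)
```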
Limitations of popular step sizes. In practice, however, constants like $L$ are rarely known (Boyd et al., 2004). As such, there has been a plethora of efforts in the optimization community to develop a universal step size rule or method that does not require a priori knowledge of problem constants, so that an optimization algorithm can work as plug-and-play (Nesterov, 2015; Orabona & Pál, 2016; Levy et al., 2018). Major lines of work in this direction include: i) line-search (Armijo, 1966; Paquette & Scheinberg, 2020), ii) adaptive learning rates (e.g., Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014)), and iii) the Polyak step size (Polyak, 1969), to name a few.

Unfortunately, the pursuit of finding the ideal step size is still active (Loizou et al., 2021; Defazio & Mishchenko, 2023; Bernstein et al., 2023), as the aforementioned approaches have limitations. For instance, to use line search, solving a sub-routine is required, inevitably incurring additional evaluations of the function and/or the gradient. To use methods like Adagrad and Adam, the knowledge of $D$, the distance from the initial point to the solution set, is required to ensure good performance (Defazio & Mishchenko, 2023). Similarly, the Polyak step size requires knowledge of $f(x^\star)$ (Hazan & Kakade, 2019), which is not always known a priori, and it is unclear what value to use for an arbitrary loss function and model; see also Section 4, where not properly estimating $D$ or $f(x^\star)$ results in suboptimal performance of relevant client optimizers in FL scenarios.

Implications in distributed/FL scenarios. The global $L$-smoothness in (2) needs to be satisfied for all $x$ and $y$, implying that even a finite $L$ can be arbitrarily large. This results in a small step size, leading to slow convergence, as seen in (3). Malitsky & Mishchenko (2020) attempted to resolve this issue by proposing a step size for (centralized) GD that depends on the local smoothness of $f$, which by definition is smaller than the global smoothness constant $L$ in (2). ∆-SGD is inspired by this work.

Naturally, the challenge is aggravated in the distributed case. To solve (1) under the assumption that each $f_i$ is $L_i$-smooth, a step size of the form $1/L_{\max}$, where $L_{\max} := \max_i L_i$, is often used (Yuan et al., 2016; Scaman et al., 2017; Qu & Li, 2019) to ensure convergence (Uribe et al., 2020). Yet, one can easily imagine a situation where $L_i \gg L_j$ for $i \neq j$, in which case the convergence of $f_j(x)$ can be arbitrarily slow by using a step size of the form $1/L_{\max}$. Therefore, to ensure that each client learns useful information in FL settings as in (1), ideally: i) each agent should be able to use its own step size, instead of crude ones like $1/L_{\max}$ for all agents; ii) the individual step size should be locally adaptive to the function $f_i$ (i.e., even $1/L_i$ can be too crude, analogously to the centralized GD case); and iii) the step size should not depend on problem constants like $L_i$ or $\mu_i$, which are often unknown. In Section 3, we introduce DistributEd LocaliTy Adaptive SGD (∆-SGD), which satisfies all of the aforementioned desiderata.

Related work on FL. The FL literature is vast; here, we focus on the results that closely relate to our work. The FedAvg algorithm (McMahan et al., 2017) is one of the simplest FL algorithms, which averages client parameters after some local iterations. Reddi et al. (2021) showed that FedAvg is a special case of a meta-algorithm, where both the clients and the server use the SGD optimizer, with the server learning rate being 1. To handle data heterogeneity and model drift, they proposed using adaptive optimizers at the server. This results in algorithms such as FedAdam and FedYogi, which respectively use Adam (Kingma & Ba, 2014) and Yogi (Zaheer et al., 2018) as server optimizers. Our approach is orthogonal and complementary to this, as we propose a client adaptive optimizer that can be easily combined with these server-side aggregation methods (c.f., Appendix B.4).

Other approaches have handled the heterogeneity by changing the loss function. FedProx (Li et al., 2020) adds an ℓ2-norm proximal term to the local objective to handle data heterogeneity; a minimal sketch of this modification is given below.
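The following is a hedged PyTorch sketch of the FedProx-style proximal modification; the helper name and the value $\mu = 0.01$ are illustrative choices on our part, not taken from the paper or the FedProx code.

```python
import torch
import torch.nn as nn

def fedprox_local_loss(model, global_params, batch_loss, mu=0.01):
    """FedProx-style local objective: the ordinary batch loss plus a proximal term
    (mu/2) * ||x - x_t||^2 that pulls the local model x toward the current global
    model x_t. The value mu = 0.01 is an arbitrary placeholder."""
    prox = sum(torch.sum((p - g.detach()) ** 2)
               for p, g in zip(model.parameters(), global_params))
    return batch_loss + 0.5 * mu * prox

# tiny usage example with a linear model and random data
model = nn.Linear(10, 2)
global_params = [p.clone().detach() for p in model.parameters()]  # snapshot of x_t
inputs, targets = torch.randn(8, 10), torch.randint(0, 2, (8,))
base_loss = nn.functional.cross_entropy(model(inputs), targets)
loss = fedprox_local_loss(model, global_params, base_loss, mu=0.01)
loss.backward()
```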
Similarly, MOON (Li et al., 2021) uses a model-contrastive loss between the current and previous models. Again, our proposed method can be seamlessly combined with these approaches (c.f., Table 2b, Appendix B.6 and B.5).

The closest related work to ours is a concurrent work by Mukherjee et al. (2023). There, the authors utilize the stochastic Polyak step size (Loizou et al., 2021) in FL settings. We do include SPS results in Section 4. There are some previous works that considered client adaptivity, including AdaAlter (Xie et al., 2019a) and Wang et al. (2021). Both works utilize an adaptive client optimizer similar to Adagrad. However, AdaAlter incurs twice the memory compared to FedAvg, as AdaAlter requires communicating both the model parameters as well as the client "accumulators"; similarly, Wang et al. (2021) requires complicated local and global correction steps to achieve good performance. In short, previous works that utilize client adaptivity require modification of the server-side aggregation; without such heuristics, Adagrad (Duchi et al., 2011) exhibits suboptimal performance, as seen in Table 1.

Lastly, another line of work attempts to handle heterogeneity by utilizing control-variates (Praneeth Karimireddy et al., 2019; Liang et al., 2019; Karimireddy et al., 2020), a similar idea to variance reduction from single-objective optimization (Johnson & Zhang, 2013; Gower et al., 2020). While theoretically appealing, these methods either require periodic full gradient computation or are not usable under partial client participation, as all the clients must maintain a state throughout all rounds.

3 DELTA(∆)-SGD: DISTRIBUTED LOCALITY ADAPTIVE SGD

We now introduce ∆-SGD. In its simplest form, client $i$ at communication round $t$ uses the step size:

$$\eta^i_t = \min\left\{ \frac{\|x^i_t - x^i_{t-1}\|}{2\,\|\nabla f_i(x^i_t) - \nabla f_i(x^i_{t-1})\|},\; \sqrt{1 + \theta^i_{t-1}}\,\eta^i_{t-1} \right\}, \qquad \theta^i_{t-1} = \eta^i_{t-1}/\eta^i_{t-2}. \qquad (4)$$

The first part of the $\min\{\cdot,\cdot\}$ approximates the (inverse of the) local smoothness of $f_i$,¹ and the second part controls how fast $\eta^i_t$ can increase. Indeed, ∆-SGD with (4) enjoys the following decrease in the Lyapunov function:

$$\|x_{t+1} - x^\star\|^2 + \frac{1}{2m}\sum_{i=1}^m \|x^i_{t+1} - x^i_t\|^2 + \frac{2}{m}\sum_{i=1}^m \eta^i_{t+1}\theta^i_{t+1}\big[f_i(x^i_t) - f_i(x^\star)\big] \;\le\; \|x_t - x^\star\|^2 + \frac{1}{2m}\sum_{i=1}^m \|x^i_t - x^i_{t-1}\|^2 + \frac{2}{m}\sum_{i=1}^m \eta^i_t\theta^i_t\big[f_i(x^i_{t-1}) - f_i(x^\star)\big], \qquad (5)$$

when the $f_i$'s are assumed to be convex (c.f., Theorem 5 in Appendix A). For the FL settings, we extend (4) by including stochasticity and local iterations, as summarized in Algorithm 1 and visualized in Figure 7. For brevity, we use $\tilde{\nabla} f_i(x) = \frac{1}{|\mathcal{B}|}\sum_{z \in \mathcal{B}} \nabla F_i(x, z)$ to denote the stochastic gradient with batch size $|\mathcal{B}| = b$.

¹ Notice that $\|\nabla f_i(x^i_t) - \nabla f_i(x^i_{t-1})\| \le L_{i,t}\,\|x^i_t - x^i_{t-1}\|$, where $L_{i,t}$ is the local smoothness of $f_i$, so the first term of the min is at least $\frac{1}{2L_{i,t}}$.

Algorithm 1 DELTA(∆)-SGD: DistributEd LocaliTy Adaptive SGD
1: input: $x_0 \in \mathbb{R}^d$, $\eta_0$, $\theta_0$, $\gamma > 0$, and $p \in (0, 1)$.
2: for each round $t = 0, 1, \dots, T-1$ do
3:   sample a subset $S_t$ of clients with size $|S_t| = p \cdot m$
4:   for each machine in parallel for $i \in S_t$ do
5:     set $x^i_{t,0} = x_t$
6:     set $\eta^i_{t,0} = \eta_0$ and $\theta^i_{t,0} = \theta_0$
7:     for local step $k \in [K]$ do
8:       $x^i_{t,k} = x^i_{t,k-1} - \eta^i_{t,k-1}\, \tilde{\nabla} f_i(x^i_{t,k-1})$  ▷ update local parameter with ∆-SGD
9:       $\eta^i_{t,k} = \min\Big\{ \frac{\gamma\, \|x^i_{t,k} - x^i_{t,k-1}\|}{2\,\|\tilde{\nabla} f_i(x^i_{t,k}) - \tilde{\nabla} f_i(x^i_{t,k-1})\|},\; \sqrt{1 + \theta^i_{t,k-1}}\,\eta^i_{t,k-1} \Big\}$
10:      $\theta^i_{t,k} = \eta^i_{t,k} / \eta^i_{t,k-1}$  ▷ (lines 9 & 10) update locality-adaptive step size
11:    end for
12:  end for
13:  $x_{t+1} = \frac{1}{|S_t|} \sum_{i \in S_t} x^i_{t,K}$  ▷ server-side aggregation
14: end for
15: return $x_T$
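The following NumPy sketch implements the inner loop of Algorithm 1 (lines 5-11), written from the pseudocode rather than copied from the official repository; the defaults $\eta_0 = 0.2$, $\theta_0 = 1$, $\gamma = 2$ match the values reported later in Section 4, and the small constant added to the denominator is our own safeguard against division by zero.

```python
import numpy as np

def delta_sgd_local(x_t, grad_fn, K, eta0=0.2, theta0=1.0, gamma=2.0):
    """One client's K local steps with the Delta-SGD step size (lines 5-11 of
    Algorithm 1). grad_fn(x) returns a (stochastic) gradient of the client loss f_i;
    the same gradient is reused for the parameter update and the step size, as in
    the paper's experiments. The 1e-12 term guards against a zero denominator and
    is not part of the algorithm."""
    x_prev = np.array(x_t, dtype=float)         # x^i_{t,0} = x_t
    g_prev = grad_fn(x_prev)
    eta, theta = eta0, theta0
    x = x_prev - eta * g_prev                   # line 8 for k = 1, using eta^i_{t,0}
    for _ in range(1, K):
        g = grad_fn(x)
        eta_new = min(                          # line 9: locality-adaptive step size
            gamma * np.linalg.norm(x - x_prev) / (2.0 * np.linalg.norm(g - g_prev) + 1e-12),
            np.sqrt(1.0 + theta) * eta,
        )
        theta = eta_new / eta                   # line 10: theta = eta_{t,k} / eta_{t,k-1}
        eta = eta_new
        x_prev, g_prev = x, g
        x = x - eta * g                         # line 8 for the next local step
    return x

# toy usage: a least-squares client, f_i(x) = 0.5 ||A x - b||^2, with full-batch gradients
rng = np.random.default_rng(1)
A, b = rng.normal(size=(30, 5)), rng.normal(size=30)
x_local = delta_sgd_local(np.zeros(5), lambda x: A.T @ (A @ x - b) / len(b), K=20)
```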
We make a few remarks on Algorithm 1. First, the input $\theta_0 > 0$ can be quite arbitrary, as it can be corrected, per client, in the first local iteration (line 10); similarly for $\eta_0 > 0$, although $\eta_0$ should be sufficiently small to prevent divergence in the first local step. Second, we include the amplifier $\gamma$ in the first condition of the step size (line 9), but this is only needed for Theorem 1.² Last, $\tilde{\nabla} f_i(x^i_{t,k-1})$ shows up twice: in updating $x^i_{t,k}$ (line 8) and $\eta^i_{t,k}$ (line 9). Thus, one can use the same or different batches; we use the same batches in experiments to prevent additional gradient evaluations.

² For all our experiments, we use the default value $\gamma = 2$ from the original implementation in https://github.com/ymalitsky/adaptive_GD/blob/master/pytorch/optimizer.py.

3.1 CONVERGENCE ANALYSIS

Technical challenges. The main difference in analyzing Algorithm 1 compared to other decentralized optimization algorithms is that the step size $\eta^i_{t,k}$ not only depends on the round $t$ and the local step $k$, but also on $i$, the client. To deal with the client-dependent step size, we require a slightly non-standard assumption on the dissimilarity between $f$ and $f_i$, as detailed below.

Assumption 1. There exist nonnegative constants $\sigma$, $\rho$, and $G$ such that for all $i \in [M]$ and $x \in \mathbb{R}^d$:

$$\mathbb{E}\|\nabla F_i(x, z) - \nabla f_i(x)\|^2 \le \sigma^2, \quad \text{(bounded variance)} \qquad (1a)$$
$$\|\nabla f_i(x)\| \le G, \quad \text{(bounded gradient)} \qquad (1b)$$
$$\|\nabla f_i(x) - \nabla f(x)\|^2 \le \rho\, \|\nabla f(x)\|^2. \quad \text{(strong growth of dissimilarity)} \qquad (1c)$$

Assumption 1a has been standard in the stochastic optimization literature (Ghadimi & Lan, 2013; Stich, 2018; Khaled et al., 2020); recently, this assumption has been relaxed to weaker ones such as expected smoothness (Gower et al., 2021), but this is out of the scope of this work. Assumption 1b is fairly standard in nonconvex optimization (Zaheer et al., 2018; Ward et al., 2020), and often used in the FL setting (Xie et al., 2019a;b; Reddi et al., 2021). Assumption 1c is reminiscent of the strong growth assumption in stochastic optimization (Schmidt & Roux, 2013), which is still used in recent works (Cevher & Vũ, 2019; Vaswani et al., 2019a;b). To the best of our knowledge, this is the first theoretical analysis of the FL setting where clients can use their own step size. We now present the main theorem.

Theorem 1. Let Assumption 1 hold, with $\rho = O(1)$. Further, suppose that $\gamma = O\big(\frac{1}{K\sqrt{T}}\big)$ and $\eta_0 = O(\gamma)$. Then, the following property holds for Algorithm 1, for $T$ sufficiently large:

$$\frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\|\nabla f(x_t)\|^2 \le O\!\left(\frac{\Psi_1 \tilde{L}}{\sqrt{T}}\right) + O\!\left(\frac{\Psi_2 \tilde{L}^2}{T}\right) + O\!\left(\frac{\Psi_2 \tilde{L}^3}{\sqrt{T}\, T}\right),$$

where $\Psi_1 = \max\big\{\frac{\sigma^2}{b}, f(x_0) - f(x^\star)\big\}$ and $\Psi_2 = \frac{\sigma^2}{b} + G^2$ are global constants, with $b = |\mathcal{B}|$ being the batch size; $\tilde{L}$ is a constant at most the maximum of the local smoothness, i.e., $\max_{i,t} L_{i,t}$, where $L_{i,t}$ is the local smoothness of $f_i$ at round $t$.

The convergence result in Theorem 1 implies sublinear convergence to an $\varepsilon$-first-order stationary point with at least $T = O(\varepsilon^{-2})$ communication rounds. We remind again that the conditions on $\gamma$ and $\eta_0$ are only required for our theory.³ Importantly, we did not assume that $f_i$ is globally $L$-smooth, a standard assumption in the nonconvex optimization and FL literature (Ward et al., 2020; Koloskova et al., 2020; Li et al., 2020; Reddi et al., 2021); instead, we obtain a smaller quantity, $\tilde{L}$, through our analysis; due to space limitations, we defer the details and the proof to Appendix A.
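For completeness, below is a compact sketch of the server side of Algorithm 1 (client sampling and plain averaging), reusing `delta_sgd_local` from the sketch above; `delta_sgd_server` and the toy least-squares clients are our own illustrative names, not part of the paper's codebase.

```python
import numpy as np

def delta_sgd_server(x0, client_grads, T, K, p=0.1, seed=0):
    """Outer loop of Algorithm 1 (a sketch): each round, sample a fraction p of the
    m clients, run K Delta-SGD local steps on each (delta_sgd_local defined above),
    and average the returned parameters, i.e., plain FedAvg aggregation."""
    rng = np.random.default_rng(seed)
    x, m = np.array(x0, dtype=float), len(client_grads)
    n_sampled = max(1, int(p * m))
    for _ in range(T):
        S_t = rng.choice(m, size=n_sampled, replace=False)     # participating clients
        locals_ = [delta_sgd_local(x, client_grads[i], K) for i in S_t]
        x = np.mean(locals_, axis=0)                           # line 13: server average
    return x

# toy usage: m = 10 heterogeneous least-squares clients
rng = np.random.default_rng(2)
client_grads = []
for _ in range(10):
    A, b = rng.normal(size=(40, 5)), rng.normal(size=40)
    client_grads.append(lambda x, A=A, b=b: A.T @ (A @ x - b) / len(b))
x_final = delta_sgd_server(np.zeros(5), client_grads, T=50, K=10, p=0.5)
```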
4 EXPERIMENTAL SETUP AND RESULTS

We now introduce the experimental setup and discuss the results. Our implementation can be found at https://github.com/jlylekim/auto-tuned-FL.

Datasets and models. We consider image classification and text classification tasks. We use four datasets for image classification: MNIST, FMNIST, CIFAR-10, and CIFAR-100 (Krizhevsky et al., 2009). For MNIST and FMNIST, we train a shallow CNN with two convolutional and two fully-connected layers, followed by dropout and ReLU activations. For CIFAR-10, we train a ResNet-18 (He et al., 2016). For CIFAR-100, we train both ResNet-18 and ResNet-50 to study the effect of changing the model architecture. For text classification, we use two datasets: the DBpedia and AGnews datasets (Zhang et al., 2015), and train a DistilBERT (Sanh et al., 2019) for classification.

For each dataset, we create a federated version by randomly partitioning the training data among 100 clients (50 for text classification), with each client getting 500 examples. To control the level of non-iidness, we apply latent Dirichlet allocation (LDA) over the labels following Hsu et al. (2019), where the degree of class heterogeneity can be parameterized by the Dirichlet concentration parameter $\alpha$. The class distributions for different $\alpha$'s are visualized in Figure 8 in Appendix B.9.

FL setup and optimizers. We fix the number of clients to be 100 for image classification and 50 for text classification; we randomly sample 10% as participating clients. We perform $E$ local epochs of training over each client's dataset. We utilize mini-batch gradients of size $b = 64$ for image classification and $b = 16$ for text classification, leading to $K \approx E \cdot \frac{500}{b}$ local gradient steps; we use $E = 1$ for all settings. For client optimizers, we compare stochastic gradient descent (SGD), SGD with momentum (SGDM), adaptive methods including Adam (Kingma & Ba, 2014) and Adagrad (Duchi et al., 2011), SGD with stochastic Polyak step size (SPS) (Loizou et al., 2021), and our proposed method: ∆-SGD in Algorithm 1. As our focus is on client adaptivity, we mostly present the results using FedAvg (McMahan et al., 2017) as the server optimizer; additional results using FedAdam can be found in Appendix B.4.

Hyperparameters. For each optimizer, we perform a grid search of learning rates on a single task: CIFAR-10 classification trained with a ResNet-18, with Dirichlet concentration parameter $\alpha = 0.1$; for the rest of the settings, we use the same learning rates. For SGD, we perform a grid search with $\eta \in \{0.01, 0.05, 0.1, 0.5\}$. For SGDM, we use the same grid for $\eta$ and use momentum parameter $\beta = 0.9$. To properly account for the SGD(M) fine-tuning typically done in practice, we also test dividing the step size by 10 after 50%, and again by 10 after 75%, of the total training rounds (LR decay). For Adam and Adagrad, we grid search with $\eta \in \{0.001, 0.01, 0.1\}$. For SPS, we use the default setting of the official implementation.⁴ For ∆-SGD, we append $\delta$ in front of the second condition: $\sqrt{1 + \delta\,\theta^i_{t,k-1}}\,\eta^i_{t,k-1}$, following Malitsky & Mishchenko (2020), and use $\delta = 0.1$ for all experiments.⁵ Finally, for the number of rounds $T$, we use 500 for MNIST, 1000 for FMNIST, 2000 for CIFAR-10 and CIFAR-100, and 100 for the text classification tasks.

³ In all experiments, we use the default settings $\gamma = 2$, $\eta_0 = 0.2$, and $\theta_0 = 1$ without additional tuning.
⁴ That is, we use $f_i^\star = 0$ and $c = 0.5$ for the SPS step size $\frac{f_i(x) - f_i^\star}{c\,\|\nabla f_i(x)\|^2}$. The official implementation can be found at https://github.com/IssamLaradji/sps.
⁵ We also demonstrate in Appendix B.1 that $\delta$ has very little impact on the final accuracy of ∆-SGD.
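Before turning to the results, the snippet below sketches one common way to implement the label-based Dirichlet (LDA) partitioning described above; it is our own illustration, and the paper's exact preprocessing may differ in details such as how exhausted classes are handled.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, samples_per_client=500, seed=0):
    """Label-based Dirichlet (LDA) partitioning in the spirit of Hsu et al. (2019):
    each client draws class proportions from Dir(alpha) and samples that many
    examples per class. Smaller alpha means more heterogeneous (non-iid) clients."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    pools = {c: list(rng.permutation(np.where(labels == c)[0])) for c in classes}
    partition = []
    for _ in range(num_clients):
        proportions = rng.dirichlet(alpha * np.ones(len(classes)))  # this client's class mixture
        counts = rng.multinomial(samples_per_client, proportions)
        idx = []
        for c, k in zip(classes, counts):
            take = min(k, len(pools[c]))        # do not overdraw an exhausted class
            idx.extend(pools[c][:take])
            pools[c] = pools[c][take:]
        partition.append(np.array(idx))
    return partition

# example: 100 clients, ~500 samples each, over 50,000 labels with alpha = 0.1
labels = np.random.default_rng(3).integers(0, 10, size=50_000)
client_indices = dirichlet_partition(labels, num_clients=100, alpha=0.1)
```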
4.1 RESULTS

We clarify again that the best step size for each client optimizer was tuned via grid-search for a single task: CIFAR-10 classification trained with a ResNet-18, with Dirichlet concentration parameter α = 0.1. We then intentionally use the same step size in all other tasks to highlight two points: i) ∆-SGD works well without any tuning across different datasets, model architectures, and degrees of heterogeneity; ii) other optimizers perform suboptimally without additional tuning.

| Dir(α) | Optimizer | MNIST (CNN) | FMNIST (CNN) | CIFAR-10 (ResNet-18) | CIFAR-100 (ResNet-18) | CIFAR-100 (ResNet-50) |
|---|---|---|---|---|---|---|
| α = 1 | SGD | 98.3 (0.2) | 86.5 (0.8) | 87.7 (2.1) | 57.7 (4.2) | 53.0 (12.8) |
| | SGD (↓) | 97.8 (0.7) | 86.3 (1.0) | 87.8 (2.0) | 61.9 (0.0) | 60.9 (4.9) |
| | SGDM | 98.5 (0.0) | 85.2 (2.1) | 88.7 (1.1) | 58.8 (3.1) | 60.5 (5.3) |
| | SGDM (↓) | 98.4 (0.1) | 87.2 (0.1) | 89.3 (0.5) | 61.4 (0.5) | 63.3 (2.5) |
| | Adam | 94.7 (3.8) | 71.8 (15.5) | 89.4 (0.4) | 55.6 (6.3) | 61.4 (4.4) |
| | Adagrad | 64.3 (34.2) | 45.5 (41.8) | 86.6 (3.2) | 53.5 (8.4) | 51.9 (13.9) |
| | SPS | 10.1 (88.4) | 85.9 (1.4) | 82.7 (7.1) | 1.0 (60.9) | 50.0 (15.8) |
| | ∆-SGD | 98.4 (0.1) | 87.3 (0.0) | 89.8 (0.0) | 61.5 (0.4) | 65.8 (0.0) |
| α = 0.1 | SGD | 98.1 (0.0) | 83.6 (2.8) | 72.1 (12.9) | 54.4 (6.7) | 44.2 (19.9) |
| | SGD (↓) | 98.0 (0.1) | 84.7 (1.7) | 78.4 (6.6) | 59.3 (1.8) | 48.7 (15.4) |
| | SGDM | 97.6 (0.5) | 83.6 (2.8) | 79.6 (5.4) | 58.8 (2.3) | 52.3 (11.8) |
| | SGDM (↓) | 98.0 (0.1) | 86.1 (0.3) | 77.9 (7.1) | 60.4 (0.7) | 52.8 (11.3) |
| | Adam | 96.4 (1.7) | 80.4 (6.0) | 85.0 (0.0) | 55.4 (5.7) | 58.2 (5.9) |
| | Adagrad | 89.9 (8.2) | 46.3 (40.1) | 84.1 (0.9) | 49.6 (11.5) | 48.0 (16.1) |
| | SPS | 96.0 (2.1) | 85.0 (1.4) | 70.3 (14.7) | 42.2 (18.9) | 42.2 (21.9) |
| | ∆-SGD | 98.1 (0.0) | 86.4 (0.0) | 84.5 (0.5) | 61.1 (0.0) | 64.1 (0.0) |
| α = 0.01 | SGD | 96.8 (0.7) | 79.0 (1.2) | 22.6 (11.3) | 30.5 (1.3) | 24.3 (7.1) |
| | SGD (↓) | 97.2 (0.3) | 79.3 (0.9) | 33.9 (0.0) | 30.3 (1.5) | 24.6 (6.8) |
| | SGDM | 77.9 (19.6) | 75.7 (4.5) | 28.4 (5.5) | 24.8 (7.0) | 22.0 (9.4) |
| | SGDM (↓) | 94.0 (3.5) | 79.5 (0.7) | 29.0 (4.9) | 20.9 (10.9) | 14.7 (16.7) |
| | Adam | 80.8 (16.7) | 60.6 (19.6) | 22.1 (11.8) | 18.2 (13.6) | 22.6 (8.8) |
| | Adagrad | 72.4 (25.1) | 45.9 (34.3) | 12.5 (21.4) | 25.8 (6.0) | 22.2 (9.2) |
| | SPS | 69.7 (27.8) | 44.0 (36.2) | 21.5 (12.4) | 22.0 (9.8) | 17.4 (14.0) |
| | ∆-SGD | 97.5 (0.0) | 80.2 (0.0) | 31.6 (2.3) | 31.8 (0.0) | 31.4 (0.0) |

Table 1: Experimental results based on the settings detailed in Section 4. The best accuracy (±0.5%) for each task is shown in bold. The subscript (x.x) is the performance difference from the best result, and is highlighted in pink when it is bigger than 2%. The down-arrow symbol (↓) indicates step-wise learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of the total rounds.

Figure 2: The effect of stronger heterogeneity on different client optimizers, induced by the Dirichlet concentration parameter α ∈ {0.01, 0.1, 1}. ∆-SGD remains robust in all cases, whereas other methods show significant performance degradation when changing the level of heterogeneity α, or when changing the setting (model/architecture).

Changing the level of non-iidness. We first investigate how the performance of different client optimizers degrades with increasing degrees of heterogeneity, by varying the concentration parameter α ∈ {1, 0.1, 0.01} multiplied to the prior of the Dirichlet distribution, following Hsu et al. (2019). Three illustrative cases are visualized in Figure 2. We remind that the step sizes are tuned for the task in Figure 2(A). For this task, with α = 1 (i.e., closer to iid), all methods perform better than with α = 0.1, as expected.
With α = 0.01, which is highly non-iid (c.f., Figure 8 in Appendix B.9), we see a significant drop in performance for all methods. SGD with LR decay and ∆-SGD perform the best, while adaptive methods like Adam (85% → 22.1%) and Adagrad (84.1% → 12.5%) degrade noticeably more than the other methods.

Figure 2(B) shows the results of training FMNIST with a CNN. This problem is generally considered easier than CIFAR-10 classification. The experiments show that the performance degradation with varying α's is much milder in this case. Interestingly, adaptive methods such as Adam, Adagrad, and SPS perform much worse than the other methods.

Lastly, in Figure 2(C), results for CIFAR-100 classification trained with a ResNet-50 are illustrated. ∆-SGD exhibits superior performance in all cases of α. Unlike MNIST and FMNIST, Adam enjoys the second (α = 0.1) or the third (α = 1) best performance in this task, complicating how one should tune Adam for the task at hand. Other methods, including SGD with and without momentum/LR decay, Adagrad, and SPS, perform much worse than ∆-SGD.

Changing the model architecture/dataset. For this remark, let us focus on CIFAR-100 trained on ResNet-18 versus on ResNet-50, with α = 0.1, illustrated in Figure 3(A). SGD and SGDM (both with and without LR decay), Adagrad, and SPS perform worse using ResNet-50 than ResNet-18. This is a counter-intuitive behavior, as one would expect to get better accuracy by using a more powerful model, even though the step sizes are tuned on ResNet-18. ∆-SGD is an exception: without any additional tuning, ∆-SGD improves its performance. Adam also improves similarly, but the achieved accuracy is significantly worse than that of ∆-SGD.

We now focus on cases where the dataset changes, but the model architecture remains the same. We mainly consider two cases, illustrated in Figure 3(B) and (C): when a CNN is trained for classifying MNIST versus FMNIST, and when a ResNet-18 is trained for CIFAR-10 versus CIFAR-100. From (B), one can infer the difficulty in tuning SGDM: without LR decay, SGDM only achieves around 78% accuracy for MNIST classification, while SGD without LR decay still achieves over 96% accuracy. For FMNIST, on the other hand, SGDM performs relatively well, but adaptive methods like Adam, Adagrad, and SPS degrade significantly.

Aggravating the complexity of fine-tuning SGD(M) for MNIST, one can observe from (C) a similar trend for CIFAR-10 trained with a ResNet-18. In that case, ∆-SGD does not achieve the best test accuracy (although it is the second best, with a pretty big margin over the rest), while SGD with LR decay does. However, without LR decay, the accuracy achieved by SGD drops by more than 11%. Again, adaptive methods like Adam, Adagrad, and SPS perform suboptimally in this case.

Figure 3: The effect of changing the dataset and the model architecture on different client optimizers. ∆-SGD retains superior performance without additional tuning when the model or dataset changes, whereas other methods often degrade in performance. (A): CIFAR-100 trained with ResNet-18 versus ResNet-50 (α = 0.1), (B): MNIST versus FMNIST trained with CNN (α = 0.01), (C): CIFAR-10 versus CIFAR-100 trained with ResNet-18 (α = 0.01).

Changing the domain: text classification. In Table 2a, we make a bigger leap, and test the performance of different client optimizers on text classification tasks. We use the DBpedia and AGnews datasets (Zhang et al., 2015), and train a DistilBERT (Sanh et al., 2019) for classification.
Again, the best step size for each client optimizer was tuned for CIFAR-10 image classification trained with a ResNet-18, for α = 0.1. Thus, the type of the dataset changed completely from image to text, as well as the model architecture: from vision to language. Not surprisingly, four methods (SGDM, SGDM(↓), Adam, and Adagrad) ended up learning nothing (i.e., the achieved accuracy is 1/(# of labels)), indicating that the fine-tuned step sizes for these methods using the CIFAR-10 classification task were too large for the considered text classification tasks. ∆-SGD, on the other hand, still achieves competitive accuracies without additional tuning, thanks to the locality adaptive step size. Interestingly, SGD, SGD(↓), and SPS worked well for the text classification tasks, which is contrary to their suboptimal performance for the image classification tasks in Table 1.

Changing the loss function. In Table 2b, we illustrate how ∆-SGD performs when combined with a different loss function, such as FedProx (Li et al., 2020). FedProx aims to address heterogeneity; we thus used the lowest concentration parameter, α = 0.01. We only present a subset of results: CIFAR-10(100) classification with ResNet-18(50) (additional results in Appendix B.6). The results show that ∆-SGD again outperforms all other methods with a significant margin: ∆-SGD is not only the best, but also the performance difference is at least 2.5%, and as large as 26.9%. Yet, compared to the original setting without the proximal term in Table 1, it is unclear whether or not the additional proximal term helps. For ∆-SGD, the accuracy increased for CIFAR-10 with ResNet-18, but slightly got worse for CIFAR-100 with ResNet-50; a similar story applies to the other methods.

| Optimizer | AGnews (DistilBERT) | DBpedia (DistilBERT) |
|---|---|---|
| SGD | 91.1 (0.5) | 96.0 (2.9) |
| SGD (↓) | 91.6 (0.0) | 98.7 (0.2) |
| SGDM | 25.0 (66.6) | 7.1 (91.8) |
| SGDM (↓) | 25.0 (66.6) | 7.1 (91.8) |
| Adam | 25.0 (66.6) | 7.1 (91.8) |
| Adagrad | 25.0 (66.6) | 7.1 (91.8) |
| SPS | 91.5 (0.1) | 98.9 (0.0) |
| ∆-SGD | 90.7 (0.9) | 98.6 (0.3) |

(a) Experimental results for text classification (α = 1).

| Optimizer | CIFAR-10 (ResNet-18) | CIFAR-100 (ResNet-50) |
|---|---|---|
| SGD | 20.0 (13.8) | 25.2 (5.9) |
| SGD (↓) | 31.3 (2.5) | 20.2 (10.8) |
| SGDM | 29.3 (4.4) | 23.8 (7.2) |
| SGDM (↓) | 25.3 (8.5) | 15.0 (16.0) |
| Adam | 28.1 (5.7) | 22.6 (8.4) |
| Adagrad | 19.3 (14.5) | 4.1 (26.9) |
| SPS | 27.6 (6.2) | 16.5 (14.5) |
| ∆-SGD | 33.8 (0.0) | 31.0 (0.0) |

(b) Experimental results using the FedProx loss (α = 0.01).

Table 2: Additional experiments for a different domain (a), and a different loss function (b).

Other findings. We present all the omitted theorems and proofs in Appendix A, and additional empirical findings in Appendix B. Specifically, we demonstrate: the robustness of the performance of ∆-SGD (Appendix B.1), the effect of different numbers of local iterations (Appendix B.2), additional experiments when the size of the local dataset is different per client (Appendix B.3), additional experimental results using FedAdam (Reddi et al., 2021) (Appendix B.4), additional experimental results using the MOON (Appendix B.5) and FedProx (Appendix B.6) loss functions, and three independent trials for a subset of tasks (Appendix B.7). We also visualize the step size conditions for ∆-SGD (Appendix B.8) as well as the degree of heterogeneity (Appendix B.9).

5 CONCLUSION

In this work, we proposed ∆-SGD, a distributed SGD scheme equipped with an adaptive step size that enables each client to use its own step size and adapts to the local smoothness of the function each client is optimizing.
We proved the convergence of -SGD in the general nonconvex setting and presented extensive empirical results, where the superiority of -SGD is shown in various scenarios without any tuning. For future works, extending -SGD to a coordinate-wise step size in the spirit of Duchi et al. (2011); Kingma & Ba (2014), applying the step size to more complex algorithms like (local) factored gradient descent (Kim et al., 2022a), and enabling asynchronous updates (Assran et al., 2020; Toghani et al., 2022; Nguyen et al., 2022) could be interesting directions. Published as a conference paper at ICLR 2024 ACKNOWLEDGMENTS This work is supported by NSF FET: Small No. 1907936, NSF MLWi NS CNS No. 2003137 (in collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF No. 2008555, Rice Inter Disciplinary Excellence Award (IDEA), NSF CCF No. 2211815, NSF DEB No. 2213568, and Lodieska Stockbridge Vaughn Fellowship. Naman Agarwal, Ananda Theertha Suresh, Felix Xinnan X Yu, Sanjiv Kumar, and Brendan Mc Mahan. cp SGD: Communication-efficient and differentially-private distributed SGD. Advances in Neural Information Processing Systems, 31, 2018. Larry Armijo. Minimization of functions having lipschitz continuous first partial derivatives. Pacific Journal of mathematics, 16(1):1 3, 1966. Mahmoud Assran and Michael Rabbat. On the convergence of nesterov s accelerated gradient method in stochastic settings. ar Xiv preprint ar Xiv:2002.12414, 2020. Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson, and Michael G Rabbat. Advances in asynchronous parallel and distributed optimization. Proceedings of the IEEE, 108(11):2013 2031, 2020. Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, and Yisong Yue. Automatic gradient descent: Deep learning without hyperparameters. ar Xiv preprint ar Xiv:2304.05187, 2023. Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. Volkan Cevher and Bang Cˆong V u. On the linear convergence of the stochastic gradient method with constant step-size. Optimization Letters, 13(5):1177 1187, 2019. Aaron Defazio and Konstantin Mishchenko. Learning-rate-free learning by d-adaptation. In International Conference on Machine Learning, pp. 7449 7479. PMLR, 2023. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7), 2011. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341 2368, 2013. Robert M Gower, Mark Schmidt, Francis Bach, and Peter Richt arik. Variance-reduced methods for machine learning. Proceedings of the IEEE, 108(11):1968 1983, 2020. Robert M Gower, Peter Richt arik, and Francis Bach. Stochastic quasi-gradient methods: Variance reduction via jacobian sketching. Mathematical Programming, 188:135 192, 2021. Elad Hazan and Sham Kakade. Revisiting the polyak step size. ar Xiv preprint ar Xiv:1905.00313, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770 778, 2016. Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. ar Xiv preprint ar Xiv:1909.06335, 2019. Rie Johnson and Tong Zhang. 
Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems, 26, 2013. Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in federated learning. ar Xiv preprint ar Xiv:2008.03606, 2020. Published as a conference paper at ICLR 2024 Ahmed Khaled, Konstantin Mishchenko, and Peter Richt arik. Tighter theory for local sgd on identical and heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pp. 4519 4529. PMLR, 2020. Junhyung Lyle Kim, Mohammad Taha Toghani, C esar A Uribe, and Anastasios Kyrillidis. Local stochastic factored gradient descent for distributed quantum state tomography. IEEE Control Systems Letters, 7:199 204, 2022a. Junhyung Lyle Kim, Panos Toulis, and Anastasios Kyrillidis. Convergence and stability of the stochastic proximal point algorithm with momentum. In Learning for Dynamics and Control Conference, pp. 1034 1047. PMLR, 2022b. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ar Xiv preprint ar Xiv:1412.6980, 2014. Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized sgd with changing topology and local updates. In International Conference on Machine Learning, pp. 5381 5393. PMLR, 2020. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Guanghui Lan, Soomin Lee, and Yi Zhou. Communication-efficient algorithms for decentralized and stochastic optimization. Mathematical Programming, 180(1-2):237 284, 2020. Kfir Y Levy, Alp Yurtsever, and Volkan Cevher. Online adaptive methods, universality and acceleration. Advances in neural information processing systems, 31, 2018. Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10713 10722, 2021. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. Proceedings of Machine learning and systems, 2:429 450, 2020. Xianfeng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, and Yifei Cheng. Variance reduced local sgd with lower communication complexity. ar Xiv preprint ar Xiv:1912.12844, 2019. Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak step-size for sgd: An adaptive learning rate for fast convergence. In International Conference on Artificial Intelligence and Statistics, pp. 1306 1314. PMLR, 2021. Yura Malitsky and Konstantin Mishchenko. Adaptive gradient descent without descent. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 6702 6712. PMLR, 2020. Brendan Mc Mahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273 1282. PMLR, 2017. Sohom Mukherjee, Nicolas Loizou, and Sebastian U Stich. Locally adaptive federated learning via stochastic polyak stepsizes. ar Xiv preprint ar Xiv:2307.06306, 2023. Yu Nesterov. Universal gradient methods for convex optimization problems. Mathematical Programming, 152(1-2):381 404, 2015. 
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Mike Rabbat, Mani Malek, and Dzmitry Huba. Federated learning with buffered asynchronous aggregation. In International Conference on Artificial Intelligence and Statistics, pp. 3581 3607. PMLR, 2022. Francesco Orabona and D avid P al. Coin betting and parameter-free online learning. Advances in Neural Information Processing Systems, 29, 2016. Courtney Paquette and Katya Scheinberg. A stochastic line search method with expected complexity analysis. SIAM Journal on Optimization, 30(1):349 376, 2020. Published as a conference paper at ICLR 2024 Boris Teodorovich Polyak. Minimization of nonsmooth functionals. Zhurnal Vychislitel noi Matematiki i Matematicheskoi Fiziki, 9(3):509 521, 1969. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. ar Xiv e-prints, pp. ar Xiv 1910, 2019. Guannan Qu and Na Li. Accelerated distributed nesterov gradient descent. IEEE Transactions on Automatic Control, 65(6):2566 2581, 2019. Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Koneˇcn y, Sanjiv Kumar, and Hugh Brendan Mc Mahan. Adaptive federated optimization. In International Conference on Learning Representations, 2021. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ar Xiv preprint ar Xiv:1910.01108, 2019. Kevin Scaman, Francis Bach, S ebastien Bubeck, Yin Tat Lee, and Laurent Massouli e. Optimal algorithms for smooth and strongly convex distributed optimization in networks. In international conference on machine learning, pp. 3027 3036. PMLR, 2017. Mark Schmidt and Nicolas Le Roux. Fast convergence of stochastic gradient descent under a strong growth condition. ar Xiv preprint ar Xiv:1308.6370, 2013. Sebastian U Stich. Local sgd converges fast and communicates little. ar Xiv preprint ar Xiv:1805.09767, 2018. Mohammad Taha Toghani, Soomin Lee, and C esar A Uribe. Pers A-FL: Personalized Asynchronous Federated Learning. ar Xiv preprint ar Xiv:2210.01176, 2022. Panos Toulis and Edoardo M Airoldi. Asymptotic and finite-sample properties of estimators based on stochastic gradients. 2017. C esar A Uribe, Soomin Lee, Alexander Gasnikov, and Angelia Nedi c. A dual approach for optimal algorithms in distributed optimization over networks. In 2020 Information Theory and Applications Workshop (ITA), pp. 1 37. IEEE, 2020. Sharan Vaswani, Francis Bach, and Mark Schmidt. Fast and faster convergence of sgd for overparameterized models and an accelerated perceptron. In The 22nd international conference on artificial intelligence and statistics, pp. 1195 1204. PMLR, 2019a. Sharan Vaswani, Aaron Mishkin, Issam Laradji, Mark Schmidt, Gauthier Gidel, and Simon Lacoste Julien. Painless stochastic gradient: Interpolation, line-search, and convergence rates. Advances in neural information processing systems, 32, 2019b. Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, and Gauri Joshi. Local adaptivity in federated learning: Convergence and consistency. ar Xiv preprint ar Xiv:2106.02305, 2021. Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. The Journal of Machine Learning Research, 21(1):9047 9076, 2020. Cong Xie, Oluwasanmi Koyejo, Indranil Gupta, and Haibin Lin. 
Local adaalter: Communicationefficient stochastic gradient descent with adaptive learning rates. ar Xiv preprint ar Xiv:1911.09030, 2019a. Cong Xie, Sanmi Koyejo, and Indranil Gupta. Asynchronous federated optimization. ar Xiv preprint ar Xiv:1903.03934, 2019b. Kun Yuan, Qing Ling, and Wotao Yin. On the convergence of decentralized gradient descent. SIAM Journal on Optimization, 26(3):1835 1854, 2016. Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive methods for nonconvex optimization. Advances in neural information processing systems, 31, 2018. Xiang Zhang, Junbo Zhao, and Yann Le Cun. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28, 2015. Published as a conference paper at ICLR 2024 SUPPLEMENTARY MATERIALS FOR ADAPTIVE FEDERATED LEARNING WITH AUTO-TUNED CLIENTS In this appendix, we provide all missing proofs that were not present in the main text, as well as additional plots and experiments. The appendix is organized as follows. In Appendix A, we provide all the missing proofs. In particular: In Section A.1, we provide the proof of Algorithm 1, under Assumption 1. In Section A.2, we provide the proof of Algorithm, 1, with some modified assumptions, including that fi is convex for all i [m]. In Appendix B, we provide additional experiments, as well as miscellaneous plots that were missing in the main text due to the space constraints. Specifically: In Section B.1, we provide some perspectives on the ease of tuning of -SGD in Algorithm 1. In Section B.2, we provide some perspectives on how -SGD performs using different number of local iterations. In Section B.3, we provide additional experimental results where each client has different number of samples as local dataset. In Section B.4, we provide additional experimental results using Fed Adam (Reddi et al., 2021) server-side adaptive method. In Section B.5, we provide additional experimental results using MOON (Li et al., 2021) loss function. In Section B.6, we provide additional experiemtns using Fed Prox (Li et al., 2020) loss function. In Section B.7, we perform three independent trials for a subset of tasks for all client-optimizers. We plot the average and the standard deviations in Figure 6, and report those values in Table 7. In Section B.8, we visualize the step size conditions for -SGD. In Section B.9, we provide illustration of the different level of heterogeneity induced by the Dirichlet concentration parameter α. A MISSING PROOFS We first introduce the following two inequalities, which will be used in the subsequent proofs: 2 xj 2, and (7) for any set of m vectors {xi}m i=1 with xi Rd. We also provide the following lemma, which establishes L-smoothness based on the local smoothness. Lemma 2. For locally smooth function fi, the following inequality holds: fi(y) fi(x) + fi(x), y x + Li,t 2 y x 2 x, y Si,t, (9) where Si,t := conv({xi t,k}K k=1 {xt, xt+1}). Proof. First, notice that by Assumptions (1a) and (1b), stochastic gradients are also bounded for all x Rd. Therefore, the local iterates, i.e., {xi t,k}K k=0 remain bounded. Hence, the set Si,t := conv({xi t,k}K k=1 {xt, xt+1}), where conv( ) denotes the convex hull, is bounded. Due to the fact Published as a conference paper at ICLR 2024 that fi is locally smooth, f is Lipschitz continuous on bounded sets. This implies there exists a positive constant Li,t such that, fi(x) fi(y) Li,t y x x, y Si,t. 
(10) Thus, we have fi(y) fi(x), y x fi(y) fi(x) y x Li,t y x 2, x, y Si,t. Let g(τ) = fi(x + τ(y x)). Then, we have g (τ) = fi(x + τ(y x)), y x g (τ) g (0) = fi(x + τ(y x)) fi(x), y x fi(x + τ(y x)) fi(x) y x τ Li,t y x 2. Using the mean value theorem, we have fi(y) = g(1) = g(0) + Z 1 0 [g (0) + τ Li,t y x 2]dτ g(0) + g (0) + Li,t fi(x) + fi(x), y x + Li,t Now, we define Li := maxt Li,t. Given this definition, we relate the local-smoothness constant L of the average function f with (individual) local-smoothness constants Li in the below lemma. Lemma 3. The average of Li-local smooth functions fi are at least ( 1 m Pm i=1 Li)-local smooth. Proof. Starting from (10), we have: i=1 fi(x) 1 i=1 fi(y) = 1 fi(x) fi(y) fi(x) fi(y) i=1 Li x y (11) L x y , where L = max i,t Li,t. (12) Technically, (11) can provide a slightly tighter bound for Theorem 1, but (12) simplifies the final expression. We also have the following form: f(xt+1) f(xt) + f(xt), xt+1 xt + L 2 xt+1 xt 2, which can be obtained from Lemma 2. Published as a conference paper at ICLR 2024 A.1 PROOF OF ALGORITHM 1 UNDER ASSUMPTION 1 Proof. Based on Algorithm 1, we use the following notations throughout the proof: xi t,0 = xt, xi t,k = xi t,k 1 ηi t,k fi(xi t,k 1), k [K], xt+1 = 1 |St| i St xi t,k, where we denoted with xt+1 as the server parameter at round t + 1, which is the average of the local parameters xi t,k for all client i [m] after running k local iterations for k [K]. We also recall that we use fi(x) = 1 |B| P z B Fi(x, z) to denote the stochastic gradients with batch size |B| = b. Note that whenever we use this shorthand notation in the theory, we imply that we draw a new independent batch of samples to compute the stochastic gradients. We start from the smoothness of f( ), which we note again is obtained via Lemma 3, i.e., as a byproduct of our analysis, and we do not assume that fi s are globally L-smooth. Also note that Lemma 3 holds for {x0, x1, . . . }, i.e., the average of the local parameters visited by Algorithm 1. Proceeding, we have f(xt+1) f(xt) D f(xt), 1 k=0 ηi t,k fi(xi t,k) E + L 2 k=0 ηi t,k fi(xi t,k) 2 . Taking expectation from the expression above, where we overload the notation such that the expectation is both with respect to the client sampling randomness (indexed with i) as well as the data sampling randomness (indexed with z), that is E( ) := EFt [Ei [Ez[ | Ft, i]|Ft]] , where Ft is the natural filtration, we have: f(xt) E D f(xt), 1 k=0 ηi t,k fi(xi t,k) E + L 2 E 1 k=0 ηi t,k fi(xi t,k) 2 (13) f(xt) E D f(xt), 1 k=0 ηi t,k fi(xi t,k) E + L 2m k=0 ηi t,k fi(xi t,k) 2 (14) = f(xt) Dt f(xt) 2 Dt E D f(xt), 1 fi(xi t,k) f(xt) E k=0 ηi t,k fi(xi t,k) 2 , (15) where in the equality, we added and subtracted 1 m Pm i=1 PK 1 k=0 ηi t,k f(xt) in the inner product term, and also replaced Dt := 1 m Pm i=1 PK 1 k=0 ηi t,k. Then, using (7) and (13)-(15), we have Ef(xt+1) f(xt) Dt f(xt) 2 + Dt 2 f(xt) 2 (16) fi(xi t,k) f(xt) 2 + L 2m k=0 ηi t,k fi(xi t,k) 2 2 f(xt) 2 + 1 2Dt E 1 k=0 ηi t,k fi(xi t,k) f(xt) 2 k=0 ηi t,k fi(xi t,k) 2 (17) Published as a conference paper at ICLR 2024 + 1 2Dt E 1 k=0 ηi t,k fi(xt) fi(xi t,k) + fi(xt) f(xt) 2 k=0 ηi t,k fi(xi t,k) fi(xi t,k) + fi(xi t,k) fi(xt) + fi(xt) f(xt) + f(xt) 2 (18) We now expand the terms in (18) and bound each term separately, as follows. 
Ef(xt+1) f(xt) Dt k=0 ηi t,k fi(xt) fi(xi t,k) 2 k=0 ηi t,k fi(xt) f(xt) 2 k=0 ηi t,k fi(xi t,k) fi(xi t,k) k=0 ηi t,k fi(xi t,k) fi(xt) k=0 ηi t,k fi(xt) f(xt) k=0 ηi t,k f(xt) Now, we provide an upper bound for each term in (19). Starting with A1, using (7), (8), and (12), we have k=0 (ηi t,k)2 E fi(xt) fi(xi t,k) 2 (20) k=0 (ηi t,k)2 E xt xi t,k 2 k=0 (ηi t,k)2 E ℓ=0 ηi t,l fi(xi t,ℓ) 2 k=0 (ηi t,k)2 E ℓ=0 ηi t,l fi(xi t,ℓ) fi(xi t,ℓ) + fi(xi t,ℓ) 2 Published as a conference paper at ICLR 2024 k=0 (ηi t,k)2h E ℓ=0 ηi t,l fi(xi t,ℓ) fi(xi t,ℓ) 2 + E ℓ=0 ηi t,l fi(xi t,ℓ) 2i k=0 k(ηi t,k)2 k 1 X ℓ=0 (ηi t,l)2h E fi(xi t,ℓ) fi(xi t,ℓ) 2 + E fi(xi t,ℓ) 2i k=0 k(ηi t,k)2 k 1 X ℓ=0 (ηi t,l)2 σ2 k=0 (ηi t,k)2 2 . (21) For A2, using (7) and (8), we have: k=0 (ηi t,k)2 E fi(xt) f(xt) 2 f(xt) 2 m X k=0 (ηi t,k)2. (22) For A3, using (7), (8), and (12), we have: k=0 (ηi t,k)2 E fi(xi t,k) fi(xi t,k) 2 k=0 (ηi t,k)2. (23) For A4, using (7), (8), and (12), we have: k=0 (ηi t,k)2 E fi(xt) fi(xi t,k) 2 (24) k=0 (ηi t,k)2 E xt xi t,k 2 k=0 (ηi t,k)2 E ℓ=0 ηi t,l fi(xi t,ℓ) 2 k=0 (ηi t,k)2 E ℓ=0 ηi t,l fi(xi t,ℓ) fi(xi t,ℓ) + fi(xi t,ℓ) 2 k=0 (ηi t,k)2h E ℓ=0 ηi t,l fi(xi t,ℓ) fi(xi t,ℓ) 2 + E ℓ=0 ηi t,l fi(xi t,ℓ) 2i k=0 k(ηi t,k)2 k 1 X ℓ=0 (ηi t,l)2h E fi(xi t,ℓ) fi(xi t,ℓ) 2 + E fi(xi t,ℓ) 2i k=0 k(ηi t,k)2 k 1 X ℓ=0 (ηi t,l)2 σ2 k=0 (ηi t,k)2 2 . (25) Published as a conference paper at ICLR 2024 For A5, using (7), (8), and (12), we have: k=0 (ηi t,k)2 E fi(xt) f(xt) 2 f(xt) 2 m X k=0 (ηi t,k)2. (26) For A6, using (7), (8), and (12), we have: f(xt) 2 m X k=0 (ηi t,k)2. (27) Putting all bounds together. Now, by putting (20) (27) back into (19) and rearranging the terms, we obtain the following inequality: k=0 (ηi t,k)2 2K L(1 + ρ) k=0 (ηi t,k)2 f(xt) 2 f(xt) f(xt+1) + 2K2 L2 σ2 k=0 (ηi t,k)2 2 k=0 (ηi t,k)2 + 4K2 L3 σ2 k=0 (ηi t,k)2 2 . (28) Now, it remains to notice that, by definition, ηi t,k = O(γ), for all i [m], k {0, 1, . . . K 1}, and t 0. Therefore, Dt = O(γK). Thus, by dividing the above inequality by Dt 2 , we have: k=0 (ηi t,k)2 | {z } O(1) 4K L(1 + ρ) k=0 (ηi t,k)2 | {z } O(γK) 2(f(xt) f(xt+1)) k=0 (ηi t,k)2 | {z } O(γK) + 4K2 L2 σ2 k=0 (ηi t,k)2 2 | {z } O(γ2K2) + 8K2 L3 σ2 k=0 (ηi t,k)2 2 | {z } O(γ3K3) for ρ = O(1). For the simplicity of exposition, assuming ρ 1 4 and T (5/ρ)2 = 400, we obtain the below by averaging (29) for t = 0, 1, . . . , T 1: t=0 f(xt) 2 O f(x0) f b + G2 γ2K2 b + G2 γ3K3 . (30) Published as a conference paper at ICLR 2024 Finally, by choosing γ = O 1 K , and defining Ψ1 = max n σ2 b , f(x0) f(x ) o and Ψ2 = σ2 b + G2 with b = |B| being the batch size, we arrive at the statement in Theorem 1, i.e., t=0 E f(xt) 2 O Ψ1 A.2 PROOF OF ALGORITHM 1 FOR CONVEX CASE Here, we provide an extension of the proof of Malitsky & Mishchenko (2020, Theorem 1) for the distributed version. Note that the proof we provide here is not exactly the same version of -SGD presented in Algorithm 1; in particular, we assume no local iterations, i.e., k = 1 for all i [m], and every client participate, i.e, p = 1, without stochastic gradients. We start by constructing a Lyapunov function for the distributed objective in (1). Lemma 4. Let fi : Rd R be convex with locally Lipschitz gradients. Then, the sequence {xt} generated by Algorithm 1, assuming k = 1 and p = 1 with full batch, satisfy the following: xt+1 x 2 + 1 i=1 xi t+1 xi t 2 + 2 h ηi t(1 + θi t) fi(xi t) fi(x ) i i=1 xi t xi t 1 2 + 2 h ηi tθi t fi(xi t 1) fi(x ) i . 
(31) The above lemma, which we prove below, constructs a contracting Lyapunov function, which can be seen from the second condition on the step size ηi t : 1 + θi tηi t = ηi t+1θi t+1 (1 + θi t)ηi t. Proof. We denote xt = 1 m Pm i=1 xi t. xt+1 x 2 = xt 1 i=1 ηi t fi(xi t) x (32) = xt x 2 + 1 i=1 ηi t fi(xi t) 2 2 D xt x , 1 i=1 ηi t fi(xi t) E (33) We will first bound the second term in (33). Using Pm i=1 ai 2 m Pm i=1 ai 2, we have i=1 ηi t fi(xi t) 2 = 1 i=1 (xi t xi t+1) 2 1 i=1 xi t xi t+1 2. (34) To bound xi t xi t+1 2, observe that xi t xi t+1 2 = 2 xi t xi t+1 2 xi t xi t+1 2 = 2 ηi t fi(xi t), xi t xi t+1 xi t xi t+1 2 = 2ηi t fi(xi t) fi(xi t 1), xi t+1 xi t xi t xi t+1 2 + 2ηi t fi(xi t 1), xi t+1 xi t (35) For the first term in (35), using Cauchy-Schwarz and Young s inequalities as well as ηi t xi t xi t 1 2 fi(xi t) fi(xi t 1) , we have: 2ηi t fi(xi t) fi(xi t 1), xi t+1 xi t 2ηi t fi(xi t) fi(xi t 1) xi t+1 xi t Published as a conference paper at ICLR 2024 xi t xi t 1 xi t+1 xi t 2 xi t xi t 1 2 + 1 2 xi t+1 xi t 2. For the third term in (35), by convexity of fi(x), we have 2ηi t fi(xi t 1), xi t+1 xi t = 2 ηi t ηi t 1 xi t 1 xi t, xi t+1 xi t = 2θi tηi t xi t 1 xi t, fi(xi t) 2θi tηi t[fi(xi t 1) fi(xi t)]. Together, we have i=1 xi t xi t+1 2 1 n 1 2 xi t xi t 1 2 1 2 xi t+1 xi t 2 + 2θi tηi t[fi(xi t 1) fi(xi t)] o . (36) Now to bound the last term in (33), we have 2 D xt x , 1 i=1 ηi t fi(xi t) E = 2 D x xi t, fi(xi t) E i=1 ηi t fi(x ) fi(xi t) , (37) where in the equality we used the fact that xi t = xt, for k = 1. Putting (36) and (37) back to (33), we have xt+1 x 2 = xt x 2 + 1 i=1 ηi t fi(xi t) 2 2 D xt x , 1 i=1 ηi t fi(xi t) E n 1 2 xi t xi t 1 2 1 2 xi t+1 xi t 2 + 2θi tηi t[fi(xi t 1) f(xi t)] o 2 D xt xi t, 1 i=1 ηi t fi(xi t) E + 2 1 i=1 ηi t fi(x ) fi(xi t) . (38) Averaging (38) for i = 1, . . . , m, notice that the first term in (38) disappears, as ˆxt = 1 m Pm i=1 xi. xt+1 x 2 + 1 h 2ηi t(1 + θi t) fi(xi t) fi(x ) + 1 2 xi t+1 xi t 2i h 2ηi tθi t fi(xi t 1) fi(x ) + 1 2 xi t xi t 1 2i . Theorem 5. Let fi : Rd R be convex with locally Lipschitz gradients. Then, the sequence {xt} generated by Algorithm 1, assuming k = 1 and p = 1 with full batch, satisfy the following: fi( xi t) fi(x ) 2DˆL2 2ˆLt + 1 , (39) xi t = ηi t(1 + θi t)xi t + Pt 1 k=1 αi kxi k Si t , αi k = ηi k(1 + θi k) ηi k+1θi k+1 Si t = ηi t(1 + θi t) + k=1 ηi k + ηi 1θi 1, and ˆL is a constant. Further, if x is any minimum of fi(x) for all i [m], (39) implies convergence for problem (1). Published as a conference paper at ICLR 2024 Proof. We start by telescoping (31). Then, we arrive at xt+1 x 2 + 1 i=1 xi t+1 xi t 2 + 2 h ηi t(1 + θi t) fi(xi t) fi(x ) ηi k(1 + θi k) ηi k+1θi k+1 fi(xi k) fi(x ) i (40) i=1 xi 1 xi 0 2 + 2 i=1 ηi 1θi 1 fi(xi 0) fi(x ) := D. (41) Observe that (40) is nonnegative by the definition of ηi t, implying the iterates {xi t} are bounded. Now, define the set S := conv({xi 0, xi 1, . . . }), which is bounded as the convex hull of bounded points. Therefore, fi is Lipschitz continuous on S. That is, there exist ˆLi such that fi(x) fi(y) ˆLi x y , x, y S. Also note that (40) is of the form αi t[fi(xi t) fi(x )] + Pt 1 k=1 αi k[fi(xi k) fi(x )] = Pt k=1 αi k[fi(xi k) fi(x )]. The sum of the coefficients equals: k=1 αi k = ηi t(1 + θi t) + k=1 [ηi k(1 + θi k) ηi k+1θi k+1] = ηi 1θi 1 + k=1 ηi k := Si t. 
B ADDITIONAL EXPERIMENTS AND PLOTS

B.1 EASE OF TUNING

In this subsection, we show the effect of using different values of the parameter δ in the step size of -SGD. As mentioned in Section 4, for -SGD, we introduce δ in the second condition, so that the step size becomes

\eta^i_{t,k} = \min\Big\{ \frac{\gamma\,\|x^i_{t,k} - x^i_{t,k-1}\|}{2\,\|\nabla f_i(x^i_{t,k}) - \nabla f_i(x^i_{t,k-1})\|},\ \sqrt{1 + \delta\,\theta^i_{t,k-1}}\;\eta^i_{t,k-1} \Big\}.

The experimental results presented in Table 1 were obtained using δ = 0.1. For γ, we remind the reader that we did not change it from the default value γ = 2 used in the original implementation at https://github.com/ymalitsky/adaptive_GD/blob/master/pytorch/optimizer.py.

In Figure 4, we display the final accuracy achieved when using different values of δ from the set {0.01, 0.1, 1}. Interestingly, we observe that δ has very little impact on the final accuracy. This suggests that -SGD is remarkably robust and does not require much tuning, while consistently achieving higher test accuracy than the other methods in the majority of cases, as shown in Table 1.

Figure 4: Effect of using different δ in the second condition of the step size of -SGD.

B.2 EFFECT OF DIFFERENT NUMBER OF LOCAL ITERATIONS

In this subsection, we show the effect of different numbers of local epochs, fixing the experimental setup to CIFAR-100 classification trained with a ResNet-50. We show all three cases of the Dirichlet concentration parameter α ∈ {0.01, 0.1, 1}. The results are shown in Figure 5. As mentioned in the main text, instead of performing a fixed number of local iterations, we perform E local epochs, similarly to (Reddi et al., 2021; Li et al., 2020). All the results in Table 1 use E = 1, and in Figure 5 we show the results for E ∈ {1, 2, 3}. Note that while these values of E differ only slightly, in terms of actual local gradient steps they differ significantly. As can be seen, using a higher E leads to slightly faster convergence for all values of α. However, the speed-up is marginal compared to the extra computation it requires (e.g., E = 2 takes roughly twice the total wall-clock time of E = 1). Put differently, -SGD performs well with only a single local epoch per client (E = 1), and still achieves strong performance.

Figure 5: Effect of the different number of local epochs.
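To make the client-side procedure of B.1 and B.2 concrete, below is a minimal sketch of one client's local update (ours, for illustration; the function names, the gradient oracle stoch_grad, and the data layout are assumptions, not the paper's reference implementation). It runs E local epochs over the client's minibatches and updates the step size with the two conditions above, including the δ factor.

import numpy as np

# Minimal sketch of one client's local update with the -SGD step size rule from B.1,
# run for E local epochs over minibatches (B.2). `stoch_grad(x, batch)` is an assumed
# oracle returning a minibatch gradient at x; `x` is a flat numpy parameter vector.
def local_update(x, stoch_grad, batches, E=1, gamma=2.0, delta=0.1, eta0=1e-6):
    # tiny warm-up step so that x_prev != x and the smoothness estimate is well defined
    g_prev = stoch_grad(x, batches[0])
    x_prev, x = x, x - eta0 * g_prev
    eta, theta = eta0, 0.0
    for _ in range(E):                      # E local epochs
        for batch in batches:               # one pass over the local data
            g = stoch_grad(x, batch)
            denom = 2.0 * np.linalg.norm(g - g_prev)
            cond1 = gamma * np.linalg.norm(x - x_prev) / denom if denom > 0 else np.inf
            cond2 = np.sqrt(1.0 + delta * theta) * eta
            eta_new = min(cond1, cond2)     # the two conditions from B.1
            theta, eta = eta_new / eta, eta_new
            x_prev, g_prev = x, g
            x = x - eta * g
    return x

In the full algorithm, the updated parameters returned by each client would then be sent back to the server for aggregation.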
B.3 ADDITIONAL EXPERIMENTS USING DIFFERENT NUMBERS OF LOCAL DATA PER CLIENT

In this subsection, we provide additional experimental results where the size of the local dataset differs across clients. In particular, instead of distributing 500 samples to each client as done in Section 4 of the main text, we distribute n_i ∈ [100, 500] samples to each client i, where n_i, the number of samples client i has, is randomly sampled from the range [100, 500]. We keep all other settings the same. Then, in the server aggregation step (line 13 of Algorithm 1), we compute the weighted average of the local parameters instead of the plain average, as suggested in (McMahan et al., 2017) and used, for instance, in (Reddi et al., 2021).

We experiment in two settings: CIFAR-10 classification trained with a ResNet-18 (He et al., 2016) with Dirichlet concentration parameter α = 0.1, which is the setting in which the algorithms were fine-tuned using grid search (please see the details in Section 4), and CIFAR-100 classification trained with a ResNet-50 with the same α. The results can be found in Table 3.

Non-iidness: Dir(α · p)

Optimizer    CIFAR-10 / ResNet-18    CIFAR-100 / ResNet-50
SGD          80.7 (0.0)              53.5 (4.0)
SGD (↓)      78.8 (1.9)              53.6 (3.9)
SGDM         75.0 (5.7)              53.9 (3.6)
SGDM (↓)     66.6 (14.1)             53.1 (4.4)
Adam         79.9 (0.8)              51.1 (6.4)
Adagrad      79.3 (1.4)              44.5 (13.0)
SPS          64.4 (16.3)             37.2 (20.3)
-SGD         80.4 (0.3)              57.5 (0.0)

Table 3: Experimental results using different numbers of local data per client. Performance differences within 0.5% of the best result for each task are shown in bold. The subscript (x.x) is the performance difference from the best result, and is highlighted in pink when the difference is bigger than 2%. The symbol (↓) appended to SGD and SGDM indicates step-wise learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of the total rounds.

Similarly to the case with the same number of data per client, i.e., n_i = 500 for all i ∈ [m], we see that -SGD achieves good performance without any additional tuning. In particular, for CIFAR-10 classification trained with a ResNet-18, -SGD achieves the second-best performance after SGD (without LR decay), with a small margin. Interestingly, SGDM (both with and without LR decay) performs much worse than SGD. This contrasts with the same setting in Table 1, where SGDM performed better than SGD, which again complicates how one should tune client optimizers in the FL setting. For CIFAR-100 classification trained with a ResNet-50, -SGD outperforms all other methods by a large margin.
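For concreteness, the weighted aggregation used here can be sketched as follows. This is a minimal illustration in the spirit of McMahan et al. (2017); the function and variable names are ours, not the paper's implementation.

import numpy as np

# Minimal sketch of sample-size-weighted server aggregation (FedAvg-style),
# as opposed to a plain average of client parameters. `client_params` is a list
# of parameter vectors returned by the participating clients and `n_samples`
# the corresponding local dataset sizes n_i.
def weighted_aggregate(client_params, n_samples):
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()                                   # weights proportional to n_i
    return sum(wi * p for wi, p in zip(w, np.asarray(client_params)))

# Example: three clients with different amounts of local data.
params = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(weighted_aggregate(params, n_samples=[100, 300, 500]))   # pulled toward client 3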
B.4 ADDITIONAL EXPERIMENTS USING FEDADAM

As mentioned in Section 2 of the main text, since our work focuses on client adaptivity, our method can seamlessly be combined with server-side adaptivity such as FedAdam (Reddi et al., 2021). Here, we provide additional experiments using the FedAdam server optimizer. A notable downside of an adaptive server-side approach like FedAdam is that it adds at least one more hyperparameter to tune: the step size of the global optimizer, on top of the two momentum parameters (β1 and β2) in the case of Adam. This need to tune additional parameters is quite orthogonal to the approach we take, where we try to remove the need for step size tuning altogether. Indeed, as can be seen in Reddi et al. (2021, Appendix D.2), 9 different server-side global optimizer step sizes are grid-searched, and Reddi et al. (2021, Table 8) shows that the best-performing server learning rate differs for each task.

That being said, we simply used the default step size and momentum values of Adam (from PyTorch) to showcase that our method can be seamlessly combined with FedAdam. Due to time constraints, we tested up to CIFAR-100 trained with a ResNet-18, so that all datasets are covered. The results can be found in Table 4. For three out of four cases, -SGD equipped with the Adam server-side optimizer achieves the best accuracy by a large margin. For the last case, CIFAR-100 trained with a ResNet-18, SGD with a decaying step size achieves noticeably better accuracy than -SGD; however, we refer the reader to Table 1, where for the same setting -SGD equipped with simple averaging achieves 61.1% accuracy without any tuning.

Additional experiments using FedAdam (Reddi et al., 2021)

Non-iidness: Dir(α · p)

Optimizer    MNIST / CNN    FMNIST / CNN    CIFAR-10 / ResNet-18    CIFAR-100 / ResNet-18
SGD          97.3 (0.6)     83.7 (2.1)      52.0 (11.8)             46.7 (2.5)
SGD (↓)      96.4 (1.4)     80.9 (4.9)      49.1 (14.7)             49.2 (0.0)
SGDM         97.5 (0.4)     84.6 (1.2)      53.7 (10.1)             13.3 (35.9)
SGDM (↓)     96.4 (1.5)     81.8 (4.0)      53.3 (10.5)             16.8 (32.4)
Adam         96.4 (1.5)     81.5 (4.3)      27.8 (36.0)             38.3 (10.9)
Adagrad      95.7 (2.2)     82.1 (3.7)      10.4 (53.4)             1.0 (48.2)
SPS          96.6 (1.3)     85.0 (0.8)      21.6 (42.2)             1.6 (47.6)
-SGD         97.9 (0.0)     85.8 (0.0)      63.8 (0.0)              41.9 (7.3)

Table 4: Additional experiments using FedAdam (Reddi et al., 2021).

B.5 ADDITIONAL EXPERIMENTS USING MOON

Here, we provide additional experimental results using MOON (Li et al., 2021), which utilizes model-contrastive learning to handle heterogeneous data. In this sense, the goal of MOON is very similar to that of FedProx (Li et al., 2020); yet, due to its model-contrastive nature, it can incur a bigger memory overhead than FedProx, as one needs to keep track of the previous model(s) to compute the representation of each local batch under the local model from the last round as well as under the global model (a sketch of this contrastive term is given after Table 5). As such, we only ran MOON for CIFAR-10 classification using a ResNet-18; for CIFAR-100 classification using a ResNet-50, our computing resources ran out of memory for the same configuration we ran in the main text.

The results are summarized in Table 5. As can be seen, even ignoring the fact that MOON utilizes more memory, MOON does not help for most of the optimizers: it often results in worse final accuracies than the corresponding results in Table 1 of the main text.

Non-iidness: Dir(α · p)

Optimizer    CIFAR-10 / ResNet-18
SGD          78.2 (4.9)
SGD (↓)      74.2 (8.9)
SGDM         76.4 (6.7)
SGDM (↓)     75.5 (14.1)
Adam         82.4 (0.6)
Adagrad      81.3 (1.8)
SPS          9.57 (73.5)
-SGD         83.1 (0.0)

Table 5: Experimental results using the MOON loss function. Performance differences within 0.5% of the best result for each task are shown in bold. The subscript (x.x) is the performance difference from the best result, and is highlighted in pink when the difference is bigger than 2%. The symbol (↓) appended to SGD and SGDM indicates step-wise learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of the total rounds.
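The contrastive term referenced above can be sketched schematically as follows (our simplification of the loss described in Li et al. (2021); projection heads, the exact layer used for the representation, and all names below are assumptions, not the authors' reference code):

import torch
import torch.nn.functional as F

# Schematic sketch of MOON's model-contrastive term, illustrating why both the previous
# local model and the global model must be kept in memory. `z_local`, `z_global`, and
# `z_prev` are representations of the same minibatch under the current local, the global,
# and the previous-round local models; `tau` is a temperature and `mu` the term's weight.
def moon_loss(logits, labels, z_local, z_global, z_prev, mu=1.0, tau=0.5):
    sim_glob = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    sim_prev = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    # pull the local representation toward the global model, push it away from the stale local model
    con = -torch.log(torch.exp(sim_glob) / (torch.exp(sim_glob) + torch.exp(sim_prev)))
    return F.cross_entropy(logits, labels) + mu * con.mean()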
B.6 ADDITIONAL EXPERIMENTS USING FEDPROX

Here, we provide the complete results of Table 2b, conducting the experiment using the FedProx loss function for the most heterogeneous case (α = 0.01), for all datasets and architectures considered. The results can be found in Table 6, where -SGD achieves the best accuracy in three out of five cases (FMNIST trained with a CNN, CIFAR-10 trained with a ResNet-18, and CIFAR-100 trained with a ResNet-50); in the remaining two cases (MNIST trained with a CNN and CIFAR-100 trained with a ResNet-18), -SGD achieves the second-best accuracy, with a close margin to the best.

Additional experiments using FedProx loss function (Li et al., 2020)

Non-iidness: Dir(α · p)

Optimizer    MNIST / CNN    FMNIST / CNN    CIFAR-10 / ResNet-18    CIFAR-100 / ResNet-18    CIFAR-100 / ResNet-50
SGD          95.7 (1.4)     79.0 (1.6)      20.0 (13.8)             30.3 (0.0)               25.2 (5.9)
SGD (↓)      97.2 (0.0)     79.3 (1.3)      31.3 (2.5)              29.5 (0.8)               20.2 (10.8)
SGDM         73.8 (23.4)    72.8 (7.8)      29.3 (4.4)              22.9 (7.5)               23.8 (7.2)
SGDM (↓)     81.2 (16.0)    78.0 (2.6)      25.3 (8.5)              19.8 (10.5)              15.0 (16.0)
Adam         82.3 (14.8)    65.6 (15.0)     28.1 (5.7)              24.8 (5.6)               22.6 (8.4)
Adagrad      76.3 (20.9)    51.2 (29.4)     19.3 (14.5)             12.9 (17.4)              4.1 (26.9)
SPS          85.5 (11.7)    62.1 (18.45)    27.6 (6.2)              21.3 (9.0)               16.5 (14.5)
-SGD         96.9 (0.26)    80.6 (0.0)      33.8 (0.0)              29.7 (0.6)               31.0 (0.0)

Table 6: Additional experiments using the FedProx loss function.
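For reference, the FedProx modification amounts to adding a proximal term to each client's local objective. Below is a minimal sketch (ours; the placeholder names and the default value of μ are assumptions, not the paper's or the FedProx authors' code):

import torch

# Schematic sketch of the FedProx local objective (Li et al., 2020): the usual local
# loss plus a proximal term (mu / 2) * ||w - w_global||^2 that discourages the local
# model from drifting far from the current global model during local training.
def fedprox_loss(local_loss, model, global_params, mu=0.01):
    prox = sum(((w - w0.detach()) ** 2).sum()
               for w, w0 in zip(model.parameters(), global_params))
    return local_loss + 0.5 * mu * prox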
B.7 ADDITIONAL PLOTS WITH STANDARD DEVIATION (3 RANDOM SEEDS)

In this section, we repeat two tasks for three independent trials (using three random seeds): CIFAR-10 classification trained with a ResNet-18, and FMNIST classification trained with a shallow CNN, both with Dirichlet α = 0.1. The aim is to study the robustness of -SGD in comparison with the other client optimizers. We remind the reader that CIFAR-10 classification trained with a ResNet-18 with Dirichlet α = 0.1 is the setting in which we performed a grid search for each client optimizer, so each uses the best step size that we tried. Then, for FMNIST classification, the same step sizes from the previous task are used. The results are shown below.

We see that -SGD not only achieves the best average test accuracy, but also a very small standard deviation. Critically, none of the other client optimizers achieves good performance in both settings.

Figure 6: Additional plots for CIFAR-10 classification trained with a ResNet-18, FMNIST classification trained with a CNN, and CIFAR-100 classification trained with a ResNet-50, repeated for three independent trials. The average is plotted, with the shaded area indicating the standard deviation.

Non-iidness: Dir(α · p)

Optimizer    CIFAR-10 / ResNet-18         FMNIST / CNN                 CIFAR-100 / ResNet-50
             Average        Std. Dev.     Average        Std. Dev.     Average        Std. Dev.
SGD          73.96 (9.93)   0.0263        83.43 (1.78)   0.0178        49.44 (14.58)  0.0376
SGD (↓)      67.79 (16.10)  0.1550        77.83 (7.38)   0.0815        40.26 (23.76)  0.1687
SGDM         77.59 (6.30)   0.0142        83.27 (1.94)   0.0186        53.80 (10.22)  0.0149
SGDM (↓)     72.11 (11.78)  0.0611        81.36 (3.85)   0.0606        46.77 (17.25)  0.0762
Adam         83.24 (0.65)   0.0126        79.98 (5.23)   0.0228        58.89 (5.13)   0.0098
Adagrad      83.86 (0.03)   0.0048        53.06 (32.15)  0.0487        49.93 (14.09)  0.0177
SPS          68.38 (15.51)  0.0192        84.59 (0.62)   0.0154        43.16 (20.86)  0.0072
-SGD         83.89 (0.00)   0.0052        85.21 (0.00)   0.0152        64.02 (0.00)   0.0051

Table 7: CIFAR-10 classification trained with a ResNet-18, FMNIST classification trained with a CNN, and CIFAR-100 classification trained with a ResNet-50, repeated for three independent trials. The average over the three trials and the standard deviation for each client optimizer are reported.

B.8 STEP SIZE PLOT FOR -SGD

Here, we visualize the step size conditions for -SGD to see what the proposed step size looks like in practice. We only plot the first 300 epochs, as the plot otherwise becomes quite cluttered. From Figure 7, we can see that both conditions for \eta^i_{t,k} are indeed necessary. The first condition, plotted in green, approximates the local smoothness of f_i, but can be quite oscillatory. The second condition, plotted in blue, effectively restricts the first condition from taking values that are too large.

Figure 7: Step size conditions for -SGD. The first condition is plotted in green and the second in blue; the actual step size \eta^i_{t,k}, plotted in red, takes the minimum of the two conditions.

B.9 ILLUSTRATION OF THE DEGREE OF HETEROGENEITY

As mentioned in the main text, the level of non-iidness is controlled using latent Dirichlet allocation (LDA) applied to the labels. This approach is based on (Hsu et al., 2019) and involves assigning each client a multinomial distribution over the labels, which determines the labels from which that client's training examples are drawn. Specifically, the local training examples for each client are sampled from a categorical distribution over N classes, parameterized by a vector q. The vector q is drawn from a Dirichlet distribution with concentration parameter α times a prior distribution vector p over the N classes. To vary the level of non-iidness, different values of α are considered: 0.01, 0.1, and 1. A larger α corresponds to settings closer to the i.i.d. scenario, while smaller values introduce more diversity and non-iidness among the clients, as visualized in Figure 8. As can be seen, for α = 1, each client (each row on the y-axis) has fairly diverse classes, similarly to the i.i.d. case; as we decrease α, the number of classes that each client has access to decreases. With α = 0.01, there are many clients who only have access to a single class or a couple of classes.

Figure 8: Illustration of the degree of heterogeneity induced by using different concentration parameters α for the Dirichlet distribution, for the CIFAR-10 dataset (10 colors) and 100 clients (100 rows on the y-axis).
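A minimal sketch of this Dirichlet (LDA-style) partition, in the spirit of Hsu et al. (2019), is given below. The function and variable names, as well as the sampling-with-replacement simplification, are ours for illustration and not the paper's data pipeline.

import numpy as np

# For each client, draw a class distribution q ~ Dir(alpha * p) and sample its
# local examples accordingly; smaller alpha concentrates each client on fewer classes.
def dirichlet_partition(labels, n_clients=100, alpha=0.1, n_local=500, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    prior = np.full(len(classes), 1.0 / len(classes))         # uniform prior p
    idx_by_class = {c: np.flatnonzero(labels == c) for c in classes}
    client_indices = []
    for _ in range(n_clients):
        q = rng.dirichlet(alpha * prior)                       # client's class distribution
        counts = rng.multinomial(n_local, q)                   # samples per class
        idx = np.concatenate([rng.choice(idx_by_class[c], size=k, replace=True)
                              for c, k in zip(classes, counts) if k > 0])
        client_indices.append(rng.permutation(idx))
    return client_indices

# Example: with alpha = 0.01, most clients end up with only one or two classes.
toy_labels = np.repeat(np.arange(10), 1000)
parts = dirichlet_partition(toy_labels, n_clients=5, alpha=0.01, n_local=500)
print([np.unique(toy_labels[p]).size for p in parts])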