# Maestro: Uncovering Low-Rank Structures via Trainable Decomposition

Samuel Horváth (1), Stefanos Laskaridis (2), Shashank Rajput (3), Hongyi Wang (4)

Abstract

Deep Neural Networks (DNNs) have been a large driver for AI breakthroughs in recent years. However, these models have been getting increasingly large as they become more accurate and safe. This means that their training becomes increasingly costly and time-consuming and typically yields a single model to fit all targets. Various techniques have been proposed in the literature to mitigate this, including pruning, sparsification, or quantization of model weights and updates. While achieving high compression rates, they often incur significant computational overheads at training or lead to non-negligible accuracy penalties. Alternatively, factorization methods have been leveraged for low-rank compression of DNNs. However, such techniques (e.g., SVD) frequently rely on heavy iterative decompositions of layers and are potentially sub-optimal for non-linear models, such as DNNs. We take a further step in designing efficient low-rank models and propose MAESTRO, a framework for trainable low-rank layers. Instead of iteratively applying a priori decompositions, the low-rank structure is baked into the training process through LOD, a low-rank ordered decomposition. Not only is this the first time importance ordering via sampling is applied to the decomposed DNN structure, but it also allows selecting ranks at a per-layer granularity. Our theoretical analysis demonstrates that in special cases LOD recovers the SVD decomposition and PCA. Applied to DNNs, MAESTRO enables the extraction of lower-footprint models that preserve performance. Simultaneously, it enables a graceful accuracy-latency trade-off for deployment to even more constrained devices without retraining.

(1) Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE; (2) Brave Software, London, UK; (3) Databricks, San Francisco, USA; (4) Carnegie Mellon University, Pittsburgh, USA. Correspondence to: Samuel Horváth, Stefanos Laskaridis.

Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

1. Introduction

Deep Learning has been experiencing an unprecedented uptake, with models achieving (super-)human levels of performance in several tasks across modalities, giving birth to ever more intelligent assistants (Radford et al., 2023) and next-gen visual perception and generative systems (Radford et al., 2021). However, the price of this performance is that models are getting significantly larger, with training and deployment becoming increasingly costly (Laskaridis et al., 2024). Therefore, techniques from Efficient ML become ever more relevant (Wan et al., 2023) and a requirement for deployment on constrained devices, such as smartphones or IoT devices (Laskaridis et al., 2022). Typical techniques to compress the network involve i) quantization, i.e., reducing the precision of the model (Wang et al., 2019) or the communicated updates (Seide et al., 2014; Alistarh et al., 2017), ii) pruning the model during training, e.g., through the Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2019), iii) sparsification of the network representation and updates, i.e., dropping a subset of coordinates (Suresh et al., 2017; Alistarh et al., 2018), or iv) low-rank approximation (Wang et al., 2021; Dudziak et al., 2019), i.e.,
keeping the most relevant ranks of the decomposed network. Despite the benefits at deployment time, namely a lower-footprint model, in many cases the overhead during training or the accuracy degradation can be non-negligible. Moreover, many techniques introduce multiple hyperparameters or the need to fine-tune to recover the lost accuracy.

In this work, we focus on trainable low-rank factorization. Specifically, we pinpoint the challenges of existing techniques (Wang et al., 2021; 2023) when decomposing the parameters of each layer in a low-rank space and the need to find the optimal rank for each one at training time. To solve this, we propose LOD (Low-rank Ordered Decomposition), a non-trivially extended version of the Ordered Dropout technique of Horváth et al. (2021), applied to progressively find the optimal decomposition for each layer of a DNN while training (Fig. 1). Critical differences to prior work include i) the non-uniformity of the search space (i.e., we allow for different ranks per layer), ii) the trainable aspect of the decomposition, which reflects the data distribution, and iii) the gains at training and deployment time without sacrificing accuracy. Furthermore, we also provide a latency-accuracy trade-off mechanism to deploy the model on even more constrained devices. Our contributions can be summarized as follows:

- We propose MAESTRO (implementation available at https://github.com/SamuelHorvath/Maestro-LoD), a novel layer decomposition technique that enables learning low-rank layers in a progressive manner while training. We fuse layer factorization and ordered dropout into LOD, an extended variant of ordered dropout, by embedding ordered importance directly into the factorized weights. By decomposing layers and training on stochastically sampled low-rank models, we impose an ordered importance on the decomposed representation of each layer. We combine this with a hierarchical group-lasso term (Yuan & Lin, 2006) in the loss function to zero out redundant ranks and progressively shrink the rank space. This way, we enable computationally efficient training through the proposed decomposition, without relying on inexact and potentially computationally expensive iterative decompositions such as the Singular Value Decomposition (SVD).
- MAESTRO is fundamentally a theoretically motivated approach that embeds decomposition into training. First, we show that our new objective can recover i) the SVD of the target linear mapping for the particular case of a uniform data distribution and ii) the Principal Component Analysis (PCA) of the data in the case of an identity mapping. As MAESTRO's decomposition is part of the training process, it also accounts for the data distribution and the target function, contrary to SVD, which operates directly on the learned weights. We show that this problem already arises for a simple linear model and empirically generalize our results to DNNs by applying our method to different types of layers (including fully-connected, convolutional, and attention) spanning three datasets and modalities.
- We illustrate that our technique achieves better results than SVD-based baselines at a lower cost. Indicatively, MAESTRO achieves on-par or better results than low-rank SOTA methods on vision datasets (CIFAR-10, ImageNet) with lower training overhead due to progressive shrinking, while at the same time it reaches 6% lower perplexity at a quarter of the computational cost and half of the parameters of SVD variants in Transformer models.
2. Related Work

The topic of Efficient ML has received a lot of attention throughout the past decade as networks have been getting increasingly computationally expensive. We distinguish between training and deployment time, with the latter having a more significant impact and thus amortizing the potential overhead incurred during training. Nevertheless, cost optimisation is ever more relevant for training large models, and with the advent of Federated Learning (McMahan et al., 2017), efficient training becomes increasingly important to remain tractable.

Efficient inference. For efficient deployment, various techniques have been proposed that optimize the architecture of the DNN in a hand-crafted (Howard et al., 2017) or automated manner (i.e., NAS) (Tan & Le, 2019), remove redundant computation by pruning parts of the network (Han et al., 2015; Carreira-Perpinán & Idelbayev, 2018; Frankle & Carbin, 2019; Chen et al., 2021; Sreenivasan et al., 2022; Li et al., 2016; Wen et al., 2016; Hu et al., 2016; Zhu & Gupta, 2017; He et al., 2017; Yang et al., 2017; Liu et al., 2018; Yu & Huang, 2019b) in a structured or unstructured manner, or utilise low-precision representations (Wang et al., 2019) of the neurons and activations. However, such techniques may involve non-negligible training overheads or lack the flexibility of a variable footprint upon deployment. Closer to our method, there have been techniques leveraging low-rank approximation (e.g., SVD) for efficient inference (Xue et al., 2013; Sainath et al., 2013; Jaderberg et al., 2014; Wiesler et al., 2014; Dudziak et al., 2019). Last, there is a category of techniques that dynamically resize the network at runtime for compute, memory, or energy efficiency, based on early exiting (Laskaridis et al., 2021) or dynamic width (Yu et al., 2019), and leverage the accuracy-latency trade-off.

Efficient training. On the other hand, techniques for efficient training become very relevant when scaling up DNN sizes (Hu et al., 2021) or deploying to embedded devices (Lin et al., 2022), and oftentimes offer additional gains at deployment time. Towards this goal, methods have been employed where part of the network is masked (Sidahmed et al., 2021) or dropped (Alam et al., 2022; Caldas et al., 2019; Wu et al., 2018) during training, with the goal of minimizing the training footprint. Similarly to early exiting, multi-exit variants for efficient training (Kim et al., 2023; Liu et al., 2022) have been proposed, and the same applies to width-based scaling (Horváth et al., 2021; Diao et al., 2021). Last but not least, in the era of Transformers and LLMs, where networks have scaled exponentially in size, PEFT techniques, such as adapter-based fine-tuning (Houlsby et al., 2019) and LoRA (Hu et al., 2021), become increasingly important and constitute an important differentiator for tackling downstream tasks.

Learning ordered representations. Ordered Dropout (OD) was proposed as a mechanism for importance-based pruning for the easy extraction of sub-networks, devised to allow for heterogeneous federated training (Horváth et al., 2021).
Similar constructions were proposed for the representation layer of autoencoders (Rippel et al., 2014) to enforce identifiability of the learned representation, or for the last layer of the feature extractor (Horváth et al., 2021) to learn an ordered set of features for transfer learning. Contrary to prior work, MAESTRO's LOD non-trivially extends ordered representations in three meaningful ways. First, it is the first to be applied to the decomposed network, which gets progressively shrunk as redundant ranks converge to zero; this is achieved through a hierarchical group-lasso penalty, as described in Sec. 3.3. Second, LOD allows for heterogeneous (non-uniform) ranks per layer, yielding a much richer operating space. Last, through LOD we can leverage the ordered representation of ranks at inference time to further compress the model, allowing a graceful accuracy-latency trade-off for deployment on more constrained devices, without the need to retrain.

[Figure 1: MAESTRO's construction. To obtain a low-rank approximation, the given linear map is decomposed and trained with LOD to obtain an ordered representation that can be efficiently pruned.]

In this work, we focus on low-rank models as a technique to reduce a neural network's computational complexity and memory requirements. The main challenge we face is the selection of the optimal rank, i.e., the trade-off between efficiency and rank, for each given layer. Therefore, we devise an importance-based training technique, MAESTRO, which learns not only a mapping between features and responses but also the decomposition of the trained network. This is achieved by factorizing all the layers in the network.

3.1. Formulation

Low-rank approximation. Our inspiration comes from the low-rank matrix approximation of a matrix A ∈ R^{m×n}. For simplicity, we assume that A has rank at most r = min{m, n} with k ≤ r distinct non-zero singular values σ_1 > σ_2 > … > σ_k > 0, with corresponding left and right singular vectors u_1, u_2, …, u_k ∈ R^m and v_1, v_2, …, v_k ∈ R^n, respectively. For such a matrix, we can write its best l-rank approximation as the following minimization:

$$\min_{U \in \mathbb{R}^{m \times l},\; V \in \mathbb{R}^{n \times l}} \Big\| \sum_{i=1}^{l} u_i v_i^\top - A \Big\|_F^2, \qquad (1)$$

where c_i denotes the i-th column of a matrix C and ‖·‖_F denotes the Frobenius norm. We note that Problem (1) is non-convex and non-smooth. However, Ye & Du (2021) showed that randomly initialized gradient descent solves this problem in polynomial time. In this work, we consider the best rank approximation across all ranks. The Eckart-Young-Mirsky theorem leads to the following objective:

$$\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}} \; \frac{1}{r} \sum_{b=1}^{r} \big\| U_{:b} V_{:b}^\top - A \big\|_F^2, \qquad (2)$$

where C_{:b} denotes the first b columns of a matrix C. This objective, up to scaling, recovers the SVD of A exactly, and for the case of distinct non-zero singular values, the solution is, up to scaling, unique (Horváth et al., 2021). This formulation, however, does not account for the data distribution, i.e., it cannot tailor the decomposition to capture specific structures that appear in the dataset.
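To make objective (2) concrete, the following minimal NumPy check (an illustrative sketch of ours, not code from the paper) evaluates the ordered objective at one of its optimal points, the SVD factors scaled by the square-root singular values, and verifies that each rank-b summand equals the Eckart-Young tail energy Σ_{j>b} σ_j².

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((9, 6))               # target matrix
r = min(A.shape)
Ubar, sigma, VbarT = np.linalg.svd(A, full_matrices=False)
U = Ubar * np.sqrt(sigma)                     # column j is u_j * sqrt(sigma_j)
V = VbarT.T * np.sqrt(sigma)                  # column j is v_j * sqrt(sigma_j)

def ordered_objective(U, V, A):
    # (1/r) * sum_b || U_{:b} V_{:b}^T - A ||_F^2, as in objective (2)
    r = U.shape[1]
    return np.mean([np.linalg.norm(U[:, :b] @ V[:, :b].T - A, "fro") ** 2
                    for b in range(1, r + 1)])

# At these factors every rank-b term equals the optimal tail energy sum_{j>b} sigma_j^2.
tails = np.mean([np.sum(sigma[b:] ** 2) for b in range(1, r + 1)])
assert np.isclose(ordered_objective(U, V, A), tails)
```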
LOD for data-dependent low-rank approximation. Therefore, the next step of our construction is to extend this problem formulation with data, which can further improve compression, reconstruction, and generalization, and incorporate domain knowledge. We assume that the data come from a distribution x ∼ X centered around zero, i.e., E_{x∼X}[x] = 0 (we make this assumption for simplicity; it can be overcome by adding a bias term to the model), and that the response is given by y = Ax. In this particular case, we can write the training loss as

$$\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{x, y \sim X}\Big[ \mathbb{E}_{b \sim D}\big[ \big\| U_{:b} V_{:b}^\top x - y \big\|^2 \big] \Big], \qquad (3)$$

where D is a distribution over ranks b ∈ {1, 2, …, r}. It is important to note that the introduced problem formulation (3), for a neural network with a single hidden layer and no activations, can be solved using stochastic algorithms by sampling from the data distribution X (subsampling) and the rank distribution D. When we apply LOD to DNNs, contrary to any prior work (Horváth et al., 2021; Rippel et al., 2014; Diao et al., 2021), our formulation is the first to apply importance-based ordered dropout to the decomposed representation of each layer, thus preserving the dimensionality. More importantly, we decompose each layer independently and allow the optimal rank to be found per layer in a data-informed manner. We discuss the details in the next paragraph.

DNN low-rank approximation. For Deep Neural Networks (DNNs), we seek to uncover the optimal ranks for a set of d linear mappings W^1 ∈ R^{m_1×n_1}, …, W^d ∈ R^{m_d×n_d}, where the W^i are model parameters and d is the model depth, e.g., the weights corresponding to linear layers (our decomposition applies to different types of layers, such as linear, convolutional, and attention layers, as shown in Sec. 3.2), by decomposing them as W^i = U^i V^{i⊤}. We discuss how these are selected in the next section. To decompose the network, we aim to minimize the following objective:

$$\min \; \mathbb{E}_{x, y}\Bigg[ \frac{1}{\sum_{i=1}^{d} r_i} \sum_{i=1}^{d} \sum_{b=1}^{r_i} l\big(h(U^1 V^{1\top}, \ldots, U^i_{:b} V^{i\top}_{:b}, \ldots, U^d V^{d\top}, W^o, x), y\big) \Bigg], \qquad (4)$$

where r_i = min{m_i, n_i}, l is a loss function, h is a DNN, and W^o are the other weights that we do not decompose. We note that our formulation aims to decompose each layer, while decompositions across layers do not directly interact. The motivation for this approach is to uncover low-rank structures within each layer that are not affected by inaccuracies from other layers due to multiple low-rank approximations.
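As a concrete reference for the per-layer truncation U_{:b} V_{:b}^⊤ appearing in objective (4), the sketch below is an illustrative PyTorch rendering (the class name, attributes, and constructor are our own assumptions, not the paper's released API): a linear layer stored as factors U, V whose forward pass can be cut off at the first b ranks, computing x U_{:b} V_{:b}^⊤ in the row-vector convention used for FC layers below.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Linear layer stored as W ≈ U V^T with an optional rank cut-off b (LOD truncation)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        r = min(in_features, out_features)            # maximal rank r_i
        w = torch.empty(in_features, out_features)    # standard (non-factorized) initialization,
        nn.init.kaiming_uniform_(w)                   # split once via SVD (cf. Sec. 3.3)
        U, S, Vh = torch.linalg.svd(w, full_matrices=False)
        self.U = nn.Parameter(U * S.sqrt())           # shape (in_features, r)
        self.V = nn.Parameter(Vh.T * S.sqrt())        # shape (out_features, r)
        self.rank = r                                 # current maximal rank; shrinks over training

    def forward(self, x, b=None):
        b = self.rank if b is None else min(b, self.rank)
        # rank-b truncation of W: x U_{:b} V_{:b}^T
        return (x @ self.U[:, :b]) @ self.V[:, :b].T
```

The same idea carries over to convolutions by factorizing the unrolled (m k²)×n matrix described in the CNN paragraph of the next section.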
3.2. Layer Factorization

The following paragraphs discuss how we implement model factorization for different architectures.

FC layers. A 2-layer fully-connected (FC) neural network can be expressed as f(x) = σ(σ(xW_1)W_2), where the W's are the weight matrices of the FC layers and σ(·) is an arbitrary activation function, e.g., ReLU. Each weight matrix W can be factorized as UV^⊤.

CNN layers. Consider a convolution layer with weights W ∈ R^{m×n×k×k}, where m and n are the numbers of input and output channels and k is the size of the convolution filters. Instead of directly factorizing the 4D weight tensor of the convolution layer, we factorize the unrolled 2D matrix: unrolling the 4D tensor W leads to a 2D matrix of shape W_unrolled ∈ R^{mk²×n}, where each column represents the weight of a vectorized convolution filter. Factorization can then be conducted on the unrolled 2D matrix; see (Wang et al., 2021) for details.

Transformers. A Transformer consists of a stack of encoder and decoder layers (Vaswani et al., 2017). The encoder and decoder contain three main building blocks: the multi-head attention (MHA) layer, the position-wise feed-forward network (FFN), and positional encoding. We factorize all trainable weight matrices in the MHA and FFN layers. The FFN factorization directly adopts the strategy from the FC factorization. A p-head attention layer learns p attention mechanisms on the key, value, and query (K, V, Q) of each input token: MHA(Q, K, V) = Concat(head_1, …, head_p) W^O, where each head performs the computation

$$\mathrm{head}_i = \mathrm{Attention}\big(QW_Q^{(i)}, KW_K^{(i)}, VW_V^{(i)}\big) = \mathrm{softmax}\!\left(\frac{QW_Q^{(i)} \big(KW_K^{(i)}\big)^\top}{\sqrt{d}}\right) VW_V^{(i)},$$

where d is the hidden dimension. The trainable weights W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}, i ∈ {1, 2, …, p}, can be factorized by simply decomposing every learnable weight matrix W in an attention layer as UV^⊤ (Vaswani et al., 2017).

Algorithm 1: MAESTRO (Training Process)
Input: epochs E, dataset D, model h parametrized by U^1 ∈ R^{m_1×r_1}, V^1 ∈ R^{n_1×r_1}, …, U^d ∈ R^{m_d×r_d}, V^d ∈ R^{n_d×r_d}, and W^o; hyperparameters λ_gl, ε_ps
1:  for t = 0 to E−1 do  // epochs
2:      for (x, y) ∈ D do  // iterate over the dataset
3:          Sample (i, b) uniformly from {(i, b) : 1 ≤ i ≤ d, 1 ≤ b ≤ r_i}  // LOD
4:          L ← l(h(U^1 V^{1⊤}, …, U^i_{:b} V^{i⊤}_{:b}, …, U^d V^{d⊤}, W^o, x), y) + λ_gl Σ_{i=1}^{d} Σ_{b=1}^{r_i} (‖U^i_{b:}‖ + ‖V^i_{b:}‖)  // loss
5:          L.backward()  // update weights
7:      for i = 1 to d do
8:          for b = 1 to r_i do
9:              // rank importance thresholding
10:             if ‖V^i_{b:}‖ ‖U^i_{b:}‖ ≤ ε_ps then
11:                 r_i ← b − 1  // progressive shrinking

3.3. Training Techniques

Having defined the decomposition of the typical layers found in DNNs, we now formalize the training procedure of our method, described in Algorithm 1. Training the model comprises an iterative process of propagating forward through the model by sampling a rank b_i per decomposed layer i, up to the maximal rank r_i (line 3). We calculate the loss, which integrates an additional hierarchical group-lasso component (line 4), and backpropagate on the sampled decomposed model (line 5). At the end of each epoch, we progressively shrink the network by updating the maximal rank r_i based on an importance threshold ε_ps (line 11). We provide more details about each component below.

Efficient training via sampling. In Sec. 4, we show that for the linear case (3), the optimal solution corresponds to PCA over the linearly transformed dataset. This means that the obtained solution contains orthogonal directions. This property is beneficial because it directly implies that, when we employ gradient-based optimization, not only is the gradient zero at the optimum, but the gradient with respect to each summand in Equation (3) is also zero. The same property is directly implied by overparametrization (Ma et al., 2018) or the strong growth condition (Schmidt & Roux, 2013). As a consequence, this enables us to sample only one summand at a time and obtain a solution of the same quality. When considering (4) as an extension of (3), it is unclear whether this property still holds, which would also imply that the set of stationary points of (4) is a subset of the stationary points of the original objective without decomposition. However, in our experiments we observed that sampling is sufficient to converge to a good-quality solution. If this only holds approximately, one could leverage fine-tuning to recover the loss in performance.
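For concreteness, one inner iteration of Algorithm 1 (lines 2-5) could look like the sketch below. This is an illustrative rendering of ours, reusing the hypothetical FactorizedLinear layers from the previous sketch; `model.factorized_layers` and the `rank_overrides` argument are assumed conveniences, not the paper's interface. The penalty term is the hierarchical group lasso of line 4, detailed in the next paragraph.

```python
import random

def maestro_step(model, x, y, loss_fn, optimizer, lam_gl):
    """One inner iteration of Algorithm 1: LOD sampling (line 3), loss with the
    hierarchical group-lasso term (line 4), and a gradient update (line 5)."""
    layers = model.factorized_layers
    # Line 3: sample a layer i and a rank b uniformly over {(i, b) : 1 <= b <= r_i}
    pairs = [(i, b) for i, layer in enumerate(layers) for b in range(1, layer.rank + 1)]
    i, b = random.choice(pairs)
    # Line 4: forward with layer i truncated to its first b ranks, all other layers at full rank
    out = model(x, rank_overrides={i: b})     # `rank_overrides` is an assumed model argument
    loss = loss_fn(out, y)
    for layer in layers:                      # group lasso over trailing rank blocks U_{b:}, V_{b:}
        for c in range(layer.rank):
            loss = loss + lam_gl * (layer.U[:, c:].norm() + layer.V[:, c:].norm())
    # Line 5: backpropagate and update the sampled decomposed model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```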
Efficient rank extraction via hierarchical group lasso. By definition, (3) leads to an ordered set of ranks for each layer. This ordered structure enables efficient rank extraction and selection. To effectively eliminate unimportant ranks while retaining the important ones, thus leading to a more efficient model, we consider a Hierarchical Group Lasso (HGL) (Lim & Hastie, 2015) penalty of the form

$$\sum_{i=1}^{d} \sum_{b=1}^{r_i} \Big( \big\| U^i_{b:} \big\| + \big\| V^i_{b:} \big\| \Big), \qquad (5)$$

where C_{b:} denotes the matrix that contains all the columns of C except for the first b−1 columns. This term is added to the training loss weighted by λ_gl (Algorithm 1, line 4).

Progressive shrinking. HGL encourages unimportant ranks to become zero, so that they can be effectively removed from the model. To account for this, for each layer we remove V^i_{b:} and U^i_{b:} (i.e., set r_i = b − 1) if ‖V^i_{b:}‖ ‖U^i_{b:}‖ ≤ ε_ps, where ε_ps is a pre-selected threshold and a hyperparameter of our method.
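The end-of-epoch shrinking pass (Algorithm 1, lines 7-11) then amounts to the following sketch, again assuming FactorizedLinear-style layers with U, V, and a mutable rank attribute (our own assumption):

```python
import torch

@torch.no_grad()
def progressive_shrink(model, eps_ps=1e-7):
    """Algorithm 1, lines 7-11: lower r_i once the trailing ranks become unimportant."""
    for layer in model.factorized_layers:
        for b in range(1, layer.rank + 1):
            # rank-importance thresholding on the trailing blocks U_{b:}, V_{b:}
            if layer.V[:, b - 1:].norm() * layer.U[:, b - 1:].norm() <= eps_ps:
                layer.rank = b - 1   # progressive shrinking; the dropped columns can later
                break                # be sliced away and the optimizer state rebuilt
```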
Hyperparameter optimization. We provide an algorithm for finding a suitable value of the hyperparameter λ_gl in Alg. 2. From the evaluation of Tab. 11-15, this strategy typically requires at most 2-3 times the computational effort (in terms of FLOPs) of a single training run with an optimally chosen λ_gl. This is significantly easier than tuning the per-layer maximal ranks and the pretraining steps, during which the full-rank model is pretrained, as is the case in other low-rank baselines. The value of ε_ps, in turn, represents the effective zero-point of our algorithm; its typical value in our experiments was 1e−7.

Algorithm 2: MAESTRO (Hyperparameter optimization)
Input: constraints (e.g., min required accuracy, max #parameters), epochs E, dataset D, model h, evaluation frequency E_eval, ε_ps, limits for HPO: largeValue, smallValue
1:  λ_gl ← largeValue
2:  old_model ← NULL
3:  while λ_gl > smallValue do
4:      model ← RandInit(model)
5:      for t = 0 to E−1 do  // epochs
6:          Train(model, dataset, λ_gl)
7:          if t mod E_eval == 0 then
8:              acc ← CalculateAcc(model, dataset)
9:              flops, params ← MeasureFootprint(model, ε_ps)
10:             if acc ≥ constraints['acc'] and params ≤ constraints['params'] then
11:                 return model  // a model satisfying the constraints was found
13:             if params < few_params then
14:                 break  // model too sparse
18:     flops, params ← MeasureFootprint(model, ε_ps)
19:     if params > constraints['params'] then
20:         // no model satisfies the constraints; return the last model satisfying the parameter constraint
21:         return old_model
23:     old_model ← Copy(model)
24:     λ_gl ← λ_gl / 2

Initialization. Initialization is a key component of the training procedure (He et al., 2015; Mishkin & Matas, 2015). To adopt the best practices from standard non-factorized training, we follow an approach similar to (Khodak et al., 2021; Wang et al., 2021): we first initialize the non-factorized model using a standard initialization and then initialize the factorized layers using the Singular Value Decomposition of this non-factorized initialization in full-rank form, so that the resulting product matrix equals the original parameters. In addition, SVD is the optimal decomposition for the linear case with uniform data. However, in contrast with the adaptive baseline method (Wang et al., 2023), we decompose only once rather than on every training iteration: we run the decomposition a single time and progressively shrink the ranks in a data-driven manner. This is contrary to related work (Wang et al., 2021; 2023) that requires manual rank and layer selection and a full-rank warm-up to achieve the desired performance, at the cost of training overhead.

3.4. Train-Once, Deploy-Everywhere

Up to this point, we have described how our method trains low-rank models, which yields computational, memory, network-bandwidth, and energy (Wu et al., 2022) benefits during training. At deployment time, one can directly deploy on the device the final model (rank r_i for each layer), which we obtain by performing a sweep of the threshold ε_ps over the effective range of rank importances across layers. However, if we want to run on even more constrained devices, such as mobile or embedded (Almeida et al., 2021) systems, the learned decomposition also gives us the flexibility to further compress the model in a straightforward manner, effectively trading accuracy for a smaller model footprint. Inspired by (Yu & Huang, 2019a), we propose to use greedy search: we begin with the current model and compare model performance across various low-rank variants, each created by removing a certain percentage of ranks from each layer; we then eliminate the ranks that cause the least decrease in performance. This process is iterated until we reach the desired size or accuracy constraint. To make this approach efficient, we estimate the loss using a single mini-batch with a large batch size (e.g., 2048); this also avoids issues with BatchNorm layers, see (Yu & Huang, 2019a) for details. In summary, MAESTRO comprises a trainable low-rank approximation that progressively compresses the model during training, reflecting the data distribution, and a mechanism that enables a graceful trade-off between accuracy and latency for embedded deployment by selecting the most important parts of the network. We validate these claims in Sec. 5.2 and 5.5, respectively.
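The greedy deployment-time reduction can be sketched as follows. This is our own rendering of the procedure described above; `factorized_layers`, `count_params`, the drop fraction, and the single large evaluation batch are illustrative assumptions, and the model is assumed to be in eval mode.

```python
import torch

def count_params(model):
    # parameters of the factorized layers that are active at the current ranks
    return sum((l.U.shape[0] + l.V.shape[0]) * l.rank for l in model.factorized_layers)

@torch.no_grad()
def greedy_rank_reduction(model, big_batch, loss_fn, target_params, drop_frac=0.1):
    """Repeatedly remove, from one layer at a time, the block of trailing ranks whose
    removal hurts the loss the least, estimated on a single large mini-batch (Sec. 3.4)."""
    x, y = big_batch                                  # e.g., a batch of size 2048
    while count_params(model) > target_params:
        best = None                                   # (loss, layer index, ranks to drop)
        for i, layer in enumerate(model.factorized_layers):
            drop = max(1, int(drop_frac * layer.rank))
            if layer.rank - drop < 1:
                continue
            layer.rank -= drop                        # tentatively truncate layer i
            loss = loss_fn(model(x), y).item()
            layer.rank += drop                        # restore
            if best is None or loss < best[0]:
                best = (loss, i, drop)
        if best is None:                              # nothing left to prune
            break
        model.factorized_layers[best[1]].rank -= best[2]   # commit the cheapest truncation
    return model
```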
4. Theoretical Guarantees

In this section, we further investigate the theoretical properties of MAESTRO for linear mappings, i.e., the setup of the problem formulation (3).

Theorem 4.1 (Informal). Let A = Ū Σ̄ V̄⊤ be an SVD decomposition of A. Then, the minimization problem (3) is equivalent to PCA applied to the transformed dataset x ↦ Σ̄ V̄⊤ x, x ∈ X, projected onto the column space of Ū.

The formal statement can be found in Appendix C. Theorem 4.1 shows that MAESTRO can adapt both to the data distribution, by operating directly on the data x ∈ X, and to the target mapping, by projecting the data onto its right singular vectors scaled by the singular values. In particular, we show that in the special case where X is the uniform distribution on the unit ball, (3), i.e., MAESTRO, exactly recovers the truncated SVD of A, which is consistent with prior results (Horváth et al., 2021). In the case where A is the identity, it is straightforward to see that MAESTRO is equivalent to PCA. We can see that MAESTRO can efficiently extract low-rank solutions by filtering out directions corresponding to the null space of the target mapping A and directions along which there is no data. We also numerically verify both special cases, PCA and SVD, by minimizing (3) using stochastic gradient descent (SGD) with D being the uniform distribution; these experiments are provided in Fig. 2a and 2b, and a minimal code sketch of the SVD check is given at the end of this section. We provide further evidence of the adaptivity of MAESTRO in Appendix E.1 and E.2.

We showed that MAESTRO can recover the SVD in the particular case of a linear model and the uniform data distribution on the unit ball. We note that in this case SVD is optimal and we cannot obtain a better decomposition; therefore, it is desirable that MAESTRO is equivalent to SVD in this scenario. More generally, we argue that the MAESTRO decomposition should be preferable to SVD for the following reasons:

- MAESTRO's formulation is built directly into training and tailored to obtain the best low-rank decomposition, while SVD relies on a linearity assumption.
- SVD does not account for the data, and even in the linear NN case the learned singular vectors might exhibit the wrong ordering. We demonstrate this issue with a simple example where we take a matrix A with rank 3 and construct the dataset X in such a way that the third singular vector is the most important direction, the second one is the second most important, and the first is the third most important. Clearly, SVD does not look at the data and therefore cannot capture this phenomenon; we showcase that MAESTRO learns the correct order, see Fig. 6 in the Appendix.
- Pre-factorizing models allows us to apply a hierarchical group-lasso penalty (Yuan & Lin, 2006) to the decomposed weights to directly regularize the rank of the different layers.
- SVD is computationally expensive and can only be run rarely, while MAESTRO is built directly into training and therefore does not require extra computation. In addition, MAESTRO supports rank sampling, so training can be made computationally efficient.
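The SVD check behind Fig. 2a can be sketched in a few lines. This is an illustrative reproduction of ours, not the paper's experiment code; the optimizer, learning rate, iteration count, initialization scale, and ball sampler are arbitrary choices, and the printed distances are only expected to shrink towards zero rather than vanish exactly.

```python
import torch

torch.manual_seed(0)
m, n, true_rank = 9, 6, 3
A = torch.randn(m, true_rank) @ torch.randn(true_rank, n)   # random target map of rank 3
r = min(m, n)
U = (0.1 * torch.randn(m, r)).requires_grad_()
V = (0.1 * torch.randn(n, r)).requires_grad_()
opt = torch.optim.Adam([U, V], lr=1e-2)

for step in range(20_000):
    x = torch.randn(n)
    x = x / x.norm() * torch.rand(1) ** (1.0 / n)            # ~ uniform on the unit ball
    b = torch.randint(1, r + 1, ()).item()                   # sample a rank (D uniform, LOD)
    loss = ((U[:, :b] @ V[:, :b].T @ x - A @ x) ** 2).sum()  # one summand of objective (3)
    opt.zero_grad(); loss.backward(); opt.step()

# Distance to the best rank-k approximation of A (cf. Fig. 2a)
Ub, S, Vh = torch.linalg.svd(A)
for k in range(1, r + 1):
    best_k = Ub[:, :k] @ torch.diag(S[:k]) @ Vh[:k]
    print(k, torch.linalg.norm(U[:, :k] @ V[:, :k].T - best_k).item())
```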
5. Experiments

We start this section by describing the setup of our experiments, including the models, datasets, and baselines against which we compare MAESTRO. We then compare MAESTRO against the baselines on accuracy and training multiply-accumulate operations (MACs) and discuss the results. Subsequently, we analyze the behaviour of our system in depth and provide additional insights into the performance of our technique, along with an ablation study and a sensitivity analysis for specific hyperparameters. Finally, we showcase the performance of models upon deployment and how we can derive a smaller-footprint model with some accuracy trade-off, without the need to fine-tune.

[Figure 2: Empirical showcase of the theoretical properties of MAESTRO's formulation. (a) Verification that MAESTRO recovers SVD for a linear mapping with uniform data: the plot shows the L2 distance between the best rank-k approximation and MAESTRO's approximation of the mapping A; the target is a randomly generated 9×6 matrix with rank 3; p and k denote the relative and actual rank. (b) Verification that MAESTRO recovers PCA for the identity mapping: the plot shows the estimates of the singular values; the data distribution has only 3 directions, so the top 3 ranks are expected to converge to one and the rest to zero.]

[Figure 3: Maestro vs. baselines on CIFAR10 for (a) ResNet-18 and (b) VGG-19 (accuracy vs. parameters, with GMACs; baselines: Cuttlefish, Pufferfish, Spectral Init., XNOR-Net, non-factorized). Spectral-Init results are taken from the original work. For XNOR-Net, each weight is quantized from 32 bits to 1 bit; thus, we report a compression rate of 3.125%. Detailed results are presented in table form in Appendix E.5.]

Table 2: Datasets and models used for evaluation. The footprints are those of the vanilla models.

| Dataset | Model | GMACs | Params (M) | Task |
| --- | --- | --- | --- | --- |
| MNIST | LeNet | 2e-4 | 0.04 | Image classification |
| CIFAR10 | ResNet-18 | 0.56 | 11.18 | Image classification |
| CIFAR10 | VGG-19 | 0.40 | 20.00 | Image classification |
| ImageNet | ResNet-50 | 4.12 | 25.56 | Image classification |
| Multi30k | 6-layer Transformer | 1.37 | 48.98 | Translation (en-de) |

Table 3: Maestro vs. baselines on Multi30k. Notes: baseline results from the original work; λ_gl tuned over {2^i/100 : i ∈ {0, …, 9}}.

| Variant | Model | Perplexity | GMACs | Params. (M) |
| --- | --- | --- | --- | --- |
| Non-factorized | Transformer | 9.85 ± 0.10 | 1.370 | 53.90 |
| Pufferfish | Transformer | 7.34 ± 0.12 | 0.996 | 26.70 |
| MAESTRO | Transformer | 6.90 ± 0.07 | 0.248 | 13.80 |

5.1. Experimental Setup

Models & datasets. The datasets and models considered in our experiments span four datasets and are presented concisely, along with the associated models, in Tab. 2. We implemented our solution in PyTorch (Paszke et al., 2017) (v1.13.0) and trained our models on NVIDIA A100 (40GB) GPUs. Details of the learning tasks and the hyperparameters used are presented in Appendix D.

Baselines. We have selected various baselines from the literature that we believe are closest to aspects of our system. On the pruning front, we compare with IMP (Paul et al., 2023) and Rare Gems (Sreenivasan et al., 2022). On the quantization front, we compare with XNOR-Net (Rastegari et al., 2016). With respect to low-rank methods, we compare with Spectral Initialization (Khodak et al., 2021), Pufferfish (Wang et al., 2021), and Cuttlefish (Wang et al., 2023).

5.2. Performance Comparison

We start by comparing MAESTRO with the aforementioned baselines across the datasets and models of Tab. 2. (The operating points we select for MAESTRO are the closest lower to the respective baseline in terms of footprint; where a result is not present in Fig. 3, we provide the λ_gl value so that it can be referenced from Tab. 12 and 13 in the Appendix.) Results are depicted in Fig. 3 and Tab. 3, while additional performance points of MAESTRO for different model footprints are presented in Appendix E.3 and E.4.

Comparisons with low-rank methods. The low-rank methods we compare against are Pufferfish (Wang et al., 2021) and Cuttlefish (Wang et al., 2023). These methods aim to reduce training and inference runtime while preserving model accuracy by leveraging low-rank approximations. For ResNet-18, we achieve 94.19 ± 0.07% accuracy with 4.08M parameters and 93.97 ± 0.25% with 2.19M parameters, compared to the 94.17% of Pufferfish at 3.3M parameters. For VGG-19, we achieve +0.41pp (percentage points) higher accuracy than Pufferfish and −0.29pp relative to Cuttlefish, at 44.8% and 93.2% of their sizes, respectively. Finally, compared with spectral initialization (Khodak et al., 2021) for VGG-19, we achieve +5.26pp higher accuracy at 87.5% of the parameter size. Detailed results are shown in Tab. 16. These performance benefits also apply to Transformers (Tab. 3), where MAESTRO performs 6% better in terms of perplexity at 25% of the cost (MACs) and 51.7% of the size (parameters) compared to Pufferfish.
[Figure 4: Training dynamics of MAESTRO for ResNet-18 on CIFAR10 across λ_gl ∈ {4e-6, 8e-6, …, 1.024e-3}: (a) total rank (Σ_{i=1}^d r_i) over epochs, (b) ranks r_i across layers after training, (c) convergence of the training loss for λ_gl = 0. Results for other datasets can be found in Appendix E.3.]

Table 4: Maestro vs. baselines on ImageNet-1k. Notes: λ_gl chosen such that the final number of parameters and the accuracy are similar to the baseline models; without label smoothing (same as our setup for Maestro).

| Setting | Variant | Acc. (%) | Params. (M) | GMACs |
| --- | --- | --- | --- | --- |
| No decomposition | Non-factorized ResNet-50 | 75.32 | 25.26 | 4.12 |
| Not decomposing first four blocks and last layer | Pufferfish ResNet-50 | 75.99 | 15.2 | 3.6 |
| | Cuttlefish ResNet-50 | 76.00 | 14.9 | 3.6 |
| | MAESTRO ResNet-50 | 76.04 | 14.0 | 3.4 |
| Decomposing all layers | Pufferfish ResNet-50 | 71.03 | 9.4 | 2.1 |
| | MAESTRO ResNet-50 | 71.54 | 9.2 | 2.0 |

It is worth noting that both Pufferfish and Cuttlefish, by default, do not decompose all layers and use full-rank warm-up training rounds, both of which cause training and hyperparameter-optimization overheads. In contrast, our technique introduces only two hyperparameters, namely λ_gl and ε_ps, which govern the whole training process. We have scaled our experiments up to ImageNet-1k (Tab. 4): for the same fully decomposed setup, we achieve slightly higher accuracy (+0.51pp) at 97.8% of the size of Pufferfish, and for partial decomposition, MAESTRO performs on par with Pufferfish and Cuttlefish at a lower training and inference cost.

Comparisons with pruning methods. The next family of baselines is related to the LTH (Frankle & Carbin, 2019). Specifically, we compare against IMP (Paul et al., 2023) and find that MAESTRO achieves +1.25pp (λ_gl = 128e−6) and +0.24pp (λ_gl = 32e−6) higher accuracy for ResNet-18 and VGG-19, respectively. Detailed results are shown in Tab. 16 of the Appendix. Although we cannot scale down to the sizes that Rare Gems (Sreenivasan et al., 2022) reaches for ResNet-18, the sparsity it achieves is unstructured, which most modern hardware cannot take advantage of; in contrast, our technique yields ordered, structured sparsity compatible with most computation targets. On the other hand, for VGG-19, we achieve +6.82pp higher accuracy at 43.6% of the footprint.

Table 5: Ablation study for ResNet-18 on CIFAR10.

| Variant | Acc. (%) | Rel. GMACs (Train.) | Params. (M) |
| --- | --- | --- | --- |
| MAESTRO | 94.19 ± 0.39 | 1.00 | 4.08 ± 0.020 |
| w/out GL | 94.04 ± 0.10 | 1.33 | 11.2 ± 0.000 |
| w/out PS | 94.12 ± 0.36 | 1.33 | 4.09 ± 0.027 |
| w/ full-training | 94.05 ± 0.32 | 1.97 | 4.09 ± 0.032 |

Comparisons with quantized models. We also compare against XNOR-Net (Rastegari et al., 2016), which binarizes the network to achieve efficient inference. Training continues to happen in full precision, and inference performance depends on the operator implementations of the target hardware. Nonetheless, assuming a compression rate of 3.125%, for the same model size we achieve +1.08pp (λ_gl = 512e−6) and +2.18pp (λ_gl = 256e−6) higher accuracy on ResNet-18 and VGG-19, respectively.

5.3. Training Behaviour of MAESTRO

Having shown the relative performance of our framework against selected baselines, we now investigate how our method behaves with respect to convergence and low-rank approximation.

Model and rank convergence. In Fig. 4, we present the training dynamics of MAESTRO.
Fig. 4a illustrates the evolution of the total rank throughout training. We observe that the ranks are pruned incrementally. This aligns with the observations made for Pufferfish (Wang et al., 2021), where the authors suggest full-rank warm-start training to enhance the final model performance; in our case, we do not need this heuristic because MAESTRO prunes ranks automatically. Fig. 4b reveals the ranks across layers after training. We notice an intriguing phenomenon: the ranks are nested for increasing λ_gl. This could imply, beyond a natural order of ranks within each layer, a global order across layers. We briefly examine this occurrence in the following section and plan to investigate it more thoroughly in future work, as we believe it might contribute to a superior rank selection and sampling process. Lastly, Fig. 4c depicts the progression of the training loss; our hypothesis that sampling does not adversely impact training is also supported empirically.

[Figure 5: MAESTRO accuracy-latency trade-off under different settings for VGG-19 on CIFAR10: (a) MAESTRO vs. SVD, (b) varying HGL, (c) nested MAESTRO. Additional results in Appendix E.4.]

5.4. Ablation Study

In this section, we examine the impact of each component on the performance of MAESTRO. Specifically, we run variants of our method i) without the hierarchical group-lasso regularization (HGL) and ii) without progressive shrinking (PS). Additionally, iii) we integrate an extra full low-rank pass (b = r_i) into each training step to assess whether extra sampling would be beneficial. The results are displayed in Tab. 5. As anticipated, our findings confirm that neither the inclusion of the hierarchical group lasso with a tuned λ_gl nor progressive shrinking impairs the final performance, while they do significantly enhance the efficiency of MAESTRO. Moreover, sampling more ranks at each training step does not improve the final performance and, in fact, hampers training efficiency, making it approximately twice as computationally demanding.

5.5. Accuracy-Latency Trade-Off at Training and Deployment Time

In Fig. 5, we illustrate various ways to balance latency (proxied by MAC operations) and accuracy during model training and deployment. Fig. 5a demonstrates how MAESTRO (λ_gl = 0) can be pruned effectively for deployment using the greedy search method discussed in Section 3.4. We contrast this with greedy pruning of a non-factorized model that has been factorized using SVD. We find that this straightforward baseline does not measure up to the learned decomposition of MAESTRO and results in a significant performance decrease. Next, Fig. 5b portrays the final accuracy and the number of model parameters for varying hierarchical group-lasso penalties. This leads to the optimal latency-accuracy balance for both training and inference; however, it is crucial to point out that each such model was trained individually, whereas greedy pruning requires only a single training cycle. Lastly, we delve into the observation of nested ranks across increasing λ_gl.
Fig. 5c displays the performance of MAESTRO (λ_gl = 0) across the different ranks selected by smaller MAESTRO models (λ_gl > 0). Intriguingly, we observe that MAESTRO (λ_gl = 0) performs very well: for instance, we can halve its operations (and reduce its parameters by 10×) and still maintain an accuracy of 87.7% without fine-tuning, simply by reusing the rank structure from independent runs. As mentioned above, we intend to explore this further in the future.

6. Conclusion and Future Work

In this work, we have presented MAESTRO, a method for trainable low-rank approximation of DNNs that leverages progressive shrinking by applying a generalized variant of Ordered Dropout to the factorized weights. We have shown theoretical guarantees for our approach in the case of linear models and empirically demonstrated its performance across different types of models, datasets, and modalities. Our evaluation has demonstrated that MAESTRO outperforms competitive compression methods at a lower cost. In the future, we plan to expand our technique to encompass more advanced sampling techniques and to apply it to different distributed learning scenarios, such as Federated Learning, where data are natively non-independent and identically distributed (non-IID).

Impact Statement

The goal of our work is to make the training and deployment of DNNs more efficient, affecting the total computation, memory, and bandwidth of systems, as well as the energy they require to run the respective tasks. DNN model training requires significant amounts of energy, whether in a data center or at the edge (Wu et al., 2022; Patterson et al., 2022). However, such techniques should not be used in lieu of making data centers greener, but as a complementary measure to further reduce the carbon footprint of Deep Learning. Additionally, as our technique involves a training-aware methodology for progressively selecting ranks, it depends on the quality of the data used in training. Deploying the model in the wild for various downstream tasks may result in behavior different from the intended outcomes. Therefore, it should be thoroughly tested before deployment to ensure it adheres to the required Service Level Objectives (SLOs), especially in performance-critical use cases, such as self-driving vehicles or UAV navigation.

References

Alam, S., Liu, L., Yan, M., and Zhang, M. FedRolex: Model-heterogeneous federated learning with rolling sub-model extraction. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=OtxyysUdBE.

Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic, M. QSGD: Communication-efficient SGD via gradient quantization and encoding. Advances in Neural Information Processing Systems, 30, 2017.

Alistarh, D., Hoefler, T., Johansson, M., Khirirat, S., Konstantinov, N., and Renggli, C. The convergence of sparsified gradient methods. arXiv preprint arXiv:1809.10505, 2018.

Almeida, M., Laskaridis, S., Mehrotra, A., Dudziak, L., Leontiadis, I., and Lane, N. D. Smart at what cost? Characterising mobile deep neural networks in the wild. In Proceedings of the 21st ACM Internet Measurement Conference, pp. 658-672, 2021.

Caldas, S., Konečný, J., McMahan, B., and Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements, 2019. URL https://openreview.net/forum?id=SJlpM3RqKQ.

Carreira-Perpinán, M. A. and Idelbayev, Y.
learning-compression algorithms for neural net pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8532 8541, 2018. Chen, T., Ji, B., Ding, T., Fang, B., Wang, G., Zhu, Z., Liang, L., Shi, Y., Yi, S., and Tu, X. Only train once: A one-shot neural network training and pruning framework. Advances in Neural Information Processing Systems, 34:19637 19651, 2021. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248 255. Ieee, 2009. Diao, E., Ding, J., and Tarokh, V. Hetero{fl}: Computation and communication efficient federated learning for heterogeneous clients. In International Conference on Learning Representations, 2021. URL https://openreview.net/ forum?id=TNk PBBYFk Xg. Dudziak, Ł., Abdelfattah, M. S., Vipperla, R., Laskaridis, S., and Lane, N. D. Shrinkml: End-to-end asr model compression using reinforcement learning. INTERSPEECH, 2019. Elliott, D., Frank, S., Sima an, K., and Specia, L. Multi30k: Multilingual english-german image descriptions. pp. 70 74, 2016. Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https: //openreview.net/forum?id=r Jl-b3Rc F7. Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. ar Xiv preprint ar Xiv:1510.00149, 2015. He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026 1034, 2015. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770 778, 2016. He, Y., Zhang, X., and Sun, J. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE international conference on computer vision, pp. 1389 1397, 2017. Horváth, S., Klein, A., Richtárik, P., and Archambeau, C. Hyperparameter transfer learning with adaptive complexity. In International Conference on Artificial Intelligence and Statistics, pp. 1378 1386. PMLR, 2021. Horváth, S., Laskaridis, S., Almeida, M., Leontiadis, I., Venieris, S., and Lane, N. Fj ORD: Fair and accurate federated learning under heterogeneous targets with ordered dropout. Advances in Neural Information Processing Systems, 34:12876 12889, 2021. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameterefficient transfer learning for nlp. In International Conference on Machine Learning, pp. 2790 2799. PMLR, 2019. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. ar Xiv preprint ar Xiv:1704.04861, 2017. Hu, E., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models, 2021. Hu, H., Peng, R., Tai, Y.-W., and Tang, C.-K. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. ar Xiv preprint ar Xiv:1607.03250, 2016. Jaderberg, M., Vedaldi, A., and Zisserman, A. Speeding up convolutional neural networks with low rank expansions. 
ar Xiv preprint ar Xiv:1405.3866, 2014. Khodak, M., Tenenholtz, N., Mackey, L., and Fusi, N. Initialization and regularization of factorized neural layers. ar Xiv preprint ar Xiv:2105.01029, 2021. Kim, M., Yu, S., Kim, S., and Moon, S.-M. Depth FL : Depthwise federated learning for heterogeneous clients. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id= pf8RIZTMU58. Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009. Laskaridis, S., Kouris, A., and Lane, N. D. Adaptive inference through early-exit networks: Design, challenges and directions. In Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning, pp. 1 6, 2021. Laskaridis, S., Venieris, S. I., Kouris, A., Li, R., and Lane, N. D. The future of consumer edge-ai computing. ar Xiv preprint ar Xiv:2210.10514, 2022. Maestro: Uncovering Low-Rank Structures via Trainable Decomposition Laskaridis, S., Kateveas, K., Minto, L., and Haddadi, H. Melting point: Mobile evaluation of language transformers. ar Xiv preprint ar Xiv:2403.12844, 2024. Le Cun, Y., Cortes, C., and Burges, C. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010. Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. ar Xiv preprint ar Xiv:1608.08710, 2016. Lim, M. and Hastie, T. Learning interactions via hierarchical grouplasso regularization. Journal of Computational and Graphical Statistics, 24(3):627 654, 2015. Lin, J., Zhu, L., Chen, W.-M., Wang, W.-C., Gan, C., and Han, S. On-device training under 256kb memory. In Annual Conference on Neural Information Processing Systems (Neur IPS), 2022. Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Rethinking the value of network pruning. ar Xiv preprint ar Xiv:1810.05270, 2018. Liu, Z., Li, D., Fernandez-Marques, J., Laskaridis, S., Gao, Y., Dudziak, Ł., Li, S. Z., Hu, S. X., and Hospedales, T. Federated learning for inference at anytime and anywhere. ar Xiv preprint ar Xiv:2212.04084, 2022. Ma, S., Bassily, R., and Belkin, M. The power of interpolation: Understanding the effectiveness of sgd in modern overparametrized learning. In International Conference on Machine Learning, pp. 3325 3334. PMLR, 2018. Mc Mahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273 1282. PMLR, 2017. Mishkin, D. and Matas, J. All you need is a good init. ar Xiv preprint ar Xiv:1511.06422, 2015. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., De Vito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in pytorch. 2017. Patterson, D., Gonzalez, J., Hölzle, U., Le, Q., Liang, C., Munguia, L.-M., Rothchild, D., So, D. R., Texier, M., and Dean, J. The carbon footprint of machine learning training will plateau, then shrink. Computer, 55(7):18 28, 2022. Paul, M., Chen, F., Larsen, B. W., Frankle, J., Ganguli, S., and Dziugaite, G. K. Unmasking the lottery ticket hypothesis: What s encoded in a winning ticket s mask? In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id= x Ss W2Am-uk Z. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. 
In International conference on machine learning, pp. 8748 8763. PMLR, 2021. Radford, A., Kim, J. W., Xu, T., Brockman, G., Mc Leavey, C., and Sutskever, I. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pp. 28492 28518. PMLR, 2023. Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. Xnor-net: Imagenet classification using binary convolutional neural networks. In Computer Vision ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11 14, 2016, Proceedings, Part IV, pp. 525 542. Springer, 2016. Rippel, O., Gelbart, M., and Adams, R. Learning Ordered Representations with Nested Dropout. In International Conference on Machine Learning (ICML), pp. 1746 1754, 2014. Sainath, T. N., Kingsbury, B., Sindhwani, V., Arisoy, E., and Ramabhadran, B. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6655 6659. IEEE, 2013. Schmidt, M. and Roux, N. L. Fast convergence of stochastic gradient descent under a strong growth condition. ar Xiv preprint ar Xiv:1308.6370, 2013. Seide, F., Fu, H., Droppo, J., Li, G., and Yu, D. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech dnns. In Fifteenth annual conference of the international speech communication association, 2014. Sidahmed, H., Xu, Z., Garg, A., Cao, Y., and Chen, M. Efficient and private federated learning with partially trainable networks. ar Xiv preprint ar Xiv:2110.03450, 2021. Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. Sreenivasan, K., yong Sohn, J., Yang, L., Grinde, M., Nagle, A., Wang, H., Xing, E., Lee, K., and Papailiopoulos, D. Rare gems: Finding lottery tickets at initialization. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https: //openreview.net/forum?id=Jpxd93u2v K-. Suresh, A. T., Felix, X. Y., Kumar, S., and Mc Mahan, H. B. Distributed mean estimation with limited communication. In International Conference on Machine Learning, pp. 3329 3337. PMLR, 2017. Tan, M. and Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105 6114. PMLR, 2019. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Advances in neural information processing systems, pp. 5998 6008, 2017. Wan, Z., Wang, X., Liu, C., Alam, S., Zheng, Y., LIU, J., QU, Z., YAN, S., ZHU, Y., ZHANG, Q., et al. Efficient large language models: A survey. ar Xiv preprint ar Xiv:2312.03863, 2023. Wang, E., Davis, J. J., Zhao, R., Ng, H.-C., Niu, X., Luk, W., Cheung, P. Y., and Constantinides, G. A. Deep Neural Network Approximation for Custom Hardware: Where we ve been, where we re going. ACM Computing Surveys (CSUR), 52(2): 1 39, 2019. Wang, H., Agarwal, S., and Papailiopoulos, D. Pufferfish: communication-efficient models at no extra cost. Proceedings of Machine Learning and Systems, 3:365 386, 2021. Maestro: Uncovering Low-Rank Structures via Trainable Decomposition Wang, H., Agarwal, S., Tanaka, Y., Xing, E. P., Papailiopoulos, D., et al. Cuttlefish: Low-rank model training without all the tuning. ar Xiv preprint ar Xiv:2305.02538, 2023. 
Wen, W., Wu, C., Wang, Y., Chen, Y., and Li, H. Learning structured sparsity in deep neural networks. Advances in neural information processing systems, 29, 2016. Wiesler, S., Richard, A., Schlüter, R., and Ney, H. Meannormalized stochastic gradient for large-scale deep learning. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 180 184. IEEE, 2014. Wu, C.-J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C., et al. Sustainable ai: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4:795 813, 2022. Wu, Z., Nagarajan, T., Kumar, A., Rennie, S., Davis, L. S., Grauman, K., and Feris, R. Blockdrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Xue, J., Li, J., and Gong, Y. Restructuring of deep neural network acoustic models with singular value decomposition. In Interspeech, pp. 2365 2369, 2013. Yang, T.-J., Chen, Y.-H., and Sze, V. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5687 5695, 2017. Ye, T. and Du, S. S. Global convergence of gradient descent for asymmetric low-rank matrix factorization. Advances in Neural Information Processing Systems, 34:1429 1439, 2021. Yu, J. and Huang, T. Autoslim: Towards one-shot architecture search for channel numbers. ar Xiv preprint ar Xiv:1903.11728, 2019a. Yu, J. and Huang, T. S. Universally slimmable networks and improved training techniques. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1803 1811, 2019b. Yu, J., Yang, L., Xu, N., Yang, J., and Huang, T. Slimmable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/ forum?id=H1g MCs Aq Y7. Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49 67, 2006. Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. ar Xiv preprint ar Xiv:1710.01878, 2017. Maestro: Uncovering Low-Rank Structures via Trainable Decomposition Appendix Contents of the Appendix A Limitations 13 B Extended Background 13 C Theoretical Properties of Low-Rank Layers 14 D Experimental setup 15 D.1 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 D.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 D.3 Hyperparameter Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 D.4 Deciding Against Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 E Extended evaluation 17 E.1 MAESTRO Recovers Correct Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 E.2 Rank Adaptivity of MAESTRO to Data Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 E.3 Training Behaviour of MAESTRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 E.4 Model Size-Accuracy Trade-Off at Training and Deployment Time . . . . . . . . . . . . . . . . . . . . . 19 E.5 Detailed Comparison with Baselines . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
A. Limitations

In this work, we have proposed a method for trainable low-rank approximation of DNNs that provides performance benefits at both training and inference time. While we suggest that this could have repercussions on the energy consumption of these tasks, we have not yet evaluated this hypothesis experimentally across different devices, be they data-center-grade or at the edge. Additionally, we have applied our technique to CNN and Transformer models spanning vision and NLP tasks. While we anticipate generalization to any type of network, it remains to be seen whether our technique can also be applied to alternative types of layers, such as recurrent ones, and what benefits it may bring there. Although we have provided a thorough investigation of the behaviour of our proposed system, the only way we can control the end footprint of the model during training is via the λgl and εps hyperparameters; there is no guarantee about the final footprint of the model. If we are willing to sacrifice accuracy, then the technique illustrated in Sec. 3.4 and evaluated in Sec. 5.5 is a start. More robust ways of globally ranking per-layer importances are left as future work. Lastly, our sampling method during training is uniform up to the maximum rank during progressive shrinking. Although this method has proven effective, alternative sampling methods could potentially accelerate rank exploration, thereby hastening the shrinking and convergence of the network during training.

B. Extended Background

Ordered Dropout. Ordered Dropout is a technique for importance-based, nested, and ordered pruning that operates along the indices of a layer's parameters (neurons, filters, etc.). Introduced by Horváth et al. (2021), it is a training technique where a layer's width is discretised into |P| values, with P = {s1, s2, ..., s|P|}, and at each training step a value p ~ U_P is sampled to obtain a specific subnetwork, extracted by selecting the first ⌈p · K_l⌉ neurons of each layer l (K_l being the layer's width) and dropping the rest. In contrast to our work, sampling happens directly on model parameters (rather than ranks) and is uniform across layers (i.e., a single p-value is set). Nestedness refers to the fact that models with a larger p-value include the parameters of those with lower p-values, and importance-based pruning means that, via stochastic sampling, the right-most (in terms of index) parameters are trained on progressively less data due to the sampling probabilities and the nestedness (i.e., all data pass through the parameters of the minimal subnetwork, and fewer the higher the p-value).

C. Theoretical Properties of Low-Rank Layers

In this section, we show that for the case of linear mappings, i.e., the problem formulation discussed in (3), MAESTRO acts as PCA applied to the original dataset $\mathcal{X}$ projected onto the space weighted by the corresponding singular values. Before proceeding with the theorem, we first recall the assumptions and notation introduced in the main paper. We denote by $C_{:b}$ the first $b$ columns of a matrix $C$, by $C_{:a,:b}$ its first $a$ rows and $b$ columns; $a{+}1{:}$ denotes all columns/rows from index $a+1$, $:$ denotes all columns/rows, and for vectors we use a single subscript. As discussed in the main paper, we reformulate the original least-squares problem as the following decomposition problem

$$\min_{U \in \mathbb{R}^{m \times r},\, V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{x,y \sim \mathcal{X}} \left[ \mathbb{E}_{b \sim \mathcal{D}} \left[ \left\| U_{:b} V_{:b}^\top x - y \right\|^2 \right] \right], \tag{6}$$

where $\mathcal{D}$ is a distribution that samples $b \in \{1, 2, \ldots, r\}$ with probability $p_b > 0$, and we assume that $y$ is linked to $x$ through a linear map $A$, i.e., $y = Ax$.
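Before the formal statement below, the following is a minimal NumPy sketch of a Monte-Carlo evaluation of objective (6); the synthetic map A, the Gaussian data, and the particular rank distribution p_b are our own illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 4                       # output dim, input dim, maximum rank

# Ground-truth linear map y = A x (synthetic, for illustration only).
A = rng.standard_normal((m, n))

# Trainable factors of the decomposed layer.
U = rng.standard_normal((m, r)) * 0.1
V = rng.standard_normal((n, r)) * 0.1

p = np.array([0.1, 0.2, 0.3, 0.4])      # distribution D over ranks b = 1..r (sums to 1)

def lod_loss(U, V, X, num_samples=256):
    """Monte-Carlo estimate of (6): E_x E_b || U_{:b} V_{:b}^T x - y ||^2."""
    total = 0.0
    for _ in range(num_samples):
        x = X[rng.integers(len(X))]
        y = A @ x
        b = rng.choice(r, p=p) + 1      # sample the rank b ~ D
        y_hat = U[:, :b] @ (V[:, :b].T @ x)
        total += np.sum((y_hat - y) ** 2)
    return total / num_samples

X = rng.standard_normal((1024, n))      # dataset drawn from X
print(f"initial LOD loss: {lod_loss(U, V, X):.3f}")
```

In an actual training loop, U and V would be updated by stochastic gradient steps on this sampled objective; the sketch only shows how the nested truncation $U_{:b} V_{:b}^\top$ enters the loss.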
Theorem C.1. Let $A = \bar{U} \bar{\Sigma} \bar{V}^\top$ be an SVD of $A$. Then the minimization problem (6) is equivalent to PCA applied to the transformed dataset $x \mapsto \bar{\Sigma} \bar{V}^\top x$, $x \sim \mathcal{X}$, projected onto the column space of $\bar{U}$. Concretely, we can first solve

$$\min_{U \in \mathbb{R}^{m \times r},\, V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b \sim \mathcal{D}} \left[ \left\| \left( U_{:b} V_{:b}^\top - I \right) \bar{\Sigma} \bar{V}^\top x \right\|^2 \right] \right], \tag{7}$$

and then obtain the solutions of (6) as $U = \bar{U}\hat{U}$, $V = \bar{V}\hat{V}$, where $(\hat{U}, \hat{V})$ belongs to the set of optimal solutions of problem (7). In the particular case where $\mathcal{X}$ is the uniform distribution on the unit ball, (6) recovers the best rank-$b$ approximation of $A$ simultaneously for all ranks $b$, i.e., up to the scale of $U$ and $V$ it recovers the truncated SVD. In the case where $A$ is the identity, (6) leads to the standard PCA decomposition.

Proof. From the assumptions $y = Ax$ and $A = \bar{U}\bar{\Sigma}\bar{V}^\top$, we can rewrite (6) as

$$\min_{U \in \mathbb{R}^{m \times r},\, V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b \sim \mathcal{D}} \left[ \left\| U_{:b} V_{:b}^\top x - \bar{U}\bar{\Sigma}\bar{V}^\top x \right\|^2 \right] \right].$$

Since $\bar{U}$ is orthogonal, we have $\|z\| = \|\bar{U}^\top z\|$. Therefore, the above problem is equivalent to

$$\min_{U, V} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b} \left[ \left\| \bar{U}^\top U_{:b} V_{:b}^\top x - \bar{\Sigma}\bar{V}^\top x \right\|^2 \right] \right],$$

which, after the reparametrization $U \mapsto \bar{U}^\top U$, is equivalent to

$$\min_{U, V} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b} \left[ \left\| U_{:b} V_{:b}^\top x - \bar{\Sigma}\bar{V}^\top x \right\|^2 \right] \right].$$

The next step injects the identity in the form $\bar{V}\bar{V}^\top$, which leads to the equivalent reformulation

$$\min_{U, V} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b} \left[ \left\| U_{:b} V_{:b}^\top \bar{V}\bar{V}^\top x - \bar{\Sigma}\bar{V}^\top x \right\|^2 \right] \right].$$

As in the previous case, we can reparametrize the problem ($V \mapsto \bar{V}^\top V$) to obtain

$$\min_{U, V} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b} \left[ \left\| U_{:b} V_{:b}^\top \bar{V}^\top x - \bar{\Sigma}\bar{V}^\top x \right\|^2 \right] \right].$$

Let $k = \operatorname{rank}(\bar{\Sigma}) = \operatorname{rank}(A) \le r$ and $z = \bar{V}^\top x$. Furthermore, let $g = \bar{\Sigma} z$ for any $z \in \mathbb{R}^n$; then $g_{k+1:} = 0$. This, combined with the nested structure of the optimization problem, implies that the optimal solution for $U$ has to be of the form $u_{i, k+1:} = 0$ (with $u_i$ and $v_i$ denoting the $i$-th columns of $U$ and $V$, respectively) for all interesting (non-zero-mapping) directions, i.e., those for which there exists $x \in \mathcal{X}$ such that $v_i^\top \bar{V}^\top x \neq 0$. These are the only interesting solutions, since the case where $v_i^\top \bar{V}^\top x = 0$ for all $x \in \mathcal{X}$ yields a zero mapping on $\mathcal{X}$, which is not of interest and could be dropped, e.g., using the group-lasso penalty discussed in the main part. Therefore, to solve the original problem, we could first solve

$$\min_{U \in \mathbb{R}^{k \times r},\, V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{z} \left[ \mathbb{E}_{b} \left[ \left\| U_{:b} V_{:b}^\top z - \bar{\Sigma}_{:k,:} z \right\|^2 \right] \right]$$

and then reconstruct the corresponding solution of the original problem by appending zero rows to the resulting matrix $U$. By a similar argument, for all non-zero-mapping directions it has to be the case that $v_{i, k+1:} = 0$. Therefore, solving the original minimization reduces to

$$\min_{U \in \mathbb{R}^{k \times r},\, V \in \mathbb{R}^{k \times r}} \; \mathbb{E}_{z} \left[ \mathbb{E}_{b} \left[ \left\| U_{:b} V_{:b}^\top z_{:k} - \bar{\Sigma}_{:k,:k} z_{:k} \right\|^2 \right] \right].$$

This can be further simplified using the reparametrization $V' = \bar{\Sigma}^{-1}_{:k,:k} V$, which leads to

$$\min_{U \in \mathbb{R}^{k \times r},\, V \in \mathbb{R}^{k \times r}} \; \mathbb{E}_{z} \left[ \mathbb{E}_{b} \left[ \left\| \left( U_{:b} V_{:b}^\top - I_k \right) \bar{\Sigma}_{:k,:k} z_{:k} \right\|^2 \right] \right], \tag{8}$$

where $I_k$ is the $k \times k$ identity. If $\mathcal{X}$ is centred around zero, then $\bar{\Sigma}_{:k,:k} z_{:k}$ is also centred around zero, and the above problem is, up to scaling, equivalent to PCA of $\bar{\Sigma}_{:k,:k} z_{:k}$, as shown by Rippel et al. (2014). Since $\bar{\Sigma}$ is a diagonal matrix whose only non-zero entries lie in the upper-left $k \times k$ sub-matrix, PCA on $\bar{\Sigma}_{:k,:k} z_{:k}$ is equivalent to PCA on $\bar{\Sigma} z$ after appending zeros to the obtained principal component vectors. Thus, we can write the equivalent formulation

$$\min_{U \in \mathbb{R}^{m \times r},\, V \in \mathbb{R}^{n \times r}} \; \mathbb{E}_{x \sim \mathcal{X}} \left[ \mathbb{E}_{b} \left[ \left\| \left( U_{:b} V_{:b}^\top - I \right) \bar{\Sigma} \bar{V}^\top x \right\|^2 \right] \right],$$

which is exactly problem (7). Furthermore, let $(\hat{U}, \hat{V})$ belong to the set of optimal solutions of problem (7). Then $U = \bar{U}\hat{U}$, $V = \bar{V}\hat{V}$ belong to the set of optimal solutions of problem (6). This can be proved by reversing our construction and ignoring scaling, since (7) is scaling invariant.
For the case where $\mathcal{X}$ is the uniform distribution on the unit ball, $\bar{\Sigma}_{:k,:k} z_{:k}$ is supported on a $k$-dimensional ellipsoid whose principal axes are the standard basis vectors $\{e_i\}_{i=1}^k$ and whose axis lengths are given by the ordered singular values, i.e., the first basis vector corresponds to the largest singular value. Therefore, its principal component vectors coincide with the basis vectors. Following our construction, one can see that the solution of the original problem is the truncated SVD, up to a scaling factor.

For the case where $A$ is the identity, we have $k = r = m = n$, $\bar{\Sigma}$ is the identity, and $\bar{U} = \bar{V}$. Under this setting, the principal component vectors obtained from (8) correspond to the principal component vectors of $\mathcal{X}$ expressed in the basis given by the columns of $\bar{U}$. As in the previous case, reversing the transformations to return to the original problem, we conclude that the optimal solution of the original problem corresponds to the principal component vectors of $\mathcal{X}$, since we reverse the transformation by $\bar{U}$.

D. Experimental setup

D.1. Datasets

MNIST. The MNIST dataset (LeCun et al., 2010) is a database of 28×28 greyscale handwritten digits, with a training set of 60k examples and a test set of 10k samples.

CIFAR-10. The CIFAR-10 dataset (Krizhevsky et al., 2009) is a computer vision dataset that consists of 32×32 RGB images classified into 10 labels. It is split into 50k training images and 10k test images, balanced across labels.

ImageNet-1k. The ImageNet dataset (ILSVRC) (Deng et al., 2009) is an image classification challenge. The task is to classify a 300×300 RGB image into one of 1000 classes. In total, there are 1.2M training samples and 50k test images.

WMT16. The WMT dataset from statmt is a machine translation dataset, spanning news commentaries and parliament proceedings, that aims to investigate the applicability of machine translation techniques when translating between language pairs. Specifically, we focus on the task of German-English translation of image descriptions, commonly referred to as Multi30k (Elliott et al., 2016). We only utilise the text modality for the translation task. Data is taken straight from torchtext.

D.2. Models

LeNet. LeNet is a simple convolutional network, introduced by LeCun et al. for recognizing handwritten digits (LeCun et al., 2010). It consists of a sequence of two convolutional layers followed by three fully-connected layers. However, we use a ReLU instead of the originally proposed sigmoid activation. The detailed architecture of the network is depicted in Tab. 6.

ResNet. ResNet (He et al., 2016) is a deep neural network whose prominent feature is the existence of skip (or residual) connections, i.e., connections that perform identity mappings and are merged with the output of their target layer through summation. Multiple residual blocks are stacked to form the network. The result is an easier-to-optimise network that offers enhanced accuracy. We use ResNet-18 in our experiments, the architecture of which is depicted in Tab. 7, except for ImageNet, where we use ResNet-50.

Table 6: Detailed architecture of the LeNet-5 model used in our experiments. Each convolution and linear layer is followed by a ReLU activation that is omitted from the table. The shapes of the convolution layers follow (m, n, k, k).
Parameter | Shape | Layer hyper-parameters
layer1.conv1.weight | 1 × 6 × 5 × 5 | stride: 1; padding: 1
pooling.max | N/A | kernel size: 2; stride: 1; dilation: 1
layer2.conv2.weight | 6 × 16 × 5 × 5 | stride: 1; padding: 0; dilation: 1
pooling.max | N/A | kernel size: 2; stride: 2
layer3.fc1.weight | 256 × 120 | N/A
layer4.fc2.weight | 120 × 84 | N/A
layer5.fc3.weight | 84 × 10 | N/A

Table 7: The hybrid ResNet architectures for the CIFAR-10 and ImageNet datasets used in the experiments.

Layer name | ResNet-18 | ResNet-50
conv1 | 3×3, 64, stride 1, padding 1 | 7×7, 64, stride 2, padding 1
conv2_x | [3×3, 64; 3×3, 64] | 3×3 max-pool, stride 2; [1×1, 64; 3×3, 64; 1×1, 256]
conv3_x | [3×3, 128; 3×3, 128] | [1×1, 128; 3×3, 128; 1×1, 512]
conv4_x | [3×3, 256; 3×3, 256] | [1×1, 256; 3×3, 256; 1×1, 1024]
conv5_x | [3×3, 512; 3×3, 512] | [1×1, 512; 3×3, 512; 1×1, 2048]
output | AvgPool, 10-dim FC, Softmax | AvgPool, 1000-dim FC, Softmax

VGG. VGG (Simonyan & Zisserman, 2015) is also a convolutional network; it leverages smaller 3×3 convolutions, which enables deeper architectures than before. For our experiments we use VGG-19, the architecture of which is depicted in Tab. 8.

Table 8: Detailed architecture of the VGG-19 model used in our experiments. Each convolution layer is followed by a BatchNorm layer and a ReLU activation (omitted in the table). The shapes of the convolution layers follow (m, n, k, k).

Parameter | Shape | Layer hyper-parameters
layer1.conv1.weight | 3 × 64 × 3 × 3 | stride: 1; padding: 1
layer2.conv2.weight | 64 × 64 × 3 × 3 | stride: 1; padding: 1
pooling.max | N/A | kernel size: 2; stride: 2
layer3.conv3.weight | 64 × 128 × 3 × 3 | stride: 1; padding: 1
layer4.conv4.weight | 128 × 128 × 3 × 3 | stride: 1; padding: 1
pooling.max | N/A | kernel size: 2; stride: 2
layer5.conv5.weight | 128 × 256 × 3 × 3 | stride: 1; padding: 1
layer6.conv6.weight | 256 × 256 × 3 × 3 | stride: 1; padding: 1
layer7.conv7.weight | 256 × 256 × 3 × 3 | stride: 1; padding: 1
layer8.conv8.weight | 256 × 256 × 3 × 3 | stride: 1; padding: 1
pooling.max | N/A | kernel size: 2; stride: 2
layer9.conv9.weight | 256 × 512 × 3 × 3 | stride: 1; padding: 1
layer10.conv10.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
layer11.conv11.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
layer12.conv12.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
pooling.max | N/A | kernel size: 2; stride: 2
layer13.conv13.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
layer14.conv14.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
layer15.conv15.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
layer16.conv16.weight | 512 × 512 × 3 × 3 | stride: 1; padding: 1
pooling.avg | N/A | kernel size: 2
classifier.weight | 512 × 10 | N/A
classifier.bias | 10 | N/A

Transformers. The Transformer architecture (Vaswani et al., 2017) has lately been revolutionising deep learning. Based on the notion of self-attention, for each input token it produces a weighted combination of the other relevant tokens, weighted by the attention weights. Each attention unit has three weight matrices, namely WQ, WK, WV, for the query, key, and value weights, respectively, producing the corresponding vectors. Attention is defined as the scaled dot product between key and query. For our translation task, we use the architecture depicted in Tabs. 9 and 10.
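Each of the 512×512 projections listed in Tabs. 9 and 10 is a candidate for factorization. As a purely illustrative sketch (ours, not the paper's implementation), a trainable low-rank linear layer with an orderable rank dimension could look as follows; the module name, initialization scale, and default rank are assumptions.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer W ≈ U V^T with an orderable rank dimension (illustrative sketch)."""

    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(in_features, rank) * 0.02)

    def forward(self, x, b=None):
        # Keep only the first b rank components; b=None uses the full rank.
        U, V = self.U[:, :b], self.V[:, :b]
        return (x @ V) @ U.T

# Example: a factorized query projection of the size used in Tab. 9 (512 x 512).
wq = LowRankLinear(512, 512, rank=64)
x = torch.randn(2, 10, 512)             # (batch, tokens, model dim)
print(wq(x, b=16).shape)                # torch.Size([2, 10, 512])
```

Because the rank components are stored left-to-right, truncating to the first b columns of U and V yields the nested subnetworks that the ordered decomposition trains on.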
Table 9: Detailed information of the encoder layer in the Transformer architecture used in our experiments.

Parameter | Shape | Hyper-parameters
embedding | 9521 × 512 | padding index: 1
positional encoding | N/A | N/A
dropout | N/A | p = 0.1
encoder.self-attention.wq (WQ) | 512 × 512 | N/A
encoder.self-attention.wk (WK) | 512 × 512 | N/A
encoder.self-attention.wv (WV) | 512 × 512 | N/A
encoder.self-attention.wo (WO) | 512 × 512 | N/A
encoder.self-attention.dropout | N/A | p = 0.1
encoder.self-attention.layernorm | 512 | ε = 1e-6
encoder.ffn.layer1 | 512 × 2048 | N/A
encoder.ffn.layer2 | 2048 × 512 | N/A
encoder.layernorm | 512 | ε = 1e-6
dropout | N/A | p = 0.1

Table 10: Detailed information of the decoder layer in the Transformer architecture used in our experiments.

Parameter | Shape | Hyper-parameters
embedding | 9521 × 512 | padding index: 1
positional encoding | N/A | N/A
dropout | N/A | p = 0.1
decoder.self-attention.wq (WQ) | 512 × 512 | N/A
decoder.self-attention.wk (WK) | 512 × 512 | N/A
decoder.self-attention.wv (WV) | 512 × 512 | N/A
decoder.self-attention.wo (WO) | 512 × 512 | N/A
decoder.self-attention.dropout | N/A | p = 0.1
decoder.self-attention.layernorm | 512 | ε = 1e-6
decoder.enc-attention.wq (WQ) | 512 × 512 | N/A
decoder.enc-attention.wk (WK) | 512 × 512 | N/A
decoder.enc-attention.wv (WV) | 512 × 512 | N/A
decoder.enc-attention.wo (WO) | 512 × 512 | N/A
decoder.enc-attention.dropout | N/A | p = 0.1
decoder.enc-attention.layernorm | 512 | ε = 1e-6
decoder.ffn.layer1 | 512 × 2048 | N/A
decoder.ffn.layer2 | 2048 × 512 | N/A
decoder.layernorm | 512 | ε = 1e-6
dropout | N/A | p = 0.1

D.3. Hyperparameter Selection

LeNet. We use a standard configuration that is commonly employed for training LeNet models: a step size of 0.01, a momentum of 0.9, and no weight decay. We train for a total of 20 epochs.

VGG and ResNet-18. Similarly, we use a standard configuration that is commonly employed for training VGG and ResNet-18 models: a step size of 0.01, a momentum of 0.9, a weight decay of 1e-4, and a learning-rate schedule with step-size reductions by a factor of 10 at epochs 150 and 250. We train for a total of 300 epochs.

ResNet-50. Similarly, we use a standard configuration that is commonly employed for training ResNet-50 models: a step size of 0.01, a momentum of 0.9, a weight decay of 1e-4, and a learning-rate schedule with step-size reductions by a factor of 10 at epochs 30 and 60. We train for a total of 90 epochs.

Transformers. For the Transformer model, we use the Adam optimizer with an initial learning rate of 0.001, β = (0.9, 0.98), ε = 1e-8, and a batch size of 256. We also apply gradient-norm clipping with a norm bound of 0.25. The entire training takes 400 epochs. For the vanilla warm-up training, we use a warm-up period of Ewu = 10 epochs. We enable label smoothing, weight sharing between the source and target word embeddings, and weight sharing between the target word embedding and the last dense layer. The learning-rate schedule follows directly from the one proposed by Vaswani et al. (2017).

D.4. Deciding Against Decomposition

During inference, if the rank of a given layer is so large that keeping the layer non-decomposed is more efficient, we opt not to decompose that particular layer.
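One natural instantiation of this decision rule (our reading, for illustration) is a parameter-count comparison: a rank-r factorization of an m×n weight stores r(m+n) parameters, so it only pays off when that is smaller than mn.

```python
def should_decompose(m: int, n: int, r: int) -> bool:
    """Keep the factorized form U V^T only if it is cheaper than the dense m x n weight."""
    return r * (m + n) < m * n

# Example: a 512x512 projection is worth decomposing only up to rank 255.
print(should_decompose(512, 512, 255))   # True
print(should_decompose(512, 512, 256))   # False
```

For a linear layer the MAC count scales with the same quantities, so the same check also captures the compute side of the decision.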
E. Extended evaluation

E.1. MAESTRO Recovers Correct Ordering

In the main text, we pointed out that SVD fails to consider the data. Indeed, even in the case of a linear NN, the acquired singular vectors may exhibit an incorrect ordering. To illustrate this problem, we provide a simple example in which we use a matrix A with rank 3. We organize the dataset X such that the third singular vector has the highest importance, followed by the second, and then the first singular vector, in decreasing order of significance. It is clear that SVD does not consider the data and, as a result, cannot capture this behaviour. Below (in Fig. 6), we demonstrate how MAESTRO is able to correctly discern the ordering.

[Plot omitted: estimated singular values vs. training iterations (0-7000); legend k (p): 1 (0.2), 2 (0.3), 3 (0.5), 4 (0.7), 5 (0.8), 6 (1.0).]
Figure 6: Verification that MAESTRO recovers the correct order of importance. The target mapping is of rank 3, and the dataset is constructed in such a way that the singular vectors have a reversed order of importance. p and k stand for the relative and actual rank, respectively.

E.2. Rank Adaptivity of MAESTRO to Data Complexity

So far, we have found that different models can have different ranks on different datasets. However, we could not conclude that more complex tasks lead to higher ranks, because the model architecture is not invariant, i.e., we cannot compare ranks between layers of different dimensionality.

Figure 7: MAESTRO adaptivity when the PCA dimensionality drops during training. The plot displays the estimates of the singular values. The data distribution initially has 3 directions, so the top 3 ranks are expected to converge to one and the rest to zero. After removing one direction, the rank drops to 2, as the data complexity changes. p and k stand for the relative and actual rank, respectively.

To test this hypothesis in isolation, we designed a simplified experiment on a linear autoencoder with the same setup as considered in Fig. 2b. To showcase the adaptivity of MAESTRO to changing data, we start by training with data that have intrinsic dimension 3. In the middle of training (iteration 1250 of 2500), we removed one dimension using a projection, thus simplifying the data. In the resulting graph (Fig. 7), we see that while the rank had initially converged to 3, it drops to 2 as the data complexity changes. For completeness, this adaptivity is further showcased in Appendix E.1, where we illustrated how SVD fails to consider data-centric factors, whereas MAESTRO recovers the correct order of importance.

E.3. Training Behaviour of MAESTRO

For completeness, we also include an extended version of Fig. 4 from the main paper, where we presented the training dynamics of MAESTRO. Figs. 8, 9 and 10 present similar plots, but across both MNIST and CIFAR-10. Specifically, Fig. 8 illustrates the evolution of the total rank throughout the training steps. We observe that the ranks are pruned incrementally. This aligns with the observations made during Pufferfish (Wang et al., 2021) training, where the authors suggested full-rank warm-up training to enhance the final model performance. In our case, this heuristic is unnecessary, as MAESTRO prunes ranks automatically. Fig. 9 shows the ranks across layers post-training. An intriguing trend is observed: the ranks are nested for increasing λgl, suggesting a potential inherent ordering of ranks not only within each layer but possibly also a global one. We provide a preliminary exploration of this pattern in the subsequent section and intend to probe it more deeply in future studies; we believe it may improve the rank selection and sampling process. Finally, Fig. 10 portrays the evolution of the training loss. Our premise that sampling does not negatively affect training is validated by the empirical performance.
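For reference, the following is a minimal sketch of a hierarchical (nested-tail) group-lasso penalty over rank components together with a thresholding step of the kind that drives the incremental rank pruning discussed above. The exact grouping, penalty weighting, and pruning threshold used by MAESTRO may differ; treat the form below as an assumption for illustration.

```python
import torch

def hierarchical_group_lasso(U: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Sum of norms of the nested tail groups {(U[:, k:], V[:, k:])}_k,
    which pushes the trailing rank components towards zero first."""
    r = U.shape[1]
    penalty = U.new_zeros(())
    for k in range(r):
        penalty = penalty + torch.sqrt((U[:, k:] ** 2).sum() + (V[:, k:] ** 2).sum())
    return penalty

def prune_ranks(U: torch.Tensor, V: torch.Tensor, eps: float = 1e-3):
    """Drop rank components whose joint norm has fallen below eps
    (with ordered importance, these are the trailing ones)."""
    norms = ((U ** 2).sum(0) + (V ** 2).sum(0)).sqrt()   # per-rank norms
    keep = int((norms > eps).sum())                      # illustrative: keep the leading ranks
    return U[:, :keep], V[:, :keep]

# Usage inside a training step (task_loss, lam, and low_rank_layers are assumed names):
# loss = task_loss + lam * sum(hierarchical_group_lasso(l.U, l.V) for l in low_rank_layers)
```

Shrinking the rank space during training in this way is what reduces the total training MACs and parameters reported in the tables that follow.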
[Plots omitted: total rank vs. epochs for (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10, with one curve per λgl value.]
Figure 8: Total rank (sum of the per-layer ranks, Σ_{i=1}^d r_i) progression during training.

[Plots omitted: rank per layer index after training for (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10, with the full-rank reference and one curve per λgl value.]
Figure 9: Ranks r_i across the different layers after training.

E.4. Model Size-Accuracy Trade-Off at Training and Deployment Time

In addition to the original illustrations, we present an extended interpretation of Fig. 5, where we depict diverse strategies to balance model size and accuracy during model training and deployment. In Fig. 11, we demonstrate the effective pruning of MAESTRO (λgl = 0) for deployment, utilizing the greedy search methodology discussed in Section 3.4. This is juxtaposed with the greedy pruning of a model that was not originally factorized but was later factorized through SVD. Our findings reveal that this straightforward baseline does not match the performance of MAESTRO's learned decomposition, leading to a considerable performance drop. Subsequently, Fig. 12 displays the final accuracy and the number of model parameters corresponding to various hierarchical group-lasso penalties. This results in an optimal compromise between latency and accuracy for both the training and inference stages. It is worth noting, though, that each model was trained separately, in contrast to greedy pruning, which demands just a single training round. Additionally, we scrutinize the training expense of each model illustrated in Fig. 12; the results are exhibited in Tables 11, 12, 13, 14 and 15, where we display the final accuracy of the model, the MACs and the number of parameters for inference, and the relative total training cost in terms of the number of model parameters and MACs compared to the non-factorized model. Interestingly, smaller models are not only advantageous in terms of inference efficiency, but they can also be trained at a small fraction of the cost required by full-rank models. On the downside, the smallest models incur a non-negligible reduction in performance.

[Plots omitted: training loss vs. epochs for (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10.]
Figure 10: Convergence of MAESTRO with λgl = 0.
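The greedy deployment-time pruning referenced in the E.4 discussion above can be summarized in pseudocode-like form. This is a generic sketch of ours, not the exact procedure of Section 3.4: the step size, the scoring by validation accuracy, and the parameter budget used as a stopping criterion are all assumptions.

```python
def greedy_rank_pruning(layer_ranks, evaluate, param_budget, step=1):
    """Generic greedy rank reduction (illustrative): repeatedly shrink the rank of the
    layer whose reduction hurts validation accuracy the least, until the model fits
    the parameter budget. `layer_ranks` maps layer name -> current rank and
    `evaluate(ranks)` returns (validation_accuracy, num_parameters)."""
    ranks = dict(layer_ranks)
    acc, n_params = evaluate(ranks)
    while n_params > param_budget:
        best = None
        for name, r in ranks.items():
            if r <= step:
                continue                     # keep at least one rank component per layer
            trial = dict(ranks)
            trial[name] = r - step
            trial_acc, trial_params = evaluate(trial)
            if best is None or trial_acc > best[0]:
                best = (trial_acc, trial_params, trial)
        if best is None:
            break                            # nothing left to shrink
        acc, n_params, ranks = best
    return ranks, acc
```

Because the factors are ordered by importance, removing trailing rank components requires no retraining, which is what enables the post-hoc trade-off curves shown in Fig. 11 and Fig. 13.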
[Plots omitted: test accuracy vs. number of parameters for MAESTRO and SVD pruning on (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10.]
Figure 11: Accuracy-latency trade-off comparing MAESTRO (λgl = 0) and SVD.

Lastly, we delve deeper into the observation of nested ranks with increasing λgl. Fig. 13 outlines the performance of MAESTRO (λgl = 0) across the various ranks chosen by the smaller models MAESTRO (λgl > 0). We observe that MAESTRO (λgl = 0) delivers impressive results: for example, we can reduce its parameters by 10x for VGG while preserving an accuracy of 87.7%, without any fine-tuning, simply by leveraging the rank structure from separate runs. For LeNet, a reduction in model size by a factor of three is achievable without sacrificing accuracy. Lastly, for ResNet-18 the reduction is 1.7x. As highlighted earlier, we aim to delve deeper into this subject in future studies.

E.5. Detailed Comparison with Baselines

Tab. 16 presents the details of MAESTRO's performance compared to the selected baselines leveraging pruning, quantization, and low-rank techniques, as presented in Sec. 5.2 for CIFAR-10. These numbers, along with the operating points from Tabs. 12 and 13, are illustrated in Fig. 3.

[Plots omitted: test accuracy vs. number of parameters for MAESTRO with hierarchical regularization on (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10.]
Figure 12: Impact of the hierarchical group lasso on the accuracy-memory trade-off. Exact values are provided in Tables 11, 12 and 13, respectively.

Table 11: LeNet performance on MNIST for different regularization parameters. The last column displays the relative total training cost in terms of the number of Multiply-Accumulate operations (MACs) and model parameters, compared to the non-factorized model.

Variant | Acc. (%) | MACs (Inf.) | Params. (Inf.) | Rel. MACs / Params. (Train.)
Non-Factorized | 98.99 ± 0.09 | 281640 ± 0 (1.00×) | 44426 ± 0 (1.00×) | 1.00 / 1.00
MAESTRO (λgp = 0) | 99.06 ± 0.09 | 281640 ± 0 (1.00×) | 44426 ± 0 (1.00×) | 1.14 / 1.49
MAESTRO (λgp = 8e-5) | 98.91 ± 0.09 | 268577 ± 389 (0.95×) | 31363 ± 0 (0.71×) | 1.08 / 1.14
MAESTRO (λgp = 16e-5) | 98.92 ± 0.05 | 255369 ± 217 (0.91×) | 44426 ± 217 (0.41×) | 1.06 / 0.80
MAESTRO (λgp = 32e-5) | 98.31 ± 0.39 | 237084 ± 6268 (0.84×) | 18155 ± 271 (0.26×) | 0.93 / 0.53
MAESTRO (λgp = 64e-5) | 98.20 ± 0.49 | 178165 ± 19098 (0.63×) | 7996 ± 662 (0.18×) | 0.77 / 0.33
MAESTRO (λgp = 128e-5) | 97.92 ± 0.22 | 131789 ± 8965 (0.47×) | 6375 ± 77 (0.14×) | 0.54 / 0.21
MAESTRO (λgp = 256e-5) | 96.65 ± 0.14 | 99969 ± 6252 (0.35×) | 5293 ± 214 (0.12×) | 0.39 / 0.14
Table 12: ResNet-18 performance on CIFAR-10 for different regularization parameters. The last column displays the relative total training cost in terms of the number of Multiply-Accumulate operations (MACs) and model parameters, compared to the non-factorized model.

Variant | Acc. (%) | GMACs (Inf.) | Params. (M) (Inf.) | Rel. MACs / Params. (Train.)
Non-Factorized | 93.86 ± 0.20 | 0.56 ± 0 (1.00×) | 11.2 ± 0 (1.00×) | 1.00 / 1.00
MAESTRO (λgp = 0) | 94.04 ± 0.10 | 0.56 ± 0 (1.00×) | 11.2 ± 0 (1.00×) | 1.10 / 1.13
MAESTRO (λgp = 4e-6) | 94.22 ± 0.16 | 0.55 ± 0.0047 (1.00×) | 11.1 ± 0.030 (0.99×) | 1.09 / 1.10
MAESTRO (λgp = 8e-6) | 94.09 ± 0.01 | 0.49 ± 0.0002 (0.89×) | 7.41 ± 0.004 (0.66×) | 1.00 / 0.85
MAESTRO (λgp = 16e-6) | 94.19 ± 0.07 | 0.39 ± 0.0008 (0.70×) | 4.08 ± 0.020 (0.37×) | 0.83 / 0.58
MAESTRO (λgp = 32e-6) | 93.97 ± 0.25 | 0.25 ± 0.0013 (0.45×) | 2.19 ± 0.007 (0.20×) | 0.60 / 0.36
MAESTRO (λgp = 64e-6) | 93.86 ± 0.11 | 0.15 ± 0.0006 (0.27×) | 1.23 ± 0.004 (0.11×) | 0.39 / 0.22
MAESTRO (λgp = 128e-6) | 93.37 ± 0.07 | 0.094 ± 0.0006 (0.17×) | 0.79 ± 0.009 (0.07×) | 0.25 / 0.13
MAESTRO (λgp = 256e-6) | 92.48 ± 0.04 | 0.064 ± 0.0002 (0.12×) | 0.54 ± 0.006 (0.05×) | 0.16 / 0.08
MAESTRO (λgp = 512e-6) | 91.14 ± 0.16 | 0.044 ± 0.0004 (0.08×) | 0.37 ± 0.007 (0.03×) | 0.11 / 0.05
MAESTRO (λgp = 1024e-6) | 89.55 ± 0.30 | 0.032 ± 0.0002 (0.06×) | 0.27 ± 0.007 (0.02×) | 0.07 / 0.03

Table 13: VGG-19 performance on CIFAR-10 for different regularization parameters. The last column displays the relative total training cost in terms of the number of Multiply-Accumulate operations (MACs) and model parameters, compared to the non-factorized model.

Variant | Acc. (%) | GMACs (Inf.) | Params. (M) (Inf.) | Rel. MACs / Params. (Train.)
Non-Factorized | 92.94 ± 0.17 | 0.40 ± 0 (1.00×) | 20 ± 0 (1.00×) | 1.00 / 1.00
MAESTRO (λgp = 0) | 93.06 ± 0.17 | 0.40 ± 0 (1.00×) | 20 ± 0 (1.00×) | 1.10 / 1.12
MAESTRO (λgp = 4e-6) | 93.33 ± 0.08 | 0.39 ± 0.0017 (0.97×) | 18.8 ± 0 (0.94×) | 1.06 / 1.04
MAESTRO (λgp = 8e-6) | 93.27 ± 0.33 | 0.30 ± 0.0017 (0.76×) | 9.91 ± 0.008 (0.49×) | 0.90 / 0.73
MAESTRO (λgp = 16e-6) | 93.13 ± 0.07 | 0.21 ± 0.0014 (0.53×) | 4.66 ± 0.052 (0.23×) | 0.69 / 0.46
MAESTRO (λgp = 32e-6) | 93.10 ± 0.10 | 0.13 ± 0.0009 (0.33×) | 2.20 ± 0.025 (0.11×) | 0.47 / 0.27
MAESTRO (λgp = 64e-6) | 92.70 ± 0.34 | 0.08 ± 0.0005 (0.20×) | 1.17 ± 0.010 (0.06×) | 0.30 / 0.16
MAESTRO (λgp = 128e-6) | 92.34 ± 0.12 | 0.05 ± 0.0005 (0.13×) | 0.72 ± 0.002 (0.04×) | 0.19 / 0.09
MAESTRO (λgp = 256e-6) | 91.12 ± 0.19 | 0.04 ± 0.0007 (0.09×) | 0.50 ± 0.023 (0.02×) | 0.12 / 0.05
MAESTRO (λgp = 512e-6) | 88.53 ± 0.13 | 0.03 ± 0.0003 (0.06×) | 0.35 ± 0.003 (0.02×) | 0.08 / 0.03

Table 14: Transformer performance on Multi30k for different regularization parameters. The last column displays the relative total training cost in terms of the number of Multiply-Accumulate operations (MACs) and model parameters, compared to the non-factorized model.

Variant | Acc. (%) | Ppl. | GMACs (Inf.) | Params. (M) (Inf.) | Rel. MACs / Params. (Train.)
Non-Factorized | 65.33 ± 1.13 | 9.85 ± 0.10 | 1.370 ± 0.0000 (1.00×) | 53.9 ± 0.000 (1.00×) | 1.00 / 1.00
MAESTRO (λgp = 0.32) | 61.30 ± 0.26 | 12.99 ± 0.31 | 1.125 ± 0.0030 (0.82×) | 45.1 ± 0.101 (0.84×) | 1.03 / 1.14
MAESTRO (λgp = 0.64) | 63.78 ± 0.14 | 9.37 ± 0.32 | 0.957 ± 0.0112 (0.70×) | 39.1 ± 0.413 (0.73×) | 0.95 / 1.05
MAESTRO (λgp = 1.28) | 66.14 ± 0.08 | 7.02 ± 0.17 | 0.570 ± 0.0088 (0.42×) | 25.3 ± 0.315 (0.47×) | 0.75 / 0.86
MAESTRO (λgp = 2.56) | 66.08 ± 0.09 | 6.90 ± 0.07 | 0.248 ± 0.0032 (0.18×) | 13.8 ± 0.113 (0.26×) | 0.47 / 0.58
MAESTRO (λgp = 5.12) | 57.70 ± 0.13 | 13.97 ± 0.43 | 0.123 ± 0.0002 (0.09×) | 9.3 ± 0.001 (0.17×) | 0.28 / 0.39
Table 15: ResNet-50 performance on ImageNet-1k for different regularization parameters. The last column displays the relative total training cost in terms of the number of Multiply-Accumulate operations (MACs) and model parameters, compared to the non-factorized model.

Variant | Acc. (%) | GMACs (Inf.) | Params. (M) (Inf.) | Rel. MACs / Params. (Train.)
No decomposition:
Non-Factorized | 76.00 | 4.12 (1.00×) | 25.56 (1.00×) | 1.00 / 1.00
Not decomposing the first four blocks and the last layer:
MAESTRO (λgp = 2e-6) | 76.04 | 3.43 (0.83×) | 14.02 (0.55×) | 0.87 / 0.64
MAESTRO (λgp = 4e-6) | 75.74 | 3.39 (0.82×) | 13.11 (0.51×) | 0.85 / 0.59
MAESTRO (λgp = 8e-6) | 75.15 | 3.21 (0.78×) | 11.46 (0.45×) | 0.83 / 0.55
Decomposing all layers:
MAESTRO (λgp = 0) | 72.82 | 4.12 (1.00×) | 25.56 (1.00×) | 1.22 / 1.24
MAESTRO (λgp = 1e-6) | 72.81 | 3.62 (0.88×) | 18.77 (0.73×) | 1.00 / 0.87
MAESTRO (λgp = 2e-6) | 72.07 | 2.66 (0.65×) | 11.54 (0.45×) | 0.76 / 0.59
MAESTRO (λgp = 4e-6) | 71.54 | 2.01 (0.49×) | 9.21 (0.36×) | 0.57 / 0.57
MAESTRO (λgp = 8e-6) | 71.02 | 1.69 (0.41×) | 7.21 (0.28×) | 0.50 / 0.39

[Plots omitted: test accuracy vs. number of parameters for the pruned MAESTRO model and for MAESTRO models trained with varying λ, on (a) LeNet on MNIST, (b) ResNet-18 on CIFAR-10, (c) VGG-19 on CIFAR-10.]
Figure 13: MAESTRO with progressive pruning to showcase the nested rank-importance structure. The original models correspond to the evaluation in Fig. 12, and the pruned models are based on MAESTRO with λgl = 0; they are pruned using the same ranks as selected by MAESTRO with λgl > 0.

Table 16: MAESTRO vs. baselines on CIFAR-10.

Variant | Model | Acc. (%) | GMACs | Params. (M)
Non-factorized | ResNet-18 | 93.86 ± 0.20 | 0.56 | 11.17
Pufferfish† | ResNet-18 | 94.17 | 0.22 | 3.336
Cuttlefish† | ResNet-18 | 93.47 | 0.3 | 3.108
IMP† | ResNet-18 | 92.12 | - | 0.154
Rare Gems† | ResNet-18 | 92.83 | - | 0.076
XNOR-Net†‡ | ResNet-18 | 90.06 | - | 0.349
MAESTRO (λgp = 16e-6) | ResNet-18 | 94.19 ± 0.07 | 0.39 ± 0.00 | 4.08 ± 0.02
MAESTRO (λgp = 64e-6) | ResNet-18 | 93.86 ± 0.11 | 0.15 ± 0.00 | 1.23 ± 0.00
Non-factorized | VGG-19 | 92.94 ± 0.17 | 0.40 | 20.56
Pufferfish† | VGG-19 | 92.69 | 0.29 | 8.37
Cuttlefish† | VGG-19 | 93.39 | 0.15 | 2.36
Rare Gems† | VGG-19 | 86.28 | - | 5.04
IMP† | VGG-19 | 92.86 | - | 5.04
XNOR-Net†‡ | VGG-19 | 88.94 | - | 0.64
Spectral Init.† | VGG-19 | 83.27 | - | 0.4
MAESTRO (λgp = 32e-6) | VGG-19 | 93.10 ± 0.10 | 0.13 ± 0.00 | 2.20 ± 0.03
MAESTRO (λgp = 512e-6) | VGG-19 | 88.53 ± 0.13 | 0.03 ± 0.00 | 0.35 ± 0.00

† Results from the original work. ‡ XNOR-Net employs binary weights and activations; although the overall number of trainable parameters remains the same as in the vanilla network, each model weight is quantized from 32-bit to 1-bit. Therefore, we report a compression rate of 3.125% (1/32).