Journal of Machine Learning Research 24 (2023) 1-49. Submitted 11/20; Revised 7/22; Published 1/23.

Cluster-Specific Predictions with Multi-Task Gaussian Processes

Arthur Leroy (arthur.leroy.pro@gmail.com), Department of Computer Science, The University of Manchester, Manchester, United Kingdom

Pierre Latouche (pierre.latouche@u-paris.fr), Université Paris Cité, CNRS, MAP5 UMR 8145, F-75006 Paris, France; Mines ParisTech, Centre de Géosciences, PSL Research University, F-77300 Fontainebleau, France; Université Clermont Auvergne, CNRS, LMBP UMR 6620, Aubière, France

Benjamin Guedj (benjamin.guedj@inria.fr), Centre for Artificial Intelligence and Department of Computer Science, University College London & Inria, London, United Kingdom and France

Servane Gey (servane.gey@u-paris.fr), Université Paris Cité, CNRS, MAP5 UMR 8145, F-75006 Paris, France

Editor: David Wipf

Abstract

A model involving Gaussian processes (GPs) is introduced to simultaneously handle multi-task learning, clustering, and prediction for multiple functional data. This procedure acts as a model-based clustering method for functional data as well as a learning step for subsequent predictions for new tasks. The model is instantiated as a mixture of multi-task GPs with common mean processes. A variational EM algorithm is derived for dealing with the optimisation of the hyper-parameters along with the hyper-posterior estimation of latent variables and processes. We establish explicit formulas for integrating the mean processes and the latent clustering variables within a predictive distribution, accounting for uncertainty in both aspects. This distribution is defined as a mixture of cluster-specific GP predictions, which enhances the performance when dealing with group-structured data. The model handles irregular grids of observations and offers different hypotheses on the covariance structure for sharing additional information across tasks. The performances on both clustering and prediction tasks are assessed through various simulated scenarios and real data sets. The overall algorithm, called MagmaClust, is publicly available as an R package.

Keywords: Gaussian processes mixture, curve clustering, multi-task learning, variational EM, cluster-specific predictions

©2023 Arthur Leroy, Pierre Latouche, Benjamin Guedj and Servane Gey. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v24/20-1321.html.

1. Introduction

The classic context of regression aims at inferring the underlying mapping function associating input to output data. In a probabilistic framework, some strategies assume that this function is drawn from a prior Gaussian process (GP). According to Rasmussen and Williams (2006), a Gaussian process can be defined as a collection of random variables (indexed over a continuum), any finite number of which have a joint Gaussian distribution. From this definition, we may enforce some properties for the target function solely by characterising the mean and covariance parameters of the process. Since GPs are an example of kernel methods, a broad range of assumptions can be expressed through the definition of the covariance function. We refer to Duvenaud (2014) for a comprehensive review. Despite undeniable advantages, natural implementations of GPs scale cubically with the number of data points, which constitutes a major drawback in many applications.
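To make this definition concrete, the short R sketch below evaluates a covariance kernel on a finite grid and draws sample paths from the resulting joint Gaussian; the exponentiated quadratic kernel used here is formally introduced in Section 2.4, and the grid, hyper-parameter values, and jitter term are illustrative choices rather than settings from the paper. The Cholesky factorisation of the N × N covariance matrix is also the operation behind the cubic cost mentioned above.

```r
# Minimal sketch: function values of a GP at any finite set of inputs are
# jointly Gaussian, with covariance given by a kernel (all values illustrative).
eq_kernel <- function(x, y, v = 1, l = 1) {
  v^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))
}

t_grid <- seq(0, 10, length.out = 100)                  # finite set of inputs
K_mat  <- eq_kernel(t_grid, t_grid) + 1e-8 * diag(100)  # jitter for stability

# Drawing sample paths requires a Cholesky factorisation: the O(N^3) bottleneck.
R <- chol(K_mat)
samples <- t(R) %*% matrix(rnorm(100 * 3), ncol = 3)    # 3 draws from the prior
matplot(t_grid, samples, type = "l", lty = 1, xlab = "t", ylab = "f(t)")
```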
Thereby, the early literature focused on deriving tractable approximations to mitigate this problem (Schwaighofer et al., 2004; Snelson and Ghahramani, 2005; Titsias, 2009; Hensman et al., 2013a). Subsequent reviews (Quiñonero-Candela et al., 2007; Bauer et al., 2016) have also provided standardised formulations and comparisons on this topic. Besides, several approximations have been proposed (Neal, 1997) and implemented (Rasmussen and Nickisch, 2010; Vanhatalo et al., 2013) for tackling the issue of non-Gaussian errors and adapting GPs to a broad variety of likelihoods. Since a GP corresponds to a probability distribution over a functional space, alternative approaches for modelling functional data (Ramsay and Silverman, 2005) should also be mentioned, in particular for our clustering purpose. Unsupervised learning of functional data, also called curve clustering, focuses on the definition of sub-groups of curves that make sense according to an appropriate measure of similarity. When dealing with functional data, the concept of basis function expansion is of paramount importance for defining smooth and manageable representations of the data. Such a representation allows the adaptation of multivariate methods such as k-means, in combination with B-spline bases for instance (Abraham et al., 2003), to handle curve clustering problems. Different bases can be used, such as Fourier or wavelets (Giacofci et al., 2013), according to the context and the nature of the signal. Besides, model-based clustering methods aim at defining probabilistic techniques for this task, and many approaches (Jiang and Serban, 2012; Jacques and Preda, 2013) have been proposed in this sense over the past decade. In particular, the algorithms funHDDC (Bouveyron and Jacques, 2011) and funFEM (Bouveyron et al., 2015) establish a mixture model where representative coefficients of the curves are supposed to come from cluster-specific Gaussian distributions. Furthermore, the authors in Schmutz et al. (2020) introduced an extension to the case of multivariate functional data. A comprehensive review (Jacques and Preda, 2014) has been proposed to discuss and compare the major approaches of this active research area. We can also mention recent works leveraging generalised Bayesian predictors and PAC-Bayes for learning and clustering streams of data (Li et al., 2018; Li and Guedj, 2021), which later inspired a work on clustering (Cohen-Addad et al., 2021). In line with the previous methods that simultaneously analyse multiple curves, we also introduce a framework that takes advantage of similarities between resembling data. The multi-task paradigm consists in using data from several tasks (also called batches or individuals) to improve the learning or predictive capacities compared to an isolated model. It was introduced by Caruana (1997) and then adapted in many fields of machine learning. An initial GP adaptation (Schwaighofer et al., 2004) came as a hierarchical Bayesian model using an expectation-maximisation (EM) algorithm for learning, and a similar approach can be found in Shi et al. (2005). Another hierarchical formulation using a GP to model the mean parameter of another GP was later proposed in Hensman et al. (2013b). Such modelling assumptions resemble those of the present paper, although the strategies used for learning and prediction largely differ. Meanwhile, Yu et al.
(2005) offered an extensive study of the relationships between the linear model and GPs to develop a multi-task formulation. More recently, the expression multi-task GP has been coined by Bonilla et al. (2008) for referring to a covariance structure involving inputs and tasks in two separate matrices. Some further developments on this approach were proposed (Hayashi et al., 2012; Rakitsch et al., 2013; Chen et al., 2020) and we can also mention the work of Swersky et al. (2013) on Bayesian hyper-parameter optimisation in such models. Generally, as presented in the review Alvarez et al. (2012) which favours the term multi-output GP, all these frameworks can be expressed as specific cases of the linear model of coregionalisation (LMC) introduced by Goovaerts (1997) in the field of geostatistics. Finally, let us emphasise the algorithm Magma from Leroy et al. (2022) that recently proposed a different multi-task paradigm for GPs, by transferring information through a latent mean process rather than the covariance structure, an intuition that partially appeared before in Shi et al. (2007). This approach offers enhanced performances in forecasting while scaling linearly in the number of tasks, which is noticeably lower than the previous multi-output methods which generally bear a cubic complexity. However, the assumption of a unique mean process might happen to be too restrictive and could benefit from a more general formulation. For instance, Shi and Wang (2008) proposed an idea close to our following model by introducing a curve clustering component to a hybrid splines-GPs multi-task framework, although their approach does not fully account for uncertainty and cannot handle irregular measurements. Moreover, no implementation has been released for their algorithm, and by deriving a unified multi-task GP framework that is more general, we aim at offering to practitioners a powerful tool for tackling the simultaneous clustering and prediction of multiple functional data. Our contributions. The present paper contributes a significant extension of Magma (Leroy et al., 2022), by introducing a clustering component into the procedure. To this end, (i) we introduce a more general model involving multiple mean GPs, each one being associated with a particular cluster. These processes represent the prior mean trend, possibly different from one cluster to another, that is associated with an individual covariance structure for each functional data. Moreover, we propose 4 different modelling hypotheses regarding the kernels hyper-parameters of the GPs. (ii) We derive a variational expectation-maximisation (VEM) algorithm called Magma Clust (available as an R package on the CRAN and at https://github.com/Arthur Leroy/Magma Clust R) for estimating the hyper-parameters along with the hyper-posterior distributions of the mean processes and latent clustering variables. A variational BIC criterion is proposed to estimate the number of clusters. (iii) We enrich this learning step with an additional EM algorithm and analytical formulas to determine both clusters probabilities and predictive distributions for any new, partially observed, individual. The final multi-task prediction can be expressed in terms of cluster-specific distributions or as an overall GPs mixture. The algorithmic complexity of learning and prediction steps are discussed as well. (iv) We illustrate the advantages of our approach on synthetic and three real-life data sets. 
The results exhibit that MagmaClust outperforms state-of-the-art alternatives on both curve clustering and prediction tasks, in particular for group-structured data sets.

Outline of the paper. We introduce the multi-task Gaussian processes mixture model in Section 2, along with notation. Section 3 is devoted to the inference procedure, with a Variational Expectation-Maximisation (VEM) algorithm to estimate hyper-parameters and approximate hyper-posterior distributions along with mixture proportions. We leverage this strategy in Section 4 and derive both mixture and cluster-specific GP prediction formulas, for which we provide an analysis along with computational costs in Section 5. The performances of our algorithm for clustering and prediction purposes are illustrated in Section 6 with a series of experiments on both synthetic and real-life data sets, along with comparisons to competing state-of-the-art algorithms. We close with a summary of our work in Section 7. All proofs of original results are deferred to Section 8.

2. Modelling

Before diving into modelling considerations, let us provide a motivational example used throughout the article to illustrate the kind of problems we expect to tackle.

2.1 Motivation

Assume that we have observed results of swimming competitions for thousands of athletes from 10 to 20 years old. In order to improve talent detection, one might be interested in using those data to forecast future performances for new young swimmers (e.g., observed only between 10 and 14 years old). Such a data set is composed of thousands of irregular age-performance time series, where each swimmer has a specific number and specific locations of data points, as illustrated in Figure 1.

Figure 1: Time series representing the evolution of performances in 100m freestyle competitions between 10 and 20 years for 5 different swimmers, differentiated by colours.

These examples come from a real data set, thoroughly described in Section 6, which was originally the applicative motivation for developing the methods described in the present paper. The following multi-task GPs framework is tailored to simultaneously allocate individuals into clusters while learning model parameters, and then provide probabilistic predictions for the future performances of any young swimmer, by sharing information between resembling individuals through common mean trends.

2.2 Notation

To remain consistent with this illustrative example, we refer throughout to the input variables as timestamps and use the term individual as a synonym of batch or task. However, although the temporal formulation helps intuition, the present framework still applies to the wide range of inputs one can usually think of in GP applications. As we suppose the data set to be composed of point-wise observations from multiple functions, the set of all indices is denoted by I ⊂ ℕ, which in particular contains {1, . . . , M}, the indices of the observed individuals (i.e. the training set). Since the input values are defined over a continuum, let us name T this input space (we can assume T ⊂ ℝ here for simplicity). Moreover, since the following model is defined for clustering purposes, the set of indices K = {1, . . . , K} refers to the K different groups of individuals. For the sake of concision, the notation is shortened as follows: for any object x, {x_i}_i = {x_1, . . . , x_M} and {x_k}_k = {x_1, . . . , x_K}.
We assume data is collected from M different sources, such that a set of N_i input-output values {(t_i^1, y_i(t_i^1)), . . . , (t_i^{N_i}, y_i(t_i^{N_i}))} constitutes the observations for the i-th individual. Below follows additional convenient notation:
- t_i = {t_i^1, . . . , t_i^{N_i}}, the set of timestamps for the i-th individual,
- y_i = y_i(t_i), the vector of outputs for the i-th individual,
- t = ⋃_{i=1}^M t_i, the pooled set of all timestamps among individuals,
- N = card(t), the total number of observed timestamps.

Let us stress that the input values may vary both in number and location among individuals, and we refer to the case where t_i = t, ∀i ∈ I, as a common grid of timestamps. Otherwise, we call it an uncommon grid. Besides, in order to define a GP mixture model, a latent binary random vector Z_i = (Z_i1, . . . , Z_iK) needs to be associated with each individual, indicating to which cluster it belongs. Namely, if the i-th individual comes from the k-th cluster, then Z_ik = 1 and 0 otherwise. Moreover, we assume these latent variables to come from the same multinomial distribution: Z_i ∼ M(1, π), ∀i ∈ I, with a vector of mixing proportions π = (π_1, . . . , π_K) and ∑_{k=1}^K π_k = 1.

2.3 Model and Assumptions

Assuming that the i-th individual belongs to the k-th cluster, we can define its functional expression as the sum of a cluster-specific mean process and an individual-specific centred process:

y_i(t) = µ_k(t) + f_i(t) + ε_i(t), ∀t ∈ T,

where:
- µ_k(·) ∼ GP(m_k(·), c_{γk}(·,·)) is the common mean process of the k-th cluster,
- f_i(·) ∼ GP(0, ξ_{θi}(·,·)) is the specific process of the i-th individual,
- ε_i(·) ∼ GP(0, σ_i² I) is the error term.

This general model depends upon several mean and covariance parameters, fixed as modelling choices, and hyper-parameters to be estimated:
- ∀k ∈ K, m_k(·) is the prior mean function of the k-th cluster,
- ∀k ∈ K, c_{γk}(·,·) is the covariance kernel with hyper-parameters γ_k,
- ∀i ∈ I, ξ_{θi}(·,·) is the covariance kernel with hyper-parameters θ_i,
- ∀i ∈ I, σ_i² ∈ ℝ⁺ is the noise variance associated with the i-th individual,
- ∀i ∈ I, we define the shorthand ψ_{θi,σi²}(·,·) = ξ_{θi}(·,·) + σ_i² I,
- Θ = {{γ_k}_k, {θ_i}_i, {σ_i²}_i, π}, the set of all hyper-parameters of the model.

One can note that we assume here the error term to be individual-specific, although we could also assume it to be cluster-specific and thus indexed by k. Such a choice would result in a valid model since the upcoming developments remain tractable if we substitute ε_k for ε_i everywhere, and associate σ_k² I with c_{γk}(·,·) instead of ξ_{θi}(·,·). A discussion about additionally available assumptions on the covariance structures follows in Section 2.4. In this paper, we seek an estimation for Θ among the above quantities, whereas the other objects are pre-specified in the model. For instance, the prior mean m_k(·) is usually set to zero but could also integrate expert knowledge if available. Furthermore, we assume that:
- {µ_k}_k are independent,
- {f_i}_i are independent,
- {Z_i}_i are independent,
- {ε_i}_i are independent,
- ∀i ∈ I, ∀k ∈ K, µ_k, f_i, Z_i, ε_i and all pairwise combinations are independent.

We display a graphical model in Figure 2 to highlight the relationships between the different components.

Figure 2: Graphical model of dependencies between variables in the multi-task Gaussian processes mixture model.
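As an illustration of the generative assumptions above, the following R sketch draws, for one individual belonging to one cluster, the three components of y_i = µ_k + f_i + ε_i on an irregular grid; the kernels, hyper-parameter values, and grid are illustrative placeholders rather than settings prescribed by the model.

```r
library(MASS)  # for mvrnorm

# Sketch of the generative model above for one individual in one cluster:
# y_i(t) = mu_k(t) + f_i(t) + eps_i(t). All numerical values are illustrative.
eq_kernel <- function(x, y, v, l) v^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))

set.seed(1)
t_i <- sort(runif(20, 0, 10))                    # irregular grid of individual i

# Cluster-specific mean process mu_k ~ GP(m_k, c_gamma_k), with m_k set to 0
C_gamma <- eq_kernel(t_i, t_i, v = 2, l = 2) + 1e-8 * diag(20)
mu_k    <- mvrnorm(1, rep(0, 20), C_gamma)

# Individual-specific centred process f_i ~ GP(0, xi_theta_i) and noise eps_i
Xi_theta <- eq_kernel(t_i, t_i, v = 1, l = 1) + 1e-8 * diag(20)
f_i   <- mvrnorm(1, rep(0, 20), Xi_theta)
eps_i <- rnorm(20, 0, sqrt(0.05))                # noise variance sigma_i^2 = 0.05

y_i <- mu_k + f_i + eps_i                        # observed outputs y_i(t_i)
```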
From these hypotheses, we can naturally integrate out f_i and derive the conditional prior distribution of y_i(·), providing a hierarchical formulation for the model:

y_i(·) | {Z_ik = 1, µ_k(·)} ∼ GP(µ_k(·), ψ_{θi,σi²}(·,·)), ∀i ∈ I, ∀k ∈ K.

As a consequence, the output processes {y_i(·) | {Z_i}_i, {µ_k(·)}_k}_i are also independent from one another (conditionally on the latent variables). Although this model is expressed in terms of infinite-dimensional GPs, we proceed to the inference using finite-dimensional sets of observations {t_i, y_i}_i. Therefore, we can write the joint conditional likelihood of the model (conditioning on the inputs is omitted throughout the paper for clarity):

p({y_i}_i | {Z_i}_i, {µ_k(t)}_k, {θ_i}_i, {σ_i²}_i) = ∏_{i=1}^M p(y_i | Z_i, {µ_k(t_i)}_k, θ_i, σ_i²)
= ∏_{i=1}^M ∏_{k=1}^K p(y_i | Z_ik = 1, µ_k(t_i), θ_i, σ_i²)^{Z_ik}
= ∏_{i=1}^M ∏_{k=1}^K N(y_i; µ_k(t_i), Ψ^{t_i}_{θi,σi²})^{Z_ik},

where, ∀i ∈ I, Ψ^{t_i}_{θi,σi²} = ψ_{θi,σi²}(t_i, t_i) = [ψ_{θi,σi²}(u, v)]_{u,v ∈ t_i} is an N_i × N_i covariance matrix. The mean processes being common to all individuals in a cluster, we need to evaluate their prior distributions on the pooled grid of timestamps t:

p({µ_k(t)}_k | {γ_k}_k) = ∏_{k=1}^K p(µ_k(t) | γ_k) = ∏_{k=1}^K N(µ_k(t); m_k(t), C^t_{γk}),

where C^t_{γk} = c_{γk}(t, t) = [c_{γk}(u, v)]_{u,v ∈ t} is an N × N covariance matrix. Finally, the joint distribution of the clustering latent variables also factorises over the individuals:

p({Z_i}_i | π) = ∏_{i=1}^M p(Z_i | π) = ∏_{i=1}^M M(Z_i; 1, π) = ∏_{i=1}^M ∏_{k=1}^K π_k^{Z_ik}.

From all these expressions, the complete-data likelihood of the model can be derived:

p({y_i}_i, {Z_i}_i, {µ_k(t)}_k | Θ) = p({µ_k(t)}_k | {γ_k}_k) ∏_{i=1}^M p(y_i | Z_i, {µ_k(t_i)}_k, θ_i, σ_i²) p(Z_i | π)
= ∏_{k=1}^K N(µ_k(t); m_k(t), C^t_{γk}) ∏_{i=1}^M ∏_{k=1}^K [ π_k N(y_i; µ_k(t_i), Ψ^{t_i}_{θi,σi²}) ]^{Z_ik}.

This expression would usually serve to estimate the hyper-parameters Θ, although it depends here on latent variables that cannot be evaluated directly. Even if the prior distributions over {Z_i}_i and {µ_k(t)}_k are independent, the expressions of their respective posteriors would inevitably depend on each other. Nevertheless, it remains possible to derive variational approximations for these distributions that still factorise nicely over the terms Z_i, i ∈ I, and µ_k(t), k ∈ K. Consequently, the following inference procedure involves a variational EM algorithm that we shall detail after a quick discussion on the optional hypotheses for the model.
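To make the factorised likelihood above concrete, the following R sketch evaluates the conditional term N(y_i; µ_k(t_i), Ψ^{t_i}) for a single individual under two hypothetical clusters on an irregular grid; the observations, cluster means, mixing proportions, and kernel values are placeholders. The normalised weights at the end are only a naive plug-in: the variational updates derived in Section 3 involve an additional trace correction term.

```r
library(mvtnorm)  # for dmvnorm

# Evaluating the conditional term N(y_i; mu_k(t_i), Psi_i) for one individual
# under two candidate clusters. All numerical values are illustrative.
eq_kernel <- function(x, y, v, l) v^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))

t_i <- c(1.2, 2.5, 4.0, 6.3, 8.1)                # irregular grid of individual i
y_i <- c(10.4, 11.0, 12.7, 14.2, 15.1)           # its observations
mu_hat <- cbind(10 + 0.6 * t_i, 20 - 0.3 * t_i)  # hypothetical mu_k(t_i), K = 2
pi_hat <- c(0.5, 0.5)                            # mixing proportions

Psi_i <- eq_kernel(t_i, t_i, v = 1, l = 1) + 0.05 * diag(length(t_i))
loglik_k <- sapply(1:2, function(k) dmvnorm(y_i, mu_hat[, k], Psi_i, log = TRUE))

# Naive plug-in cluster weights, proportional to pi_k * N(y_i; mu_k(t_i), Psi_i);
# the variational update of Section 3 adds a trace correction term to these.
w <- log(pi_hat) + loglik_k
round(exp(w - max(w)) / sum(exp(w - max(w))), 3)
```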
2.4 Assumptions on the Covariance Structure

Throughout this paper, we detail a common ground procedure that remains consistent regardless of the covariance structure of the considered GPs. Notice that we chose a parametric distinction of the covariance kernels through the definition of hyper-parameters, different from one individual to another. However, there are no theoretical restrictions on the underlying form of the considered kernels, and we indicate a differentiation on the sole hyper-parameters merely for convenience in writing. A usual kernel in the GP literature is known as the exponentiated quadratic kernel (sometimes also called squared exponential or radial basis function kernel). This kernel only depends upon two hyper-parameters θ = {v, ℓ}:

k_EQ(x, x′) = v² exp( −(x − x′)² / (2ℓ²) ).   (1)

Although beyond the scope of the present paper, let us mention the existence of a rich literature on kernel design, properties and combinations: see Rasmussen and Williams (2006, Chapter 4) or Duvenaud (2014) for comprehensive studies.

In the algorithm Magma (Leroy et al., 2022), the multi-task aspect is mainly supported by the mean process, although the model also allows information sharing among individuals through the covariance structure. These two aspects being constructed independently, we could think of the model as potentially doubly multi-task, both in mean and covariance. More precisely, if we assume {θ_i, σ_i²} = {θ_0, σ_0²}, ∀i ∈ I, then all f_i are assumed to be different realisations of the same GP, and thus all individuals contribute to the estimation of the common hyper-parameters. Hence, such an assumption, which may appear restrictive at first glance, actually offers a valuable way to share common patterns between tasks. Furthermore, the same kind of hypothesis can be proposed at the cluster level with {γ_k}_k = γ_0, ∀k ∈ K. In this case, we would assume that all the clusters' mean processes {µ_k}_k share the same covariance structure. This property would indicate that the patterns, or the variations of the curves, are expected to be roughly identical from one cluster to another and that the differentiation would be mainly due to the mean values. Conversely, different covariance structures across kernels offer additional flexibility for the clusters to differ both in position and in trend, smoothness, or any property that could be coded in a kernel. Speaking rather loosely, we may think of these different settings as a trade-off between flexibility and information sharing, or as a choice between an individual or collective modelling of the covariance. Overall, our algorithm provides 4 different settings, offering a rather wide range of assumptions for an adequate adaptation to different applicative situations. Note that computational considerations are also of paramount importance when it comes to optimising a likelihood over a potentially high number of parameters. Hence, we display in Table 1 a summary of the 4 different settings, providing a shorthand notation along with the associated number of hyper-parameters (or sets of hyper-parameters in the case of θ_i and γ_k) that are required to be learnt in practice.

Table 1: Summary of the 4 available assumptions on the hyper-parameters, with their respective shorthand notation and the associated number of sets of hyper-parameters (HPs) to optimise.

                        θ_0 = θ_i, ∀i ∈ I         θ_i ≠ θ_j, ∀i ≠ j
                        Notation   Nb of HPs      Notation   Nb of HPs
γ_0 = γ_k, ∀k ∈ K       H00        2              H0i        M + 1
γ_k ≠ γ_l, ∀k ≠ l       Hk0        K + 1          Hki        M + K

3. Inference

Although a fully Bayesian point of view could be taken on the learning procedure, by defining prior distributions over the hyper-parameters and directly using an MCMC algorithm (Rasmussen and Williams, 2006; Yang et al., 2016) for approximate inference on the posteriors, this approach remains computationally challenging in practice. Conversely, variational methods have proved to be highly efficient for conducting inference in difficult GP problems (Titsias, 2009; Hensman et al., 2013a) and may apply in our context as well. By introducing an adequate independence assumption, we are able to derive a variational formulation leading to analytical approximations for the true hyper-posterior distributions of the latent variables. Then, these hyper-posterior updates allow the computation of a lower bound of the true log-likelihood, thereby specifying the E step of the VEM algorithm (Attias, 2000) that conducts the overall inference. Alternatively, we can maximise this lower bound with respect to the hyper-parameters in the M step for optimisation purposes, to provide estimates.
By iterating on these two steps until convergence (pseudo-code in Algorithm 1), the procedure is proved to reach local optima of the lower bound (Boyd and Vandenberghe, 2004), usually in a few iterations. Regarding the previous illustrative example, this learning step allocates swimmers presenting similar progression patterns into dedicated clusters while simultaneously estimating the underlying mean trend of those clusters (E step) and optimising all hyper-parameters controlling each specific covariance structure (M step). For Leroy, Latouche, Guedj and Gey the sake of clarity, the shorthand Z = {Zi}i and µ = {µk(t)}k is used in this section when referring to the corresponding set of latent variables. 3.1 Variational EM Algorithm We seek an appropriate and analytical approximation q(Z, µ) for the exact hyper-posterior distribution p(Z, µ | {yi}i , Θ). Let us first notice that for any distribution q(Z, µ), the following decomposition holds for the observed-data log-likelihood: log p({yi}i | Θ) = KL (q p) + L(q; Θ), (2) KL (q p) = Z Z q(Z, µ) log q(Z, µ) p(Z, µ | {yi}i , Θ) d Z dµ, L(q; Θ) = Z Z q(Z, µ) log q(Z, µ) p(Z, µ, {yi}i | Θ) d Z dµ. Therefore, we expressed the intractable log-likelihood of the model by introducing the Kullback-Leibler (KL) divergence between the approximation q(Z, µ) and the corresponding true distribution p(Z, µ | {yi}i , Θ). The right-hand term L(q; Θ) in (2) defines a so-called lower bound for log p({yi}i | Θ) since a KL divergence is nonnegative by definition. This lower bound depends both upon the approximate distribution q( ) and the hyper-parameters Θ, while remaining tractable under adequate assumptions. By maximising L(q; Θ) alternatively with respect to both quantities, optima for the hyper-parameters shall be reached. To achieve such a procedure, the following factorisation is assumed for the approximated distribution: q(Z, µ) = q Z(Z)qµ(µ). Colloquially, we could say that the independence property that lacks to compute explicit hyper-posterior distributions is imposed. Such a condition restricts the family of distributions from which we choose q( ), and we now seek approximations within this family that are as close as possible to the true hyper-posteriors. 3.1.1 E step In the expectation step (E step) of the VEM algorithm, the lower bound of the marginal likelihood L(q; Θ) is maximised with respect to the distribution q( ), considering that initial or previously estimated values for bΘ are available. Making use of the factorised form previously assumed, we can derive analytical expressions for the optimal distributions over q Z(Z) and qµ(µ). As the computing of each distribution involves taking an expectation with respect to the other one, this suggests an iterative procedure where whether the initialisation or a previous estimation serves in the current optimisation process. Therefore, two propositions are introduced below, respectively detailing the exact derivation of the optimal distributions bq Z(Z) and bqµ(µ) (all proofs are deferred to the corresponding Section 8). Proposition 1 Assume that the hyper-parameters bΘ and the variational distribution bqµ(µ) = KQ k=1 N µk(t); bmk(t), b Ct k are known. The optimal variational approximation bq Z(Z) of the Cluster-Specific Predictions with Multi-Task Gaussian Processes true hyper-posterior p Z | {yi}i , bΘ factorises as a product of multinomial distributions: i=1 M (Zi; 1, τi = (τi1, . . . 
, τi N) ) , (3) τik = bπk N yi; bmk(ti), Ψti bθi,bσ2 i 2tr Ψti bθi,bσ2 i l=1 bπl N yi; bml(ti), Ψti bθi,bσ2 i 2tr Ψti bθi,bσ2 i , i I, k K. (4) Proposition 2 Assume that the hyper-parameters bΘ and the variational distribution bq Z(Z) = M Q i=1 M (Zi; 1, τi) are known. The optimal variational approximation bqµ(µ) of the true hyper- posterior p µ | {yi}i , bΘ factorises as a product of multivariate Gaussian distributions: k=1 N µk(t); bmk(t), b Ct k , (5) b Ct k = Ct bγk 1 + M P i=1 τik Ψ 1 i bmk(t) = b Ct k Ct bγk 1mk(t) + M P i=1 τik Ψ 1 i yi where the following shorthand notation is used: yi = 1[t ti] yi(t) t t (N-dimensional vector), Ψi = h 1[t,t ti] ψbθi,bσ2 i (t, t ) i t,t t (N N matrix). Notice that the forced factorisation we assumed between Z and µ for approximation purpose additionally offers an induced independence between individuals as indicated by the factorisation in (3), and between clusters in (5). 3.1.2 M step At this point, we have fixed an estimation for q( ) in the lower bound that shall serve to handle the maximisation of L(bq, Θ) with respect to the hyper-parameters. This maximisation step (M step) depends on the initial assumptions on the generative model (Section 2.4), resulting in four different versions for the VEM algorithm (the E step is common to all of them, the branching point is here). Leroy, Latouche, Guedj and Gey Proposition 3 Assume the variational distributions bq Z(Z) = M Q i=1 M (Zi; 1, τi) and bqµ(µ) = k=1 N µk(t); bmk(t), b Ct k to be known. For a set Θ = {{γk}k , {θi}i , σ2 i i , π}, the optimal values are given by: bΘ = argmax Θ E{Z,µ} [ log p({yi}i , Z, µ | Θ) ] , where E{Z,µ} indicates an expectation taken with respect to bqµ(µ) and bq Z(Z). In particular, optimal values for π can be computed explicitly with: i=1 τik, k K. The remaining hyper-parameters are estimated by numerically solving the following maximisation problems, according to the situation. Let us note: Lk (x; m, S) = log N (x; m, S) 1 2tr b Ct k S 1 , Li (x; m, S) = log N (x; m, S) 1 2tr b Cti k S 1 . Then, for hypothesis Hki: bγk = argmax γk Lk bmk(t); mk(t), Ct γk , k K, (bθi, bσ2 i ) = argmax θi,σ2 i Li yi; bmk(ti), Ψti θi,σ2 i For hypothesis Hk0: bγk = argmax γk Lk bmk(t); mk(t), Ct γk , k K, (bθ0, bσ2 0) = argmax θ0,σ2 0 i=1 Li yi; bmk(ti), Ψti θ0,σ2 0 For hypothesis H0i: bγ0 = argmax γ0 k=1 Lk bmk(t); mk(t), Ct γ0 , (bθi, bσ2 i ) = argmax θi,σ2 i Li yi; bmk(ti), Ψti θi,σ2 i For hypothesis H00: bγ0 = argmax γ0 k=1 Lk bmk(t); mk(t), Ct γ0 , Cluster-Specific Predictions with Multi-Task Gaussian Processes (bθ0, bσ2 0) = argmax θ0,σ2 0 i=1 Li yi; bmk(ti), Ψti θ0,σ2 0 Let us stress that, for each sub-case, explicit gradients are available for the functions to maximise, facilitating the optimisation process with gradient-based methods (Hestenes and Stiefel, 1952; Bengio, 2000). The current version of our code implements those gradients and makes use of them within the L-BFGS-B algorithm (Nocedal, 1980; Morales and Nocedal, 2011) devoted to the numerical maximisation. As previously discussed, the hypothesis Hki necessitates to learn M + K sets of hyper-parameters. However, we notice in Proposition 3 that the factorised forms defined as the sum of a Gaussian log-likelihoods and trace terms offer a way to operate the maximisations in parallel on simple functions. Conversely, for the hypothesis H00, only 2 sets of hyper-parameters need to be optimised, namely γ0, and {θ0, σ2 0}. 
The small number of functions to maximise is explained by the fact that they are defined as larger sums over all individuals (respectively all clusters). Moreover, this context highlights a multi-task pattern in covariance structures, since each individual (respectively cluster) contributes to the learning of shared hyper-parameters. In practice, H00 is far easier to manage, and we generally reach robust optima in a few iterations. On the contrary, the settings with many hyper-parameters to learn, using mechanically less data for each, may lead more often to computational burden or pathological results. The remaining hypotheses, H0i and Hk0, are somehow middle ground situations between the two extremes and might be used as a compromise according to the problem being dealt with. 3.2 Initialisation Below some modelling choices are discussed, in particular the initialisation of some quantities involved in the VEM algorithm: {mk( )}k; the mean functions from the hyper-prior distributions of the associated mean processes {µk( )}k. As it may be difficult to pre-specify meaningful values in the absence of external or expert knowledge, these values are often assumed to be 0. However, it remains possible to integrate information in the model by this mean. However, as exhibited in Proposition 2, the influence of {mk( )}k in hyper-posterior computations decreases rapidly when M grows in a multi-task framework. {γk}k, {θi}i and σ2 i i; the kernel hyper-parameters. We already discussed that the form itself of kernels has to be chosen as well, but once set, we would advise initiating {γk}k and {θi}i with close and reasonable values whenever possible. As usual in GP models, nearly singular covariance matrices and numerical instability may occur for pathological initialisations, in particular for the hypotheses, like Hki, with many hyper-parameters to learn. This behaviour frequently occurs in the GP framework, and one way to handle this issue is to add a so-called jitter term (Bernardo et al., 1998) on the diagonal of the ill-defined covariance matrices. {τik}ik; the estimated individual membership probabilities (or π; the prior vector of clusters proportions). Both quantities are valid initialisation depending on whether we start the VEM iterations by an E step or an M step. If we only want to set the initial proportions of each cluster in the absence of additional information, we Leroy, Latouche, Guedj and Gey may merely specify π and start with an E step. Otherwise, if we insert the results from a previous clustering algorithm as an initialisation, the probabilities τik for each individual and cluster can be fully specified before proceeding to an M step (or to the bqµ(µ) s computing and then the M step). Let us finally stress that the convergence (to local maxima) of VEM algorithms partly depends on these initialisations. Different strategies have been proposed in the literature to manage this issue, among which simulated annealing (Ueda and Nakano, 1998) or repeated short runs (Biernacki et al., 2003). 3.3 Pseudocode The overall algorithm is called Magma Clust (as an extension of the algorithm Magma to cluster-specific mean GPs) and we provide below the pseudo-code summarising the inference procedure. The corresponding R code is available at https://github.com/Arthur Leroy/ MAGMAclust. Algorithm 1 Magma Clust: Variational EM algorithm Initialise {mk(t)}k, Θ = {γk}k , {θi}i , σ2 i i and {τini i }i (or π). while not converged do E step: Optimise L(q; Θ) w.r.t. q( ): bq Z(Z) = M Q i=1 M(Zi; 1, τi). 
bqµ(µ) = KQ k=1 N(µk(t); bmk(t), b Ct k). M step: Optimise L(q; Θ) w.r.t. Θ: bΘ = argmax Θ EZ,µ [ log p({yi}i , Z, µ | Θ) ] . end while return bΘ, {τi}i { bmk(t)}k, {b Ct k}k. 3.4 Model Selection The question of the adequate choice of K in clustering applications is a recurrent concern in practice. Many criteria have been introduced in the literature, among which those relying on penalisation of the likelihood like AIC (Akaike, 1974) or BIC (Schwarz, 1978) for instance. Whereas we seek a BIC-like formula, we can recall that the likelihood p({yi}i | bΘ) cannot be computed directly in the present context. However, as for inference, we may still use the previously introduced lower bound L(bq; bΘ) to adapt a so-called variational-BIC (VBIC) quantity to maximise, as proposed in You et al. (2014). The expression of this criterion is provided below while we defer the full derivation of the lower bound to Section 8.4. Proposition 4 After convergence of the VEM algorithm, a variational-BIC expression can be derived as: BICvar(K) = L(bq; bΘ) card{HP} Cluster-Specific Predictions with Multi-Task Gaussian Processes log N yi; bmk(ti), Ψti bθi,bσ2 i 2tr b Ct kΨti bθi,bσ2 i 1 + log c πk τik log N bmk(t); mk(t), Ct bγk 2tr b Ct k Ct bγk 1 2 log b Ct k + N log 2π + N αi + αk + (K 1) αi is the number of hyper-parameters from the individual processes kernels, αk is the number of hyper-parameters from the mean processes kernels, K 1 is the number of free parameters c πk (because of the constraint K P k=1 πk = 1 ). Let us mention that the numbers αi and αk in the penalty term vary according to the considered modelling hypothesis (H00, Hk0, H0i or Hki), see Section 2.4 for details. 4. Prediction At this point, we would consider that the inference on the model is completed, since the training data set of observed individuals {yi}i enabled to estimate the desired hyperparameters and the distributions of latent variables. For the sake of concision, we thus omit the writing of conditionings over bΘ in the sequel. Recalling our illustrative example, we would have used competition s results over a long period from thousands of swimmers for training the model, and we now expect to make predictions of future performances for any young athlete in the early stages of their career. Therefore, let us assume the partial observation of a new individual, denoted by the index , for whom we collected a few data points y (t ) at timestamps t . Defining a multi-task GPs mixture prediction consists in seeking an analytical distribution p(y ( ) | y (t ), {yi}i), according to the information brought by: its own observations; the training data set; the cluster structure among individuals. As we aim at studying the output values y ( ) at arbitrarily chosen timestamps, say tp (the index p stands for prediction), we propose a new notation for the pooled vector of timestamps . This vector serves as a working grid on which the different distributions involved in the prediction procedure are evaluated. In the absence of external restrictions, we would strongly advise to include the observed timestamps of all training individuals, t, within tp , since evaluating the processes at these locations allows for sharing information across tasks. Otherwise, any data points defined on timestamps outside of the working grid would be discarded from the multi-task aspect of the model. 
In particular, if t_p^∪ = t, we may even use directly the variational distribution q_µ(µ) computed in the VEM algorithm, and thus skip one step of the prediction procedure that is described below. Throughout the section, we aim at defining a probabilistic prediction for this new individual, accounting for the information of all training data {y_i}_i. To this end, we manipulate several distributions of the type p(· | {y_i}_i) and refer to them with the adjective multi-task. Additionally to highlighting the information-sharing aspect across individuals, this term allows us to distinguish the role of {y_i}_i from the one of the newly observed data y_*(t_*), which are now the reference data for establishing whether a distribution is called a prior or a posterior. Deriving a predictive distribution in our multi-task GP framework requires completing the following steps:

1. Compute the hyper-posterior approximation of {µ_k(·)}_k at t_p^∪: q̂_µ({µ_k(t_p^∪)}_k),
2. Deduce the multi-task prior distribution: p(y_*(t_p^∪) | Z_*, {y_i}_i),
3. Compute the new hyper-parameters {θ_*, σ_*²} and p(Z_* | y_*(t_*), {y_i}_i) via an EM algorithm,
3bis. Assign θ_* = θ_0, σ_*² = σ_0² and compute directly p(Z_* | y_*(t_*), {y_i}_i),
4. Compute the multi-task posterior distribution: p(y_*(t_p) | y_*(t_*), Z_*, {y_i}_i),
5. Deduce the multi-task GPs mixture prediction: p(y_*(t_p) | y_*(t_*), {y_i}_i).

We already discussed the influence of the initial modelling hypotheses on the overall procedure. Hence, let us display in Table 2 a quick reminder helping to keep track of which steps need to be performed in each context.

Table 2: Summary of the different steps to perform in the prediction procedure, according to the model assumptions and the target grid of timestamps.

        t_p^∪ = t          t_p^∪ ≠ t
H00     2-3bis-4-5         1-2-3bis-4-5
Hk0     2-3bis-4-5         1-2-3bis-4-5
H0i     2-3-4-5            1-2-3-4-5
Hki     2-3-4-5            1-2-3-4-5

4.1 Posterior Inference on the Mean Processes

In order to integrate the information contained in the shared mean processes, we first need to re-compute the variational approximation of the hyper-posterior of {µ_k(·)}_k on the new working grid t_p^∪. By using once more Proposition 2, it appears straightforward to derive this quantity, which still factorises as a product of Gaussian distributions where we merely substitute the values of timestamps:

q̂_µ({µ_k(t_p^∪)}_k) = ∏_{k=1}^K N(µ_k(t_p^∪); m̂_k(t_p^∪), Ĉ_k^{t_p^∪}),

with:

Ĉ_k^{t_p^∪} = ( (C_{γ̂k}^{t_p^∪})⁻¹ + ∑_{i=1}^M τ_ik Ψ_i⁻¹ )⁻¹,
m̂_k(t_p^∪) = Ĉ_k^{t_p^∪} ( (C_{γ̂k}^{t_p^∪})⁻¹ m_k(t_p^∪) + ∑_{i=1}^M τ_ik Ψ_i⁻¹ y_i ),

where the following shorthand notation is used:
- y_i = [ 1_{[t ∈ t_i]} y_i(t) ]_{t ∈ t_p^∪}, a vector of the dimension of t_p^∪,
- Ψ_i = [ 1_{[t,t′ ∈ t_i]} ψ_{θ̂i,σ̂i²}(t, t′) ]_{t,t′ ∈ t_p^∪}, a square matrix of the same dimension.

We acknowledge that the subsequent analytical developments partly rely on this variational approximate distribution q̂_µ({µ_k(t_p^∪)}_k), and may thus be considered, in a sense, as approximated as well. However, this quantity provides a valuable closed-form expression that we can substitute for the true hyper-posterior in Proposition 5 below.

4.2 Computation of the Multi-Task Prior Distributions

For the sake of completeness, let us recall the equivalence between two ways of writing conditional distributions that are used in the subsequent results:

p(· | Z_*) = ∏_{k=1}^K p(· | Z_*k = 1)^{Z_*k}.

We may regularly substitute one for the other in the sequel, depending on which is handier in the context. Once the mean processes' distributions are re-computed on the working grid, their underlying influence shall be directly plugged into a marginalised multi-task prior over y_*(t_p^∪) by integrating out the {µ_k(t_p^∪)}_k.
As the mean processes vanish, the new individual s outputs y (tp ) directly depends upon the training data set {yi}i, as highlighted in the proposition below. Proposition 5 For a set of timestamps tp , the multi-task prior distribution of y knowing its clustering latent variable is given by: p(y (tp ) | Z , {yi}i) = k=1 N y (tp ); bmk(tp ), b Ctp k + Ψtp θ ,σ2 Proof Let us recall that, conditionally to their mean process, the individuals are independent of one another. Then, for all k K, we have: p(y (tp ) | Z k = 1, {yi}i) = Z p(y (tp ), µk(tp ) | Z k = 1, {yi}i) dµk(tp ) = Z p(y (tp ) | µk(tp ), Z k = 1) p(µk(tp ) | Z k = 1, {yi}i) | {z } qµ(µk(tp )) Z N y (tp ); µk(tp ), Ψtp θ ,σ2 N µk(tp ); bmk(tp ), b Ctp k dµk(tp ) = N y (tp ); bmk(tp ), b Ctp k + Ψtp θ ,σ2 Leroy, Latouche, Guedj and Gey The final line is obtained by remarking that such a convolution of Gaussian distributions remains Gaussian as well (Bishop, 2006, Chapter 2), and we refer to Leroy et al. (2022) for the detailed calculus in this exact context. Therefore, we finally get: p(y (tp ) | Z , {yi}i) = k=1 p(y (tp ) | Z k = 1, {yi}i)Z k k=1 N y (tp ); bmk(tp ), b Ctp k + Ψtp θ ,σ2 4.3 Optimisation of the New Hyper-Parameters and Computation of the Clusters Probabilities Now that the mean processes have been removed at the previous step, this section strongly resembles the classical learning procedure through an EM algorithm for a Gaussian mixture model. In our case, it allows us both to estimate the hyper-parameters of the new individual {θ , σ } and to compute the hyper-posterior distribution of its latent clustering variable Z , which provides the associated clusters membership probabilities τ . As before, E steps and M steps are alternatively processed until convergence, but this time by working with exact formulations instead of variational ones. 4.3.1 E step In the E step, hyper-parameters estimates are assumed to be known. Recalling that the latent clustering variable Z is independent from the training data {yi}i, the multi-task hyper-posterior distribution maintains an explicit derivation: p(Z | y (t ), {yi}i , bθ , bσ2 , bπ) p(y (t ) | Z , {yi}i , bθ , bσ2 )p(Z | bπ) N y (t ); bmk(t ), b Ct k + Ψt bθ ,bσ2 l=1 bπZ l l bπk N y (t ); bmk(t ), b Ct k + Ψt bθ ,bσ2 By inspection, we recognise the form of a multinomial distribution and thus retrieve the corresponding normalisation constant to deduce: p(Z | y (t ), {yi}i , bθ , bσ2 , bπ) = M (Z ; 1, τ = (τ 1, . . . , τ K) ) , (7) τ k = bπk N y (t ); bmk(t ), b Ct k + Ψt bθ ,bσ2 l=1 bπl N y (t ); bml(t ), b Ct l + Ψt bθ ,bσ2 Cluster-Specific Predictions with Multi-Task Gaussian Processes 4.3.2 M step Assuming to know the value of τ , we may derive optimal values for the hyper-parameters of the new individual through the following maximisation: {bθ , bσ2 } = argmax θ ,σ EZ [ log p(y (t ), Z | {yi}i , θ , σ , bπ) ] . Let us note L (θ , σ ) = log p(y (t ), Z | {yi}i , θ , σ , bπ). By remarking that bπ has already been estimated previously, we may easily derive the expression to maximise with respect to θ and σ in practice: EZ [ L (θ , σ ) ] = EZ [ log p(y (t ), Z | {yi}i , θ , σ , bπ ]) = EZ [ log p(y (t ) | Z , {yi}i , θ , σ ) + log p(Z | bπ) ] k=1 N y (t ); bmk(t ), b Ct k + Ψt θ ,σ2 k=1 EZ [ Z k ] log N y (t ); bmk(t ), b Ct k + Ψt θ ,σ2 k=1 τ k log N y (t ); bmk(t ), b Ct k + Ψt θ ,σ2 where C1 is a constant term. Thus, the optimisation in this case merely relies on the maximisation of a weighted sum of Gaussian log-likelihoods, for which gradients are well-known. 3bis. 
In the case where the hyper-parameters are supposed to be common across individuals (H00 or Hk0), there is no need to additional optimisation since we already have bθ = bθ0 and bσ2 = bσ0 by definition. However, the probabilities of lying in each cluster τ for the new individual still need to be computed, which shall be handled by directly using the expression (8) from the E step. 3ter. Conversely, even if hyper-parameters for each individual are supposed to be different (H0i or Hki), it remains possible to avoid the implementation of an EM algorithm by stating τ = bπ. Such an assumption intuitively expresses that we would guess the membership probabilities of each cluster from the previously estimated mixing proportions, without taking new individual s observations into account. Although we would not recommend this choice for getting optimal results, it still seems to be worth a mention for applications with a compelling need to avoid EM s extra computations during the prediction process. 4.4 Computation of the Multi-Task Posterior Distributions Once the needed hyper-parameters have been estimated and the prior distribution established, the classical formula for GP predictions can be applied to the new individual, for each possible latent cluster. First, let us recall the prior distribution by separating observed from target timestamps, and introducing a shorthand notation for the covariance: p(y (tp ) | Z k = 1, {yi}i) = N y (tp) y (t ) ; bmk(tp) bmk(t ) , Γtptp k Γtpt k Γt tp k Γt t k Leroy, Latouche, Guedj and Gey where Γtp,tp k = b Ctp k + Ψtp θ ,σ2 and likewise for the other blocks of the matrices. Therefore, recalling that conditioning on the sub-vector of observed values y (t ) maintains a Gaussian distribution (Bishop, 2006; Rasmussen and Williams, 2006), we can derive the multi-task posterior distribution for each latent cluster: p(y (tp) | Z k = 1, y (t ), {yi}i) = N y (tp); bµ k(tp), bΓ tp k , k K, (9) bµ k(tp) = bmk(tp) + Γtpt k Γt t k 1 (y (t ) bmk(t )) , k K, k = Γtptp k Γtpt k Γt t k 1Γt tp k , k K. 4.5 Computation of the Multi-Task GPs Mixture Prediction To conclude, by summing over all possible combinations for the latent clustering variable Z , we can derive the final predictive distribution. Proposition 6 The multi-task GPs mixture posterior distribution for y (tp) has the following form: p(y (tp) | y (t ), {yi}i) = k=1 τ k N y (tp); bµ k(tp), bΓ tp Proof Taking advantage of (9) and the multi-task hyper-posterior distribution of Z as computed in (7), it is straightforward to integrate out the latent clustering variable: p(y (tp) | y (t ), {yi}i) = X Z p(y (tp), Z | y (t ), {yi}i) Z p(y (tp) | Z , y (t ), {yi}i)p(Z | y (t ), {yi}i) τ k p(y (tp) | Z k = 1, y (t ), {yi}i) Z k τ k N y (tp); bµ k(tp), bΓ tp k=1 τ k N y (tp); bµ k(tp), bΓ tp where we recall for the transition to the last line that Z k = 1 if the -th individual belongs to the k-th cluster and Z k = 0 otherwise. Hence, summing a product with only one non-zero exponent over all possible combination for Z is equivalent to merely sum over the values of k, and the variable Z k simply vanishes. Cluster-Specific Predictions with Multi-Task Gaussian Processes 4.6 Alternative Predictions Even though Proposition 6 provides an elegant probabilistic prediction in terms of GPs mixture, it remains important to notice that this quantity is no longer a Gaussian distribution. 
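As a rough illustration of how the quantities above combine, the R sketch below applies the conditional update of Equation (9) for each of two clusters and then forms the τ_*-weighted mixture mean of Proposition 6; the hyper-posterior means, the covariance Γ (shared by both clusters here, as under H00 or Hk0), and the membership probabilities are illustrative placeholders rather than MagmaClust outputs.

```r
# Sketch of the cluster-specific posterior of Equation (9) and of the mixture
# prediction of Proposition 6 for one new individual (all values illustrative).
eq_kernel <- function(x, y, v, l) v^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))

t_obs  <- c(1, 2, 3, 4)                      # observed timestamps t_*
t_pred <- c(5, 6, 7)                         # target timestamps t_p
y_obs  <- c(10.2, 11.1, 12.3, 12.9)          # observations y_*(t_*)
tau    <- c(0.8, 0.2)                        # membership probabilities tau_*

# Hypothetical cluster-specific hyper-posterior means on (t_pred, t_obs)
m_hat <- list(c(14, 15, 16, 10, 11, 12, 13),
              c(12, 12.5, 13, 10.5, 11, 11.5, 12))

t_all <- c(t_pred, t_obs)
Gamma <- eq_kernel(t_all, t_all, v = 2, l = 2) + 0.05 * diag(length(t_all))
ip <- seq_along(t_pred)
io <- length(t_pred) + seq_along(t_obs)

pred_k <- lapply(1:2, function(k) {
  S <- Gamma[ip, io] %*% solve(Gamma[io, io])     # Gamma_{tp,t*} Gamma_{t*,t*}^-1
  list(mean = m_hat[[k]][ip] + S %*% (y_obs - m_hat[[k]][io]),
       cov  = Gamma[ip, ip] - S %*% Gamma[io, ip])
})

# Mixture predictive mean: tau-weighted combination of cluster-specific means
mix_mean <- Reduce(`+`, Map(function(w, p) w * p$mean, tau, pred_k))
```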
In particular, the distribution of an output value at any point-wise evaluation is expected to differ significantly from a classical Gaussian variable, by being multi-modal for instance. This property is especially true for individuals with high uncertainty about the clusters they probably belong to, whereas the distribution would be close to the a when τ k 1 for one cluster and almost zero for the others. While we believe that such a GPs mixture distribution highlights the uncertainty resulting from a possible cluster structure in data and offers a rather original view on the matter of GP predictions, some applications may suffer from this non-Gaussian final distribution. Fortunately, it remains pretty straightforward to proceed to a simplification of the clustering inference by assuming that the -individual only belongs to its more probable cluster, which is equivalent to postulate max{τ k}k = 1 and the others to be zero. In this case, the final Gaussian mixture turns back into a Gaussian distribution, and we retrieve a uni-modal prediction, easily displayed by its mean along with credible intervals. 5. Complexity Analysis for Training and Prediction It is customary to stress that computational complexity is of paramount importance in GP models as a consequence of their usual cubic (resp. quadratic) cost in the number of data points for learning (resp. prediction). In the case of Magma Clust, we use information from M individuals scattered into K clusters, each of them providing Ni observations, and those quantities mainly specify the overall complexity of the algorithm. Moreover, N refers to the number of distinct timestamps (i.e. N M P i=1 Ni) in the training data set and corresponds to the dimension of the objects involved in the kernel-specific mean processes computations. Typically, the learning complexity would be proportional to one iteration of the VEM algorithm, which requires O M N3 i + K N3 operations. Let us stress that this complexity is linear in the number of tasks, which is significantly lower than classical multi-output GP approaches in the literature. As detailed in Alvarez et al. (2012), algorithms that can be formulated as linear models of coregionalisations typically present a O(M3N3) training complexity when not resorting to sparse approximations. Several approximations have been developed to reduce this cost, for instance, by using pseudo-inputs to decrease the N3 term, while lowering the M3 term generally requires underlying independence assumptions ( Alvarez and Lawrence, 2011). In the Experiments section, the computational advantage of Magma Clust is also empirically highlighted in Table 5. From a practical point of view, the hypotheses formulated on the hyper-parameters may influence the constant of this complexity but generally not in more than an order of magnitude. For instance, the models under the assumption H00 usually require less optimisation time in practice, although it does not change the number or the dimensions of the covariance matrices to inverse, which mainly control the overall computing time. The dominating terms in this expression depend on the context, regarding the relative values of M, Ni, N and K. In contexts where the number of individuals M dominates, like with Leroy, Latouche, Guedj and Gey small common grids of timestamps for instance, the left-hand term would control the complexity, and clustering s additional cost would be negligible. 
Conversely, for a relatively low number of individuals or a large size N for the pooled grid of timestamps, the right-hand term becomes the primary burden, and the computing time increases proportionally to the number of clusters compared to the original Magma algorithm. During the prediction step, the re-computation of {µk( )}k s variational distributions implies K inversions of covariance matrices with dimensions depending on the size of the prediction grid tp . In practice though, if we fix a fine grid of target timestamps in advance, this operation can be assimilated to the learning step. In this case, the prediction complexity remains at most in the same order as the usual learning for a single-task GP, that is O(K N3 ) (this corresponds to the estimation of the new individual s hyper-parameters, and would decrease to O(K N2 ) for Hk0 or H00). We shall mention that the definition of a fine grid is generally desirable only for low-dimensional applications since we may quickly reach running time or memory limits as the input s dimension grows. In many contexts, most of the time-consuming learning steps can be performed in advance, and the immediate prediction cost for each new individual is negligible in comparison (generally comparable to a single-task GP prediction). 6. Experiments The present section is dedicated to the evaluation of Magma Clust on both synthetic and real data sets. The performance of the algorithm is assessed in regards to its clustering and forecast abilities. To this purpose, we introduce below the simulation scheme generating the synthetic data along with the measures used to compare our method to alternatives quantitatively. Throughout, the exponentiated quadratic (EQ) kernel, as defined in Equation (1), serves as covariance structure for both generating data and modelling. The manipulation of more sophisticated kernels remains a topic beyond the scope of the present paper, and the EQ proposes a fair common ground for comparison between methods. Thereby, each kernel introduced in the sequel is associated with two hyper-parameters. Namely, v R+ represents a variance term whereas ℓ R+ specifies the lengthscale. The synthetic data sets are generated following the general procedure below, with minor modifications according to the model assumptions H00, Hk0, H0i or Hki: 1. Define a random working grid t [ 0, 10 ] of N = 200 timestamps to study M = 50 individuals, scattered into K clusters, 2. Draw the prior mean functions for {µk( )}k: mk(t) = at + b, t t, k K, where a [ 2, 2 ] and b [ 20, 30 ], 3. Draw uniformly hyper-parameters for {µk( )}k s kernels: γk = {vγk, ℓγk}, k K, where vγk 1, e3 and ℓγk 1, e1 , (or γ0 = {vγ0, ℓγ0}), 4. Draw µk(t) N mk(t), Ct γk , k K, 5. For all i I, draw uniformly the hyper-parameters for individual kernels θi = {vθi, ℓθi}, where vθi 1, e3 , ℓθi 1, e1 , and σ2 i [ 0, 0.1 ], (or θ0 = {vθ0, ℓθ0} and σ2 0), Cluster-Specific Predictions with Multi-Task Gaussian Processes 6. Define π = ( 1 K , . . . , 1 K ) and draw Zi M(1, π), i I, 7. For all i I and Zik = 1, draw uniformly a random subset ti t of Ni = 30 timestamps, and draw yi N µk(ti), Ψti θi,σ2 i This procedure offers data sets for both the individuals {ti, yi}i and the underlying mean processes {t, µk(t)}k. In the context of prediction, a new individual is generated according to the same scheme, although its first 20 data points are assumed to be observed while the remaining 10 serve as testing values. 
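A condensed R sketch of the scheme above is given below, following steps 1-7 with per-individual hyper-parameters as listed in step 5 (under the default H00 assumption, a single shared θ_0, σ_0² and γ_0 would be drawn once instead); it is only an illustration of the generation process, not the authors' exact implementation.

```r
library(MASS)  # for mvrnorm

eq_kernel <- function(x, y, v, l) v^2 * exp(-outer(x, y, "-")^2 / (2 * l^2))

set.seed(42)
N <- 200; M <- 50; K <- 3
t_grid <- sort(runif(N, 0, 10))                        # step 1: working grid

# Steps 2-3: cluster-specific prior means and mean-process hyper-parameters
a <- runif(K, -2, 2); b <- runif(K, 20, 30)
m_k <- sapply(1:K, function(k) a[k] * t_grid + b[k])
v_gamma <- runif(K, 1, exp(3)); l_gamma <- runif(K, 1, exp(1))

# Step 4: draw the mean processes mu_k on the working grid
mu_k <- sapply(1:K, function(k) {
  C_k <- eq_kernel(t_grid, t_grid, v_gamma[k], l_gamma[k]) + 1e-6 * diag(N)
  mvrnorm(1, m_k[, k], C_k)
})

# Steps 5-7: individual hyper-parameters, cluster labels, irregular grids, outputs
pi_k <- rep(1 / K, K)
train <- lapply(1:M, function(i) {
  v_i <- runif(1, 1, exp(3)); l_i <- runif(1, 1, exp(1)); s2_i <- runif(1, 0, 0.1)
  z_i <- sample(1:K, 1, prob = pi_k)                   # Z_i ~ M(1, pi)
  idx <- sort(sample(N, 30))                           # N_i = 30 random timestamps
  Psi <- eq_kernel(t_grid[idx], t_grid[idx], v_i, l_i) + s2_i * diag(30)
  list(t = t_grid[idx], y = mvrnorm(1, mu_k[idx, z_i], Psi), cluster = z_i)
})
```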
While it may be argued that this repartition 20-10 is somehow arbitrary, a more detailed analysis with changing numbers of observed points in Leroy et al. (2022) revealed a low effect on the global evaluation. Unless otherwise stated, we fix the number of clusters to be K = 3 and the model assumption to be H00 for generating the data. Let us recall that we provided a variational-BIC formula in Proposition 4 to select an appropriate number of clusters K from data. Therefore, this measure is evaluated in following experiments and used for model selection purposes in the real-life application. Besides, the adjusted rand index (ARI) (Hubert and Arabie, 1985) is used as a measure of adequacy for comparison between the groups obtained through the clustering procedure and the true clusters that generated the data. More specifically, the ARI is defined by counting the proportions of matching pairs between groups, and a value of 1 represents a perfect correspondence. One can note that the ARI still applies when it comes to evaluating clustering partitions with different numbers of clusters. On the matter of prediction, the mean square error (MSE) between predicted means and the true values offers a measure of the average forecast performance. Formally, we define the MSE in prediction on the 10 testing points for the new individual as: ypred (tu ) ytrue (tu ) 2 . Moreover, an additional measure accounting for the validity of uncertainty quantification is defined in Leroy et al. (2022) as the percentage of true data effectively lying within the 95% credible interval (CI95), which is constructed from the predictive distribution. We extend here this measure to the context of GPs mixture, where CI95 is no longer available directly (as for any multi-modal distribution). Namely, the weighted CI95 coverage (WCIC95) is defined to be: k=1 τ k 1{ytrue (tu ) CIk 95}, where CIk 95 represents the CI95 computed for the k-th cluster-specific Gaussian predictive distribution (9). In the case where K = 1, i.e. a simple Gaussian instead of a GPs mixture, the WCIC95 reduces to the previously evoked CI95 coverage. By averaging the weighted cluster-specific CIk 95 coverage, we still obtain an adequate and comparable quantification of the uncertainty relevance for our predictions. By definition, the value of this indicator should be as close as possible to 95%. Finally, the mean functions {mk( )}k are set to be 0 in Magma Clust, as usual for GPs, whereas the membership probabilities τik are initialised thanks to a preliminary k-means algorithm. Leroy, Latouche, Guedj and Gey 6.1 Illustration on Synthetic Examples Figure 3 provides a comparison on the same data set between a classical GP regression (top), the multi-task GP algorithm Magma (middle), and the multi-task GPs mixture approach Magma Clust (bottom). On each sub-graph, the plain purple line represents the mean parameter from the predictive distribution, and the pink shaded area covers the CI95. The dashed lines stand for the multi-task prior mean functions { bmk( )}k resulting from the estimation of the mean processes. The points in black are the observations for the new individual , whereas the red points constitute the true target values to forecast. Moreover, the colourful background points depict the data of the training individuals, which we colour according to their true cluster in Magma Clust displays (bottom). For the sake of fairness, the prior mean for standard GP has been set as the average of training data points. 
As expected, the single GP regression provides an adequate fit close to the data points before quickly converging to the prior value when lacking information. Conversely, Magma takes advantage of its multi-task component to share knowledge across individuals by estimating a more relevant mean process. However, this unique mean process appears unable to account for the clear group structure, although adequately recovering the dispersion of the data. In the case of Magma Clust, we display the cluster-specific prediction (9) for the most probable group instead of the GPs mixture prediction, since max k (τ ) = 1 in this example. The model selection method based on maximum VBIC values correctly retrieved the true number of cluster K = 3. To illustrate the training procedure dynamics in this example, we provide on Figure 4 a tracking, for each iteration of the VEM algorithm until convergence, of the evidence lower bound (ELBO) and the corresponding ARI between predicted and true clusters. As the ELBO increases, we can notice that the ARI also improves to finally reach 1 at convergence, which means we fully retrieved the true clusters at the end of training. Although this simple example only depicts a single-run instance, it provides an accurate intuition on the general behaviour of Magma Clust. In practice, the algorithm quickly improves the clustering structure and associated mean processes during the first two steps, and generally converges in a handful of iterations. Overall, this illustrative example highlights the benefit we can get from considering group-structured similarities between individuals in GP predictions. Notice that our method offers both a significant improvement in mean prediction and a narrowed uncertainty around this value. Additionally, we display on Figure 5 the specific predictions according to the two remaining clusters (although associated with 0 probabilities). We remark that the predictions move towards the cluster-specific mean processes as soon as the observations become too distant. In this idealised example, we displayed Gaussian predictive distributions for convenience though, in general, a Gaussian mixture might rarely be unimodal. Therefore, we propose in Figure 6 another example with a higher variance and overlapping groups, where the VBIC still provides the correct number of clusters. While the ARI between predicted and true clusters was equal to 1 (perfect match) in the previous example, it now decreases to 0.78. Moreover, the vector of membership probabilities associated with the Figure 6 for the predicted individual happens to be: τ = (0.95, 0.05, 0). The left-hand graph provides an illustration of the predictive mean, acquired from the multi-task GPs mixture distribution described in Proposition 6. We may notice that this curve lies very Cluster-Specific Predictions with Multi-Task Gaussian Processes Figure 3: Prediction curves (purple) with associated 95% credible intervals (pink) from GP regression (top), Magma (middle) and Magma Clust (bottom). The dashed lines represent the mean parameters from the mean processes estimates. Observed data points are in black, testing data points are in red. Backward points are the observations from the training data set, coloured relatively to individuals (middle) or clusters (bottom). Leroy, Latouche, Guedj and Gey Figure 4: Left: Values of the evidence lower bound (ELBO) for successive iterations of the VEM algorithm during Magma Clust training. 
Right: The corresponding ARI values between predicted and true clusters for the same iterations. In this illustrative example, the algorithm reached convergence after 7 iterations only. Figure 5: Cluster-specific prediction curves (purple) with associated 95% credible intervals (pink) from Magma Clust, for two unlikely clusters. The dashed lines represent the mean parameters from the mean processes estimates. Observed data points are in black, testing data points are in red. Backward points are the observations from the training data set, coloured by clusters. Cluster-Specific Predictions with Multi-Task Gaussian Processes Figure 6: Left: GPs mixture mean prediction curve (blue) from Magma Clust. Right: heatmap of probabilities for the GPs mixture predictive distribution from Magma Clust. The dashed lines represent the mean parameters from the mean processes estimates. Observed data points are in black, testing data points are in red. Backward points are the observations from the training data set, coloured by clusters. close to one cluster s mean although not completely overlapping it, because of the τ k = 0.05 probability for another cluster, which slightly pulls the prediction onto its own mean. Besides, the right-hand graph of Figure 6 proposes a representation of the multi-task GPs mixture distribution as a heatmap of probabilities for the location of our predictions. This way, we can display, even in this multi-modal context, a thorough visual quantification for both the dispersion of the predicted values and the confidence we may grant to each of them. Finally, let us propose on Figure 7 an illustration of the capacity of Magma Clust to retrieve the shape of the underlying mean processes, by plotting their estimations { bmk( )}k (dotted lines) along with the true curves (plain coloured lines) generated by the simulation scheme. The ability to perform this task generally depends on the structure of the data as well as on the initialisation, although we may observe satisfactory results both on the previous fuzzy example (left) and on a well-separated case (right). It should be noted that the mean processes estimations also come with uncertainty quantification, albeit not displayed on Figure 7 for the sake of clarity. 6.2 Clustering Performance Generally, many curve clustering methods struggle to handle irregularly observed data directly. Therefore, for the sake of fairness and to avoid introducing too many smoothing biases in alternative methods, the data sets used in the following are sampled on regular grids, although Magma Clust can deal with irregular measurements by construction (see also Leroy et al. (2022) for an empirical study). The competing algorithms are the B- Leroy, Latouche, Guedj and Gey Figure 7: Left: fuzzy case. Right: well-separated case. Curves of the simulated underlying mean processes, coloured by clusters. The dashed lines represent the mean parameters from the mean processes estimates. Backward points are the observations from the training data set, coloured by clusters. splines expansion associated with a k-means algorithm proposed in Abraham et al. (2003), fun HDDC (Bouveyron and Jacques, 2011), and fun FEM (Bouveyron et al., 2015). The two latter methods were introduced to handle curve clustering problems, by taking advantage of a functional latent mixture modelling, and demonstrated their ability in several applications (Schmutz et al., 2020). A naive multivariate k-means is used as initialisation for fun HDDC, fun FEM, and Magma Clust. 
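For illustration, the sketch below shows what such a naive multivariate k-means initialisation may look like when the curves are observed on a regular common grid: a hard partition is computed first and then converted into initial membership probabilities. The variable names and the data matrix are placeholders, not the internals of any of the packages.

```r
# Sketch of the naive multivariate k-means initialisation: curves observed on a
# common grid are stacked as rows of a matrix, k-means provides a hard
# partition, and the membership probabilities tau are initialised from it
# (here as slightly smoothed one-hot vectors).
set.seed(1)
M <- 50; N <- 30; K <- 3
curves <- matrix(rnorm(M * N), nrow = M)           # placeholder data matrix

init_partition <- kmeans(curves, centers = K, nstart = 20)$cluster

eps <- 0.01                                        # small mass left on the other clusters
tau_init <- matrix(eps / (K - 1), nrow = M, ncol = K)
tau_init[cbind(1:M, init_partition)] <- 1 - eps
rowSums(tau_init)                                  # each row sums to 1
```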
We propose on Figure 8 an evaluation of each algorithm in terms of ARI over 100 data sets, simulated from various schemes. First, the procedure detailed in Section 6 is applied for each of the 4 different hypotheses (Hki, Hk0, H0i, H00) to generate data in accordance with our modelling assumptions. Additionally, we propose an alternative simulation scheme, inspired by Bouveyron et al. (2015), to compare performances on data sets that are not tailored for our method. We name this procedure Scheme A, and each of the 100 data sets is made of 50 curves, generated randomly, allocated into 4 clusters, and observed at 30 common time points such that:

Cluster 1: y(t) = U + 0.5 (1 − U) (2.5 − |t − 2.5|)₊ + ϵ, t ∈ [0, 10],
Cluster 2: y(t) = U + 0.5 (1 − U) (2.5 − |t − 7.5|)₊ + ϵ, t ∈ [0, 10],
Cluster 3: y(t) = U + (1 − U) (2.5 − |t − 2.5|)₊ + ϵ, t ∈ [0, 10],
Cluster 4: y(t) = U + (1 − U) (2.5 − |t − 7.5|)₊ + ϵ, t ∈ [0, 10],

where U ∼ U([0, 1]) and ϵ ∼ N(0, 0.05).

Figure 8: Adjusted Rand index (ARI) values between the true clusters and the partitions estimated by k-means, funHDDC, funFEM, and Magma Clust. The value of K is set to the true number of clusters for all methods. The ARI is computed on 100 data sets for each assumption Hki, Hk0, H0i, H00, and Scheme A.

Figure 9: Adjusted Rand index values between the true clusters and the partitions estimated by Magma Clust with respect to the values of K used as setting. The ARI is computed on the same 100 data sets for each value of K (3 being the true number of clusters for all data sets).

True K*   Selected K (M = 50)                 Selected K (M = 100)
          1     2     3     4     5     6     1     2     3     4     5     6
1         100   0     0     0     0     0     100   0     0     0     0     0
2         10    90    0     0     0     0     2     96    2     0     0     0
3         0     22    74    2     2     0     2     14    84    0     0     0
4         4     10    16    58    10    2     2     10    8     80    0     0
5         0     10    16    20    52    2     0     0     14    18    68    0

Table 3: Percentage of data sets for which the true number of clusters is K* and the number of clusters selected by the VBIC is K. A total of 50 data sets were simulated for each true value K* = 1, . . . , 5, and Magma Clust was tested on each with the settings K = 1, . . . , 6, retaining the configuration that reaches the highest VBIC value. The data sets are composed of M = 50 (left) or M = 100 (right) individuals with N = 30 common timestamps, under the hypothesis H00.

In all situations, Magma Clust seems to outperform the alternatives. In particular, our approach provides ARI values consistently close to 1 and a lower variance in all contexts. Furthermore, while the performances of the other methods are expected to deteriorate because of additional smoothing procedures in the case of irregular grids, Magma Clust would run the same without any change. On another aspect, Figure 9 provides some insights into the robustness of Magma Clust to an incorrect setting of K, the number of clusters. For 100 data sets with a true value K = 3, the ARI has been computed between the true partitions and those estimated by Magma Clust initialised with different settings K = 2, . . . , 10. Except for K = 2, where the low number of clusters prevents getting enough matching pairs by definition, we may notice relatively unaffected performances as K increases. Despite a non-negligible variance in the results, the partitions remain consistent overall, and the clustering performance of Magma Clust seems pretty robust to a misspecification of K.
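A runnable sketch of this robustness check is given below. Since we do not reproduce the Magma Clust training call here, a plain k-means serves as a stand-in clustering step; the point is merely how the ARI is computed for deliberately misspecified values of K against a true partition with K = 3 groups.

```r
# Robustness-to-K check in the spirit of Figure 9, with k-means as a runnable
# stand-in for the clustering step (not the actual Magma Clust training call).
library(mclust)  # adjustedRandIndex()

set.seed(1)
M <- 50; N <- 30; K_true <- 3
true_clusters <- sample(1:K_true, M, replace = TRUE)
centres <- matrix(rnorm(K_true * N, sd = 2), nrow = K_true)
curves  <- centres[true_clusters, ] + matrix(rnorm(M * N, sd = 0.5), nrow = M)

ari_by_K <- sapply(2:10, function(k) {
  partition <- kmeans(curves, centers = k, nstart = 20)$cluster
  adjustedRandIndex(partition, true_clusters)
})
names(ari_by_K) <- paste0("K=", 2:10)
round(ari_by_K, 2)
```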
6.3 Model Selection Performance

To remain on the matter of the number of clusters, the model selection abilities of the proposed VBIC maximisation procedure (Proposition 4) are assessed on simulated data. For each of several true numbers of clusters, 50 data sets were simulated according to the previous scheme, and Magma Clust was run successively on each with different settings of K. The setting reaching the highest value of the VBIC is selected as the best model. The percentages of selection for each true number of clusters K* and each value K retained through the VBIC are reported in Table 3. The procedure seems to adequately select the number of clusters in most contexts, with results that deteriorate as K* grows and a tendency to over-penalise, which is a classical behaviour of the BIC in practice (Weakliem, 2016). As the typical BIC formula relies on asymptotic approximations, we also ran the simulation for different numbers of individuals, M = 50 and M = 100. It may be noticed that the VBIC performs better as M increases, as expected. Such a property appears reassuring since the following real-data applications involve data sets gathering around M = 10³ individuals.

K     MSE          WCIC95        Training time   Prediction time
2     7.7  (18.4)  92   (20.3)   70.4  (25)      0.4 (0.1)
3*    3.7  (8.1)   95   (13.2)   97.7  (33.2)    0.5 (0.1)
4     3.2  (5.3)   94.9 (13.6)   116.5 (47.3)    0.6 (0.2)
5     3.2  (5.6)   94.4 (14.3)   133   (40.8)    0.6 (0.2)
6     3.1  (5.4)   94.4 (13.6)   153.3 (42)      0.8 (0.3)
7     4    (9)     93.6 (15.4)   173.7 (45.1)    1   (0.4)
8     4.7  (13)    93.8 (16)     191.3 (44.7)    1   (0.3)
9     4.1  (9.5)   94   (14.6)   211.6 (52)      0.8 (0.4)
10    4.5  (14.8)  94.4 (14.4)   235   (52.7)    1.8 (1.4)

Table 4: Average (sd) values of MSE, WCIC95, training and prediction times (in seconds) over 100 runs for different numbers of clusters used as setting for Magma Clust (3*: the true number of clusters for all data sets).

6.4 Prediction Performance

Another piece of evidence for the robustness to a wrong selection of K is highlighted by Table 4 in the context of forecasting. The predictive aspect of Magma Clust remains the main purpose of the method, and its performance on this task partly relies on an adequate clustering of the individuals. It may be noticed in Table 4 that both MSE and WCIC95 regularly but slowly deteriorate as we move away from the true value of K. However, the performances remain of the same order, and we may still be confident about the predictions obtained through a misspecified run of Magma Clust. In particular, the MSE values happen to be even better when setting K = 4, . . . , 6 (we recall that the same 100 data sets are used in all cases, which can thus be readily compared). Besides, the right-hand part of the table provides indications on the relative time (in seconds) needed to train the model on one data set and to make predictions. As expected, both training and prediction times increase roughly linearly with the value of K, which is consistent with the complexities exposed in Section 5. This property is a consequence of the extra mean processes and hyper-parameters that need to be estimated as K grows. Nonetheless, the influence of K on the prediction time is smaller, and this time remains negligible even when computing many group-specific predictions.

To pursue the matter of prediction, we provide in Table 5 a comparison of forecasting performances between our method and several state-of-the-art alternatives. We use the classical (i.e.
single-task) GP regression as a benchmark, as we would expect all the competing multi-task methods to perform better, considering the additional information that can be shared between individuals. The Magma algorithm (Leroy et al., 2022), which is equivalent to Magma Clust in the specific case where K = 1, is also evaluated to measure the accuracy gain that we can achieve thanks to the additional clustering on the simulated group-structured data sets. As detailed in Section 1, the existing multi-task approaches in the GP literature generally consider specific kernel structures accounting for explicit correlations both between inputs and outputs (Alvarez et al., 2012). The Multi-Output Gaussian Process Toolkit (MOGPTK) (de Wolff et al., 2021) is a Python package that implements the main multi-output covariance kernels from the literature in a unified framework. We relied on this package to run the experiments, first applying the proposed combination (SM LMC) of the spectral mixture kernel (SM) (Wilson and Adams, 2013) and the linear model of coregionalisation (LMC) (Goovaerts, 1997), which is the historical and very general formulation for multi-output kernels. Additionally, we made use of the more recent multi-output spectral mixture (MOSM) algorithm (Parra and Tobar, 2017), which is the default method in the package. For the sake of fair comparison, the L-BFGS-B algorithm (Nocedal, 1980; Morales and Nocedal, 2011) is used as the optimisation procedure in all competing methods.

              MSE          WCIC95        Training time    Prediction time
GP            138  (174)   78.4 (31.1)   0      (0)       0.6 (0.1)
SM LMC        29.9 (95.9)  97.2 (7)      1172.6 (300)     5.6 (0.6)
MOSM          417  (646)   57.8 (45.2)   1148   (60)      7.5 (0.5)
Magma         31.7 (45)    84.4 (27.9)   61.1   (25.7)    0.5 (0.2)
Magma Clust   3.7  (8.1)   95   (13.2)   132    (55.6)    0.6 (0.2)

Table 5: Average (sd) values of MSE, WCIC95, training and prediction times (in seconds) for GP, SM LMC, MOSM, Magma and Magma Clust over 100 simulated testing sets.

Regarding both mean prediction and uncertainty quantification, our approach outperforms the alternatives. In terms of MSE, Magma Clust takes advantage of its multiple mean processes to provide an order-of-magnitude improvement over the best competitors (Magma and SM LMC). As expected, the simple GP regression performs rather poorly, and surprisingly the MOSM results are even worse, since in practice its current implementation seems to reach pathological cases during training most of the time. Additionally, the empirical quantification of uncertainty of Magma Clust appears very convincing since, on average, exactly 95% of the observations lie within the weighted CI95, as expected. In accordance with the theoretical complexity, the increase in training times displayed in Table 5 remains roughly proportional to the value of K (we recall that Magma Clust assumes K = 3 here, whereas Magma corresponds to K = 1). In contrast, the multi-output GP methods (SM LMC and MOSM) multiply both training and prediction times tenfold, even in these reasonably sized experiments (50 individuals and 30 timestamps). This cost comes from the cubic complexity in the number of tasks (due to size-augmented covariance matrices), which results in a massive computational burden as M increases. As a consequence, in the following real-data applications, where thousands of individuals are considered, these methods would merely be unusable in practice.
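The following back-of-the-envelope R sketch illustrates this computational gap with the dimensions of the experiments above: a single (M Ni) × (M Ni) covariance factorisation, as induced by multi-output kernels, versus the K matrices of size N × N and M matrices of size Ni × Ni handled by Magma Clust. The timings are indicative only and the matrices are random placeholders.

```r
# Rough illustration of the cost gap: one (M*Ni) x (M*Ni) Cholesky versus
# K factorisations of size N x N plus M factorisations of size Ni x Ni.
M <- 50; Ni <- 30; N <- 200; K <- 3

make_spd <- function(n) {               # a random symmetric positive-definite matrix
  A <- matrix(rnorm(n * n), n, n)
  crossprod(A) + diag(n)
}

big   <- make_spd(M * Ni)                                # 1500 x 1500
means <- replicate(K, make_spd(N), simplify = FALSE)     # K matrices, 200 x 200
indiv <- replicate(M, make_spd(Ni), simplify = FALSE)    # M matrices, 30 x 30

cost_multi_output <- system.time(chol(big))[["elapsed"]]
cost_magmaclust   <- system.time({
  lapply(means, chol)
  lapply(indiv, chol)
})[["elapsed"]]

c(multi_output = cost_multi_output, magmaclust = cost_magmaclust)
```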
6.5 Application to real data sets

To evaluate the efficiency of our approach in real-life applications, we introduce 3 distinct data sets corresponding to various contexts: sports performances, weight follow-up during childhood, and missing data reconstruction in CO2 emissions. The common aspect of all these problems lies in the presence of time series collected from multiple sources, possibly with irregular measurements, for which we expect to provide accurate forecasts by taking advantage of shared information and clustered structures in the data. For all experiments, the individuals (or countries for CO2) are split into training and testing sets (in proportions 60%–40%). In the absence of expert knowledge, the prior mean functions {mk(·)}k are set to be constant, equal to 0. The hypothesis H00 is specified, along with random initialisations for the hyper-parameters. The hyper-parameters, the mean processes and the clusters' membership probabilities are learnt on the training data set. Then, the data points of each testing individual are split for evaluation purposes between observed values (the first 60%) and testing values (the remaining 40%). Therefore, each new process y*(·) associated with a testing individual is assumed to be partially observed, and its testing values are used to compute the MSE and WCIC95 of the predictions. As measuring clustering performances directly is a vain effort in real-life applications (Kleinberg, 2002), the results are provided for several values of K, from K = 1 (i.e. Magma) to K = 6, to evaluate the effect of clustered data structures on predictions. In all the following experiments, predictive performances tend to reach a plateau as we increase the number of clusters, and no substantial improvement is noticeable for K ≥ 7.

                    Women                      Men
                    MSE        WCIC95          MSE        WCIC95
Magma               4.9 (6.6)  94.3 (14.2)     3   (3.3)  96.9 (9.3)
Magma Clust K = 2   4.5 (6.2)  94.3 (14.1)     2.8 (3.2)  96.2 (10.1)
Magma Clust K = 3   4.3 (6)    94.4 (14.1)     2.8 (3.1)  96.1 (10.4)
Magma Clust K = 4   4.2 (5.9)  94.4 (14.1)     2.8 (3)    96.2 (10)
Magma Clust K = 5   4.2 (5.9)  94.4 (14.1)     2.7 (3)    96.1 (10.1)
Magma Clust K = 6   4.1 (5.7)  94.3 (14.1)     2.8 (3)    96.3 (9.8)

Table 6: Average (sd) values of MSE and WCIC95 for Magma Clust with K = 1, . . . , 6 on the French swimmer testing data sets.

6.5.1 Talent identification in competitive swimming

As previously presented in Section 2.1, the 100m freestyle swimming data sets initially proposed in Leroy et al. (2018) and Leroy et al. (2022) are analysed below in the new light of Magma Clust. The data sets contain results from 1731 women and 7876 men, members of the French Swimming Federation, each of them compiling an average of 22.2 data points (min = 15, max = 61) and 12 data points (min = 5, max = 57), respectively. In the following, the age of the i-th swimmer is considered as the input variable (timestamp t) and the performance (in seconds) on a 100m freestyle as the output (yi(t)). The analysis focuses on the youth period, from 10 to 20 years, where the progression is the most noticeable. Let us recall that we aim at modelling a curve of progression from competition results for each individual in order to forecast their future performances. We expect Magma Clust

Figure 10: Top: women data set. Bottom: men data set. Heatmap of probabilities obtained through the GPs mixture predictive distribution of Magma Clust with K = 5 for a random illustrative swimmer.
The dashed lines represent the mean parameters from the mean processes estimates. Observed data points are in black, testing data points are in yellow. Backward points are a sample of observations from the training data set, coloured according to their most probable cluster. Cluster-Specific Predictions with Multi-Task Gaussian Processes to provide relevant predictions by taking advantage of both its multi-task feature and the clustered structure of data, previously highlighted in Leroy et al. (2018). As exhibited by Table 6, Magma Clust offers excellent performances on both data sets and slightly improves Magma predictions, as we increase the number of clusters. Values of both MSE and WCIC95 appear satisfactory in all cases, and cluster-specific predictions provide additional accuracy though one may fairly argue that the gain remains moderate overall. One of the explaining reasons is highlighted in Figure 10, where we displayed illustrative predictions for a random man and woman when K = 5. Although we can notice clear distinctions between the different mean curves at young ages, these differences tend to decrease afterwards, as adults performances lie in narrow intervals, especially in regards to the overall signal-on-noise ratio. Nevertheless, Magma Clust provides several additional insights into this problem compared to Magma. First, the clusters offer interesting results to assess the profile of young swimmers and to determine the individuals to whom they most resemble. In particular, it is also possible to differentiate future evolutions associated with each cluster, along with their probabilities to occur (we do not display all the cluster-specific predictions here for the sake of concision). On the other hand, our method leads to tighter predictive distributions in terms of uncertainty. Compared to Magma which uses all training data identically, Magma Clust somehow discards the superfluous information, through the weights τ k, to only retain the most relevant cluster-specific mean processes. 6.5.2 Follow-up of children s weight in Singapore In contrast with the previous application, we now study individual time series that are almost similar at young ages before diverging while growing older. This data set (collected through the GUSTO program, see https://www.gusto.sg/) corresponds to a weight followup of 342 children from Singapore at 11 timestamps between birth and 72 months. In this experiment, the goal is to predict the weight of a child at {24, 36, 48, 60, 72} months, using its observed weight at {0, 3, 6, 9, 12, 18} months and data from the training individuals. Since the weight differences between toddlers are shallow until 18 months, providing accurate long-term forecasts, while clear morphology differences emerge, seems particularly challenging at first sight. However, Magma Clust still achieves impressive performances in this context as highlighted by Table 7. Once again, the accuracy of predictions seems to slightly improve as we increase the number of clusters. In this application, recovering an adequate cluster for a child is essential to anticipate which future weight pattern is the most likely, and assuming that more clusters exist appears to help in identifying specific patterns more precisely. This being said, each new cluster increases the overall complexity of the model and whether it is worth adding one cluster regarding the relative gain in accuracy remains a practitioner s choice. 
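To make the previous displays more tangible, the sketch below shows how a heatmap such as those of Figures 10 and 11, together with the WCIC95 measure, can be evaluated from cluster-specific Gaussian predictions. The predictive means, standard deviations and membership probabilities used here are illustrative placeholders, not outputs of the actual model.

```r
# Sketch of the mixture predictive distribution behind the heatmaps, and of the
# WCIC95 measure; all cluster-specific quantities below are placeholders.
t_new <- seq(0, 10, by = 0.1)
tau   <- c(0.5, 0.45, 0.05)                       # membership probabilities
m_k   <- list(2 + 0.5 * t_new, 2 + 0.2 * t_new, 5 - 0.1 * t_new)
s_k   <- list(rep(0.4, length(t_new)), rep(0.5, length(t_new)), rep(0.6, length(t_new)))

# Heatmap: mixture density evaluated on a (timestamp, output) grid
y_grid  <- seq(0, 10, by = 0.05)
density <- sapply(seq_along(t_new), function(u) {
  Reduce(`+`, lapply(1:3, function(k) tau[k] * dnorm(y_grid, m_k[[k]][u], s_k[[k]][u])))
})
# image(t_new, y_grid, t(density))   # heatmap of the mixture predictive density

# WCIC95: weighted proportion of true test values falling in each cluster's CI95
wcic95 <- function(y_true, t_idx, tau, m_k, s_k) {
  covered <- sapply(seq_along(tau), function(k) {
    lower <- m_k[[k]][t_idx] - 1.96 * s_k[[k]][t_idx]
    upper <- m_k[[k]][t_idx] + 1.96 * s_k[[k]][t_idx]
    mean(y_true >= lower & y_true <= upper)
  })
  100 * sum(tau * covered)
}
wcic95(y_true = c(4, 4.5, 5), t_idx = c(50, 70, 90), tau, m_k, s_k)
```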
For instance, the VBIC measure we proposed in Section 3.4 indicates K = 3 as the optimal number of clusters for this data set (keep in mind that this criterion maximises a penalised ELBO and tells us nothing regarding the predictive abilities). Using this value for K allows us to display on Figure 11 the behaviour of Magma Clust predictions for a random testing child. Notice that the prediction remains nicely accurate all along, even though the testing points (in yellow) are not close to any of the mean processes. This particularity comes from the learning of the cluster weights τ*_k at young ages, where the algorithm identifies that this child belonged in nearly equal proportions to the clusters K1 (in red) and K2 (in blue). Therefore, the multi-task GPs mixture posterior distribution defined in Proposition 6 defines a weighted trade-off between the two mean processes that remarkably predicts the true weight values of the child for almost 5 years. Although this nice behaviour on a single example does not prevent pathological cases from occurring in general, we know from Table 7 that those remain rare in practice, and that the computed credible intervals encompass the true data with the correct degree of uncertainty.

                    MSE           WCIC95
Magma               5.35 (9.48)   93.3 (16.7)
Magma Clust K = 2   4.94 (8.98)   95.1 (15.2)
Magma Clust K = 3   5.01 (9.67)   94.7 (16)
Magma Clust K = 4   4.90 (9)      94.8 (16.1)
Magma Clust K = 5   4.95 (9.18)   94.7 (16.1)
Magma Clust K = 6   4.85 (9.8)    94.9 (16.1)

Table 7: Average (sd) values of MSE and WCIC95 for Magma Clust with K = 1, . . . , 6 on the children's weight testing data set.

Figure 11: Heatmap of probabilities obtained through the GPs mixture predictive distribution of Magma Clust with K = 3 for a random illustrative child. The dashed lines represent the mean parameters from the mean processes estimates. Observed data points are in black, testing data points are in yellow. Backward points are observations from the training data set, coloured according to their most probable cluster.

6.5.3 Reconstructing missing data in CO2 emissions

For this last application, we propose to use Magma Clust to tackle a different kind of problem, namely missing data reconstruction. Contrary to the previous data sets, which focused on forecasting, the present collection of time series consists of historical measurements of CO2 emissions per capita for each country from 1750 to 2020 (freely available at https://github.com/owid/co2-data). Naturally, most countries in the world have not collected such data regularly, and many annual observations are missing, especially as we move back in the past (for instance, only Canada and the United Kingdom reported values before 1800). However, our method seems particularly well-suited to recover probable distributions of historical CO2 emissions, by exploiting similarities between countries and transferring knowledge from the densely observed time series to those that remain sparse. To illustrate this ambition, we provide on Figure 12 a visualisation of the raw observations for the first 16 countries appearing in alphabetical order in the data set (for the sake of clarity, we cannot display all countries at once).

Figure 12: Time series representing the evolution of CO2 emissions per capita for an illustrative sample of 16 countries, differentiated by colours.
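For readers willing to reproduce this visualisation, the sketch below retrieves the raw series from the OWID repository and plots the per-capita emissions of the first 16 countries. The file name, branch and column names (country, year, co2_per_capita) are assumed from the current version of the repository and may change over time.

```r
# Sketch reproducing a Figure 12-like plot from the OWID CO2 data; the file
# layout (columns country, year, co2_per_capita) is assumed, not guaranteed.
library(ggplot2)

co2 <- read.csv("https://raw.githubusercontent.com/owid/co2-data/master/owid-co2-data.csv")

first_16 <- head(sort(unique(co2$country)), 16)
sub <- subset(co2, country %in% first_16 & !is.na(co2_per_capita))

ggplot(sub, aes(x = year, y = co2_per_capita, colour = country)) +
  geom_line() +
  labs(x = "Year", y = "CO2 emissions per capita", colour = "Country")
```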
While studying per capita quantities allows us to compare countries with different populations, we can observe that CO2 emissions differ massively around the world, and so does the regularity of the measurements. By simple inspection, some patterns seem to emerge for countries with similar topographical properties (oil and gas producing regions) or similar lifestyles (typically depending on the average wealth of populations). By applying Magma Clust on this data set, we have noticed its ability to automatically recover patterns that seem logical (although such a judgement remains highly subjective). For instance, when setting K = 5, one cluster of 11 countries gathered the United Kingdom, the United States, Russia, France and Canada, among others. Another cluster only counted Kuwait and Qatar as members, while the largest regrouped around 90 countries, mainly from Africa, South Asia and South America, for which the CO2 emissions per capita have generally been really low during the past centuries.

                    MSE           WCIC95
Magma               34.9 (89.5)   92.9 (21.6)
Magma Clust K = 2   28.9 (62.3)   90.8 (24)
Magma Clust K = 3   19.6 (49.1)   92.5 (18.3)
Magma Clust K = 4   15.4 (33.4)   93.7 (17.8)
Magma Clust K = 5   14   (28.7)   94.1 (17.1)
Magma Clust K = 6   14.2 (29.3)   93.4 (18)

Table 8: Average (sd) values of MSE and WCIC95 for Magma Clust with K = 1, . . . , 6 on the CO2 emissions testing data set.

More generally, we report the prediction performances on this data set in Table 8. Notice that increasing the number of clusters dramatically improves accuracy in this context, as it seems clear from Figure 12 that some countries present atypical patterns and should be treated in separate clusters to reach satisfying predictions of the missing values. Overall, these abilities to take advantage of group-structured data, even with irregular measurements, and to provide probabilistic statements, highlight once more the interest of Magma Clust for tackling various machine learning problems, both in supervised and unsupervised contexts.

7. Discussion

We introduced a novel framework to handle clustering and regression purposes with a multi-task GPs mixture model. This approach, called Magma Clust, extends the previous algorithm Magma (Leroy et al., 2022) to deal with group-structured data more efficiently. The method provides new insights on the matter of GP regression by introducing cluster-specific modelling and predictions, while remaining efficiently tractable through the use of variational approximations for inference. Moreover, this nonparametric probabilistic framework accounts for uncertainty with regard to both the clustering and predictive aspects, which appears notable in the machine learning literature. We demonstrated the practical efficiency of Magma Clust on both synthetic and real data sets, where it outperformed the alternatives, particularly in group-structured contexts. Even though the main concern of our method remains the predictive abilities, the clustering performances also deserve to be highlighted, in comparison with state-of-the-art functional clustering algorithms. While we recall that computational cost is of paramount importance to ensure broad applicability of GP models, the present version of Magma Clust yet lacks a sparse approximation, which is not trivial to derive in this framework.
However, one of the state-of-the-art Cluster-Specific Predictions with Multi-Task Gaussian Processes sparse methods (Titsias, 2009; Bauer et al., 2016) also makes use of variational inference, both to select pseudo-inputs and learn hyper-parameters of GP models. Therefore, an interesting extension could come from simultaneously computing {µk( )}k s hyper-posteriors and pseudo-inputs in Magma Clust, allowing for a sparse approximation of the highest dimensional objects in our model. Besides, several additional features would be worth investigating in future work, such as the extension to non-Gaussian likelihoods or enabling online updates in the learning procedure. As a prerequisite, let us introduce an intermediate result that will be used several times in the proofs below. Lemma 7 Let X RN be a random Gaussian vector X N (m, K), where EX denotes the expectation and VX the variance with respect to this distribution. Additionally, let b RN be an arbitrary vector and S a N N covariance matrix. Then: EX (X b) S 1(X b) = (m b) S 1(m b) + tr KS 1 . EX (X b) S 1(X b) = EX tr S 1(X b)(X b) = tr S 1(m b)(m b) + tr S 1VX [ X ] = (m b) S 1(m b) + tr KS 1 . 8.1 Proof of Proposition 1 Throughout, we note Eµ the expectation with respect to the variational distribution bqµ(µ). From Bishop (2006, Chapter 10), the optimal solution bq Z(Z) to the variational formulation verifies: log bq Z(Z) = Eµ h log p({yi}i , Z, µ | bΘ) i + C1 = Eµ h log p({yi}i | Z, µ, n bθi o i) + log p(Z | bπ) + log p(µ | {bγk}k) i + C1 = Eµ h log p({yi}i | Z, µ, n bθi o i) i + log p(Z | bπ) + C2 k=1 Zik log p(yi | Zik = 1, µk(ti), bθi, bσ2 i ) k=1 Zik log(bπk) + C2 k=1 Zik h log(bπk) + Eµk h log p(yi | Zik = 1, µk(ti), bθi, bσ2 i ) i i + C2 Leroy, Latouche, Guedj and Gey 2 log Ψti bθi,bσ2 i (yi µk(ti)) Ψti bθi,bσ2 i 1(yi µk(ti)) # Applying Lemma 7 to the expectation in (10), we obtain: log bq Z(Z) = log Ψti bθi,bσ2 i + (yi bmk(ti)) Ψti bθi,bσ2 i 1 (yi bmk(ti)) 2tr b Cti k Ψti bθi,bσ2 i k=1 Zik [ log τik ] where (by inspection of both Gaussian and multinomial distributions): τik = bπk N yi; bmk(ti), Ψti bθi,bσ2 i 2tr b Cti k Ψti bθi,bσ2 i l=1 bπl N yi; bml(ti), Ψti bθi,bσ2 i 2tr b Cti l Ψti bθi,bσ2 i 1 , i I, k K. Therefore, the optimal solution may be written as a factorised form of multinomial distributions: i=1 M (Zi; 1, τi = (τi1, . . . , τi K) ) . 8.2 Proof of Proposition 2 Let EZ denote by the expectation with respect to the variational distribution bq Z(Z). From Bishop (2006, Chapter 10), the optimal solution bqµ(µ) to the variational formulation verifies: log bqµ(µ) = EZ h log p({yi}i , Z, µ | bΘ) i + C1 = EZ h log p({yi}i | Z, µ, n bθi o i) + log p(Z | bπ) + log p(µ | {bγk}k) i + C1 = EZ h log p({yi}i | Z, µ, n bθi o i) i + log p(µ | {bγk}k) + C2 i=1 EZi h log p(yi | Zi, µ, bθi, bσ2 i ) i + k=1 log p(µk(t) | bγk) + C2 k=1 EZi [ Zik ] log p(yi | Zik = 1, µk(ti), bθi, bσ2 i ) + k=1 log p(µk(t) | bγk) + C2 (µk(t) mk(t)) Ct bγk 1(µk(t) mk(t)) Cluster-Specific Predictions with Multi-Task Gaussian Processes i=1 τik(yi µk(ti)) Ψti bθi,bσ2 i 1(yi µk(ti)) If we regroup the scalar coefficient τik with the covariance matrix Ψti bθi,bσ2 i ply recognise two quadratic terms of Gaussian likelihoods on the variables µk( ), although evaluated onto different sets of timestamps t and ti. By taking some writing cautions and expanding the vector-matrix products entirely, it has been proved in Leroy et al. 
(2022) that this expression factorises with respect to µk(t) simply by expanding vectors yi and matrices Ψti bθi,bσ2 i with zeros, t t, t / ti. Namely, we can notice that: i I, yi = 1[t ti] yi(t) t t, a N-dimensional vector, i I, Ψi = h 1[t,t ti] ψbθi,bσ2 i (t, t ) i t,t t, a N N matrix. log bqµ(µ) = 1 i=1 τik Ψ 1 i Ct bγk 1mk(t) + i=1 τik Ψ 1 i yi By inspection, we recognise a sum of a Gaussian log-likelihoods, which implies the underlying values of the constants. Finally: k=1 N µk(t); bmk(t), b Ct k , (11) b Ct k = Ct bγk 1 + M P i=1 τik Ψ 1 i bmk(t) = b Ct k Ct bγk 1mk(t) + M P i=1 τik Ψ 1 i yi 8.3 Proof of Proposition 3 Let EZ,µ be the expectation with respect to the optimised variational distributions bq Z(Z) and bqµ(µ). From Bishop (2006, Chapter 10), we can figure out the optimal values for the hyper-parameters Θ by maximising the lower bound L(bq; Θ) with respect to Θ: bΘ = argmax Θ L(bq; Θ). Moreover, we can develop the formulation of the lower bound by expressing the integrals as an expectation, namely EZ,µ. Recalling the complete-data likelihood analytical expression Leroy, Latouche, Guedj and Gey and focusing on quantities depending upon Θ, we can write: L(bq; Θ) = E{Z,µ} log bq Z,µ(Z, µ) | {z } constant w.r.t. Θ log p({yi}i , Z, µ | Θ) N µk(t); mk(t), Ct γk M Y πk N yi; µk(ti), Ψti θi,σ2 i log Ct γk + Eµ h (µk(t) mk(t)) Ct γk 1(µk(t) mk(t)) i i=1 E{Z,µ} h Zik log Ψti θi,σ2 i + (yi µk(ti)) Ψti θi,σ2 i 1(yi µk(ti)) i i=1 EZ [ Zik ] log πk log Ct γk + ( bmk(t) mk(t)) Ct γk 1( bmk(t) mk(t)) + tr b Ct k Ct γk 1 i=1 τik log Ψti θi,σ2 i + (yi bmk(ti)) Ψti θi,σ2 i 1(yi bmk(ti)) + tr b Ct kΨti θi,σ2 i i=1 τik log πk where we made use of Lemma 7 twice, at the first and second lines for the last equality. By reorganising the terms on the second line, we can derive another formulation of this lower bound that allows for better managing of the computational resources. For information, we give this expression below since it is the quantity implemented in the current version of the Magma Clust code: L(bq; Θ) = 1 log Ct γk 1 + ( bmk(t) mk(t)) Ct γk 1( bmk(t) mk(t)) + tr b Ct k Ct γk 1 # log Ψti θi,σ2 i + y i Ψti θi,σ2 i 1yi 2y i Ψti θi,σ2 i k=1 τik bmk(ti) Ψti θi,σ2 i k=1 τik bmk(ti) bmk(ti) + b Ct k ! # i=1 τik (log πk) + C2. Regardless of the expression we choose for the following, we can notice that we expressed the lower bound L(q; Θ) as a sum where the hyper-parameters {γk}k, {{θi}i , σ2 i i} and π appear in separate terms. Hence, the resulting maximisation procedures are independent Cluster-Specific Predictions with Multi-Task Gaussian Processes of each other. First, we focus on the simplest term that concerns π, for which we have an analytical update equation. Since there is a constraint on the sum K P k=1 πk = 1, we first need to introduce a Lagrange multiplier in the expression to maximise: + L(q; Θ). (12) Setting to 0 the gradient with respect to πk in (12), we get: i=1 τik = 0, k K. Multiplying by πk and summing over k, we deduce the value of λ: Therefore, the optimal values for πk are expressed as: i=1 τik, k K. 
(13) Concerning the remaining hyper-parameters, in the absence of analytical optima, we have no choice but to numerically maximise the corresponding terms in L(bq; Θ), namely: log Ct γk 1 + ( bmk(t) mk(t)) Ct γk 1( bmk(t) mk(t)) + tr b Ct k Ct γk 1 , (14) k=1 τik log Ψti θi,σ2 i + (yi bmk(ti)) Ψti θi,σ2 i 1(yi bmk(ti)) + tr b Ct kΨti θi,σ2 i (15) It is straightforward to see that some manipulations of linear algebra also allows the derivation of explicit gradients with respect to {γk}k, {θi}i and σ2 i i. Hence, we may take advantage of efficient gradient-based methods to handle the optimisation process. We should stress that the quantity (14) is a sum on the sole values of k, whereas (15) also implies a sum on the values of i. Hence, each term of these sums involves only one hyper-parameter at a time, which thus may be optimised apart from the others. Conversely, if we assume all individuals (respectively all clusters) to share the same set of hyper-parameters, then the full sum has to be maximised upon at once. Therefore, recalling that we introduced 4 different settings according to whether we consider common or specific hyper-parameters for both clusters and individuals, we shall notice the desired maximisation problems that are induced by (14) and (15). Leroy, Latouche, Guedj and Gey 8.4 Proof of Proposition 4 Let us reconsider the expression of L(bq; Θ) from the previous proof. As the model selection procedure takes place after convergence of the learning step, we can use the optimal variational approximation bq Z,µ to compute the lower bound explicitly. Contrarily to the M step though, we now need to develop its full expression, and thus make use of Lemma 7 three times. L(bq; Θ) = E{Z,µ} [ log p({yi}i , Z, µ | Θ) log bq Z,µ(Z, µ) ] = E{Z,µ} log p({yi}i | Z, µ, {θi}i , σ2 i i) + EZ [ log p (Z | π)) ] + Eµ [ log p (µ | {γk}k)) ] EZ [ log bq Z(Z) ] Eµ [ log bqµ(µ) ] log N yi; bmk(ti)), Ψti θi,σ2 i 2tr b Ct kΨti θi,σ2 i k=1 {τik log πk} + log N bmk(t); mk(t), Ct γk 1 2tr b Ct k Ct γk 1 k=1 {τik log τik} log N bmk(t); bmk(t), Ct γk 1 2tr b Ct k b Ct k 1 log N yi; bmk(ti), Ψti θi,σ2 i 2tr b Ct kΨti θi,σ2 i log N bmk(t); mk(t), Ct γk 1 2tr b Ct k Ct γk 1 + 1 2 log b Ct k + N log 2π + N log N yi; bmk(ti), Ψti θi,σ2 i 2tr b Ct kΨti θi,σ2 i log N bmk(t); mk(t), Ct γk 1 2tr b Ct k Ct γk 1 + 1 2 log b Ct k + N log 2π + N . The result follows by considering the analogous expression L(bq; bΘ) in which the hyperparameters are evaluated at their optimal value. Acknowledgments Significant parts of this work have been carried out while Arthur Leroy was affiliated with MAP5, Universit e Paris Cit e, CNRS, UMR 8145, and Department of Computer Science, The University of Sheffield. The authors warmly thank the French Swimming Federation Cluster-Specific Predictions with Multi-Task Gaussian Processes for collecting data and sharing insights on the analysis, as well as Ai Ling Teh and Dennis Wang for providing data from the GUSTO project. The study is supported by the National Research Foundation (NRF) under the Open Fund-Large Collaborative Grant (OF-LCG; MOH-000504) administered by the Singapore Ministry of Health s National Medical Research Council (NMRC) and the Agency for Science, Technology and Research (A*STAR). In RIE2025, GUSTO is supported by funding from the NRF s Human Health and Potential (HHP) Domain, under the Human Potential Programme. Benjamin Guedj acknowledges partial support by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. 
Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (grant EP/R013616/1), and by the French National Agency for Research (grants ANR-18-CE40-0016-01 & ANR-18-CE23-0015-02). Data availability The synthetic data, trained models and results are available at https://github.com/ Arthur Leroy/MAGMAclust/tree/master/Simulations. The real data sets, associated trained models and results are available at https://github.com/Arthur Leroy/MAGMAclust/tree/ master/Real_Data_Study. Code availability The current version of the R package implementing Magma Clust is available on the CRAN and at https://github.com/Arthur Leroy/Magma Clust R. C. Abraham, P. A. Cornillon, E. Matzner-Løber, and N. Molinari. Unsupervised Curve Clustering using B-Splines. Scandinavian Journal of Statistics, 30(3):581 595, Sept. 2003. ISSN 1467-9469. doi: 10.1111/1467-9469.00350. H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716 723, Dec. 1974. ISSN 1558-2523. doi: 10.1109/TAC.1974. 1100705. M. A. Alvarez and N. D. Lawrence. Computationally Efficient Convolved Multiple Output Gaussian Processes. Journal of Machine Learning Research, 12(41):1459 1500, 2011. ISSN 1533-7928. M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for Vector-Valued Functions: A Review. Foundations and Trends in Machine Learning, 4(3):195 266, 2012. ISSN 1935-8237, 1935-8245. doi: 10.1561/2200000036. H. Attias. A variational baysian framework for graphical models. In S. A. Solla, T. K. Leen, and K. M uller, editors, Advances in Neural Information Processing Systems 12, pages 209 215. MIT Press, 2000. M. Bauer, M. van der Wilk, and C. E. Rasmussen. Understanding probabilistic sparse gaussian process approximations. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, Leroy, Latouche, Guedj and Gey and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1533 1541. Curran Associates, Inc., 2016. Y. Bengio. Gradient-Based Optimization of Hyperparameters. Neural Computation, 12(8): 1889 1900, Aug. 2000. ISSN 0899-7667. doi: 10.1162/089976600300015187. J. Bernardo, J. Berger, A. Dawid, and A. Smith. Regression and classification using Gaussian process priors. Bayesian statistics, 6:475, 1998. C. Biernacki, G. Celeux, and G. Govaert. Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models. Computational Statistics & Data Analysis, 41(3):561 575, 2003. ISSN 0167-9473. doi: 10.1016/S0167-9473(02)00163-9. C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, New York, 2006. ISBN 978-0-387-31073-2. E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task gaussian process prediction. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 153 160. Curran Associates, Inc., 2008. C. Bouveyron and J. Jacques. Model-based clustering of time series in group-specific functional subspaces. Advances in Data Analysis and Classification, 5(4):281 300, Dec. 2011. ISSN 1862-5355. doi: 10.1007/s11634-011-0095-6. C. Bouveyron, E. Cˆome, and J. Jacques. The discriminative functional mixture model for a comparative analysis of bike sharing systems. The Annals of Applied Statistics, 9(4): 1726 1760, 2015. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004. ISBN 978-0-521-83378-3. doi: 10.1017/CBO9780511804441. R. Caruana. Multitask Learning. 
Machine Learning, 28(1):41 75, July 1997. ISSN 15730565. doi: 10.1023/A:1007379606734. K. Chen, T. van Laarhoven, P. Groot, J. Chen, and E. Marchiori. Generalized Convolution Spectral Mixture for Multitask Gaussian Processes. IEEE transactions on neural networks and learning systems, Apr. 2020. ISSN 2162-2388. doi: 10.1109/TNNLS.2020.2980779. V. Cohen-Addad, B. Guedj, V. Kanade, and G. Rom. Online k-means clustering. In A. Banerjee and K. Fukumizu, editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics [AISTATS], volume 130 of Proceedings of Machine Learning Research, pages 1126 1134. PMLR, April 2021. T. de Wolff, A. Cuevas, and F. Tobar. MOGPTK: The multi-output Gaussian process toolkit. Neurocomputing, 424:49 53, Feb. 2021. ISSN 0925-2312. doi: 10.1016/j.neucom. 2020.09.085. D. Duvenaud. Automatic Model Construction with Gaussian Processes. Thesis, University of Cambridge, Nov. 2014. Cluster-Specific Predictions with Multi-Task Gaussian Processes M. Giacofci, S. Lambert-Lacroix, G. Marot, and F. Picard. Wavelet-Based Clustering for Mixed-Effects Functional Models in High Dimension. Biometrics, 69(1):31 40, 2013. ISSN 1541-0420. doi: 10.1111/j.1541-0420.2012.01828.x. P. Goovaerts. Geostatistics for natural resources evaluation. Oxford University Press on Demand, 1997. K. Hayashi, T. Takenouchi, R. Tomioka, and H. Kashima. Self-measuring Similarity for Multi-task Gaussian Process. Transactions of the Japanese Society for Artificial Intelligence, 27(3):103 110, 2012. ISSN 1346-8030, 1346-0714. doi: 10.1527/tjsai.27.103. J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI 13, page 282 290, Arlington, Virginia, USA, 2013a. AUAI Press. J. Hensman, N. D. Lawrence, and M. Rattray. Hierarchical bayesian modelling of gene expression time series across irregularly sampled replicates and clusters. BMC bioinformatics, 14(1):1 12, 2013b. M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of research of the National Bureau of Standards, 49(6):409 436, 1952. L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2(1):193 218, Dec. 1985. ISSN 1432-1343. doi: 10.1007/BF01908075. J. Jacques and C. Preda. Funclust: A curves clustering method using functional random variables density approximation. Neurocomputing, 112:164 171, July 2013. ISSN 09252312. doi: 10.1016/j.neucom.2012.11.042. J. Jacques and C. Preda. Functional data clustering: A survey. Advances in Data Analysis and Classification, 8(3):231 255, Sept. 2014. ISSN 1862-5347, 1862-5355. doi: 10.1007/ s11634-013-0158-y. H. Jiang and N. Serban. Clustering Random Curves Under Spatial Interdependence With Application to Service Accessibility. Technometrics, 54(2):108 119, May 2012. ISSN 0040-1706, 1537-2723. doi: 10.1080/00401706.2012.657106. J. Kleinberg. An impossibility theorem for clustering. Advances in neural information processing systems, 15, 2002. A. Leroy, A. Marc, O. Dupas, J. L. Rey, and S. Gey. Functional Data Analysis in Sport Science: Example of Swimmers Progression Curves Clustering. Applied Sciences, 8(10): 1766, Oct. 2018. doi: 10.3390/app8101766. A. Leroy, P. Latouche, B. Guedj, and S. Gey. Magma: inference and prediction using multi-task gaussian processes with common mean. Machine Learning, 111(5):1821 1849, 2022. L. Li and B. Guedj. Sequential learning of principal curves: Summarizing data streams on the fly. 
Entropy, 23(11), 2021. ISSN 1099-4300. doi: 10.3390/e23111534. Leroy, Latouche, Guedj and Gey L. Li, B. Guedj, and S. Loustau. A quasi-Bayesian perspective to online clustering. Electronic Journal of Statistics, 12(2):3071 3113, 2018. ISSN 1935-7524. doi: 10.1214/18-EJS1479. J. L. Morales and J. Nocedal. Remark on algorithm L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization. ACM Transactions on Mathematical Software, 38(1):7:1 7:4, Dec. 2011. ISSN 0098-3500. doi: 10.1145/2049662.2049669. R. M. Neal. Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification. ar Xiv:physics/9701026, Jan. 1997. J. Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773 782, 1980. ISSN 0025-5718, 1088-6842. doi: 10.1090/ S0025-5718-1980-0572855-7. G. Parra and F. Tobar. Spectral mixture kernels for multi-output gaussian processes. Advances in Neural Information Processing Systems, 30, 2017. J. Qui nonero-Candela, C. E. Rasmussen, and C. K. I. Williams. Approximation Methods for Gaussian Process Regression. MIT Press, 2007. ISBN 978-0-262-02625-3. B. Rakitsch, C. Lippert, K. Borgwardt, and O. Stegle. It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1466 1474. Curran Associates, Inc., 2013. J. O. Ramsay and B. W. Silverman. Functional Data Analysis. Springer, 2005. C. E. Rasmussen and H. Nickisch. Gaussian processes for machine learning (GPML) toolbox. The Journal of Machine Learning Research, 11:3011 3015, 2010. C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, Mass, 2006. ISBN 978-0262-18253-9. A. Schmutz, J. Jacques, C. Bouveyron, L. Cheze, and P. Martin. Clustering multivariate functional data in group-specific functional subspaces. Computational Statistics, 35(3): 1101 1131, 2020. A. Schwaighofer, V. Tresp, and K. Yu. Learning gaussian process kernels via hierarchical bayes. In Advances in neural information processing systems, volume 17, 2004. G. Schwarz. Estimating the Dimension of a Model. Annals of Statistics, 6(2):461 464, Mar. 1978. ISSN 0090-5364, 2168-8966. doi: 10.1214/aos/1176344136. J. Shi, R. Murray-Smith, and D. Titterington. Hierarchical Gaussian process mixtures for regression. Statistics and Computing, 15(1):31 41, 2005. ISSN 0960-3174, 1573-1375. doi: 10.1007/s11222-005-4787-7. Cluster-Specific Predictions with Multi-Task Gaussian Processes J. Q. Shi and B. Wang. Curve prediction and clustering with mixtures of Gaussian process functional regression models. Statistics and Computing, 18(3):267 283, 2008. ISSN 09603174, 1573-1375. doi: 10.1007/s11222-008-9055-1. J. Q. Shi, B. Wang, R. Murray-Smith, and D. M. Titterington. Gaussian Process Functional Regression Modeling for Batch Data. Biometrics, 63(3):714 723, 2007. ISSN 1541-0420. doi: 10.1111/j.1541-0420.2007.00758.x. E. Snelson and Z. Ghahramani. Sparse gaussian processes using pseudo-inputs. Advances in neural information processing systems, 18, 2005. K. Swersky, J. Snoek, and R. P. Adams. Multi-task bayesian optimization. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2004 2012. Curran Associates, Inc., 2013. M. Titsias. 
Variational learning of inducing variables in sparse gaussian processes. In Artificial intelligence and statistics, pages 567 574. PMLR, 2009. N. Ueda and R. Nakano. Deterministic annealing EM algorithm. Neural Networks, 11(2): 271 282, Mar. 1998. ISSN 0893-6080. doi: 10.1016/S0893-6080(97)00133-0. J. Vanhatalo, J. Riihim aki, J. Hartikainen, P. Jyl anki, V. Tolvanen, and A. Vehtari. GPstuff: Bayesian Modeling with Gaussian Processes. Journal of Machine Learning Research, 14 (Apr):1175 1179, 2013. ISSN ISSN 1533-7928. D. L. Weakliem. A Critique of the Bayesian Information Criterion for Model Selection:. Sociological Methods & Research, June 2016. doi: 10.1177/0049124199027003002. A. Wilson and R. Adams. Gaussian process kernels for pattern discovery and extrapolation. In S. Dasgupta and D. Mc Allester, editors, Proceedings of Machine Learning Research, volume 28, pages 1067 1075, Atlanta, Georgia, USA, 17 19 Jun 2013. PMLR. J. Yang, H. Zhu, T. Choi, and D. D. Cox. Smoothing and Mean Covariance Estimation of Functional Data with a Bayesian Hierarchical Model. Bayesian Analysis, 11(3):649 670, 2016. ISSN 1936-0975, 1931-6690. doi: 10.1214/15-BA967. C. You, J. T. Ormerod, and S. M uller. On Variational Bayes Estimation and Variational Information Criteria for Linear Regression Models. Australian & New Zealand Journal of Statistics, 56(1):73 87, 2014. ISSN 1467-842X. doi: 10.1111/anzs.12063. K. Yu, V. Tresp, and A. Schwaighofer. Learning gaussian processes from multiple tasks. In Proceedings of the 22Nd International Conference on Machine Learning, pages 1012 1019, 2005. doi: 10.1145/1102351.1102479.