# Multiway clustering via tensor block models

Miaoyan Wang, University of Wisconsin-Madison, miaoyan.wang@wisc.edu
Yuchen Zeng, University of Wisconsin-Madison, yzeng58@wisc.edu

We consider the problem of identifying multiway block structure from a large noisy tensor. Such problems arise frequently in applications such as genomics, recommender systems, topic modeling, and sensor network localization. We propose a tensor block model, develop a unified least-square estimation, and obtain theoretical accuracy guarantees for multiway clustering. The statistical convergence of the estimator is established, and we show that the associated clustering procedure achieves partition consistency. A sparse regularization is further developed for identifying important blocks with elevated means. The proposal handles a broad range of data types, including binary, continuous, and hybrid observations. Through simulation and application to two real datasets, we demonstrate that our approach outperforms previous methods.

1 Introduction

Higher-order tensors have recently attracted increased attention in data-intensive fields such as neuroscience [1], social networks [2], computer vision [3], and genomics [4, 5]. In many applications, the data tensors are expected to have underlying block structure. One example is multi-tissue expression data [4], in which genome-wide expression profiles are collected from different tissues in a number of individuals. There may be groups of genes similarly expressed in subsets of tissues and individuals; mathematically, this implies an underlying three-way block structure in the data tensor. In a different context, block structure may emerge in a binary-valued tensor. Examples include multilayer network data [2], with the nodes representing individuals and the layers representing multiple types of relations. Here a planted block represents a community of individuals that are highly connected within a class of relationships.

Figure 1: Examples of the tensor block model (TBM). (a) Our TBM method is used for multiway clustering and for revealing the underlying checkerbox structure in a noisy tensor. (b) The sparse TBM method is used for detecting sub-tensors of elevated means.

This paper presents a new method and the associated theory for tensors with block structure. We develop a unified least-square estimation procedure for identifying multiway block structure. The proposal applies to a broad range of data types, including binary, continuous, and hybrid observations. We establish a high-probability error bound for the resulting estimator, and show that the procedure enjoys consistency guarantees on block structure recovery as the dimension of the data tensor grows. Furthermore, we develop a sparse extension of the tensor block model for block selection.

Figure 1 shows two immediate examples of our method. When the data tensor possesses a checkerbox pattern modulo some unknown reordering of entries, our method amounts to multiway clustering that simultaneously clusters each mode of the tensor (Figure 1a). When the data tensor has no full checkerbox structure but contains a small number of sub-tensors with elevated means, we develop a sparse version of our method to detect these sub-tensors of interest (Figure 1b).

Related work.
Our work is closely related to, but also clearly distinct from, low-rank tensor decomposition. A number of methods have been developed for low-rank tensor estimation, including the CANDECOMP/PARAFAC (CP) decomposition [6] and the Tucker decomposition [7]. The CP model decomposes a tensor into a sum of rank-1 tensors, whereas the Tucker model decomposes a tensor into a core tensor multiplied by orthogonal matrices in each mode. In this paper we investigate an alternative block structure assumption, which has yet to be studied for higher-order tensors. Note that a block structure automatically implies low-rankness. However, as we will show in Section 4, a direct application of low-rank estimation to the current setting results in an inferior estimator. Therefore, a full exploitation of the block structure is necessary; this is the focus of the current paper. Our work is also connected to biclustering [8] and its higher-order extensions [9, 10]. Existing multiway clustering methods [9, 10, 5, 11] typically take a two-step procedure, by first estimating a low-dimensional representation of the data tensor and then applying clustering algorithms to the tensor factors. In contrast, our tensor block model takes a single shot to perform estimation and clustering simultaneously. This approach achieves higher accuracy and improved interpretability. Moreover, earlier solutions to multiway clustering [12, 9] focus on algorithmic effectiveness, leaving the statistical optimality of the estimators unaddressed. Very recently, Chi et al. [13] provided an attempt to study the statistical properties of the tensor block model. We will show that our estimator obtains a faster convergence rate than theirs, and that the power is further boosted with a sparse regularization.

2 Preliminaries

We begin by reviewing a few basic facts about tensors [14]. We use $\mathcal{Y} = [\![y_{i_1,\ldots,i_K}]\!] \in \mathbb{R}^{d_1\times\cdots\times d_K}$ to denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional tensor. The multilinear multiplication of a tensor $\mathcal{Y}\in\mathbb{R}^{d_1\times\cdots\times d_K}$ by matrices $M_k = [\![m^{(k)}_{j_k,i_k}]\!]\in\mathbb{R}^{s_k\times d_k}$ is defined as
$$\mathcal{Y}\times_1 M_1\times_2\cdots\times_K M_K = \Big[\!\!\Big[\sum_{i_1,\ldots,i_K} y_{i_1,\ldots,i_K}\, m^{(1)}_{j_1,i_1}\cdots m^{(K)}_{j_K,i_K}\Big]\!\!\Big],$$
which results in an order-$K$, $(s_1,\ldots,s_K)$-dimensional tensor. For any two tensors $\mathcal{Y} = [\![y_{i_1,\ldots,i_K}]\!]$ and $\mathcal{Y}' = [\![y'_{i_1,\ldots,i_K}]\!]$ of identical order and dimensions, their inner product is defined as $\langle\mathcal{Y},\mathcal{Y}'\rangle = \sum_{i_1,\ldots,i_K} y_{i_1,\ldots,i_K}\, y'_{i_1,\ldots,i_K}$. The Frobenius norm of a tensor $\mathcal{Y}$ is defined as $\|\mathcal{Y}\|_F = \langle\mathcal{Y},\mathcal{Y}\rangle^{1/2}$; it is the Euclidean norm of $\mathcal{Y}$ regarded as a $\prod_k d_k$-dimensional vector. The maximum norm of $\mathcal{Y}$ is defined as $\|\mathcal{Y}\|_{\max} = \max_{i_1,\ldots,i_K}|y_{i_1,\ldots,i_K}|$. An order-$(K{-}1)$ slice of $\mathcal{Y}$ is a sub-tensor of $\mathcal{Y}$ obtained by holding the index in one mode fixed while letting the other indices vary.

A clustering of $d$ objects is a partition of the index set $[d] := \{1,2,\ldots,d\}$ into $R$ disjoint non-empty subsets. We refer to the number of clusters, $R$, as the clustering size. Equivalently, the clustering (or partition) can be represented using a membership matrix. A membership matrix $M\in\mathbb{R}^{R\times d}$ is an incidence matrix whose $(i,j)$-entry is 1 if and only if element $j$ belongs to cluster $i$, and 0 otherwise. Throughout the paper, we use the terms clustering, partition, and membership matrix interchangeably. For a higher-order tensor, the concept of index partition applies to each of the modes. A block is a sub-tensor induced by the index partitions along each of the $K$ modes. We use the term cluster to refer to the marginal partition on mode $k$, and reserve the term block for the multiway partition of the tensor.
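As a concrete companion to these definitions, the multilinear product and the membership-matrix representation can be written in a few lines of NumPy. This is our own illustrative sketch, not code from the paper; it stores a membership as a $d\times R$ incidence matrix (the convention used in model (2) below), and all function names are ours.

```python
import numpy as np

def multilinear_product(Y, mats):
    """Compute Y x_1 M_1 x_2 ... x_K M_K for a list of matrices M_k of shape (s_k, d_k)."""
    out = Y
    for k, M in enumerate(mats):
        # contract the columns of M against mode k, then restore the mode ordering
        out = np.moveaxis(np.tensordot(M, out, axes=(1, k)), 0, k)
    return out

def membership_matrix(labels, R):
    """d x R incidence matrix: row i carries a single 1 in the column given by labels[i]."""
    M = np.zeros((len(labels), R), dtype=int)
    M[np.arange(len(labels)), labels] = 1
    return M

Y = np.random.randn(4, 5, 6)                                   # an order-3 tensor
mats = [np.random.randn(2, 4), np.random.randn(3, 5), np.random.randn(2, 6)]
print(multilinear_product(Y, mats).shape)                      # (2, 3, 2)
print(np.linalg.norm(Y))                                       # Frobenius norm ||Y||_F
```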
3 Tensor block model

Let $\mathcal{Y} = [\![y_{i_1,\ldots,i_K}]\!]\in\mathbb{R}^{d_1\times\cdots\times d_K}$ denote an order-$K$, $(d_1,\ldots,d_K)$-dimensional data tensor. The main assumption of the tensor block model (TBM) is that the observed data tensor $\mathcal{Y}$ is a noisy realization of an underlying tensor that exhibits a checkerbox structure (see Figure 1a). Specifically, suppose that the $k$-th mode of the tensor consists of $R_k$ clusters. If the tensor entry $y_{i_1,\ldots,i_K}$ belongs to the block determined by the $r_k$-th cluster in mode $k$ for $r_k\in[R_k]$, then we assume that
$$y_{i_1,\ldots,i_K} = c_{r_1,\ldots,r_K} + \varepsilon_{i_1,\ldots,i_K},\quad \text{for } (i_1,\ldots,i_K)\in[d_1]\times\cdots\times[d_K], \qquad (1)$$
where $c_{r_1,\ldots,r_K}$ is the mean of the tensor block indexed by $(r_1,\ldots,r_K)$, and the $\varepsilon_{i_1,\ldots,i_K}$'s are independent, mean-zero noise terms to be specified later. Our goal is to (i) find the clustering along each of the modes, and (ii) estimate the block means $\{c_{r_1,\ldots,r_K}\}$, such that a corresponding blockwise-constant checkerbox structure emerges in the data tensor.

The tensor block model (1) falls into a general class of non-overlapping, constant-mean clustering models [15], in that each tensor entry belongs to exactly one block with a common mean. The TBM can be equivalently expressed as a special tensor Tucker model,
$$\mathcal{Y} = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K + \mathcal{E}, \qquad (2)$$
where $\mathcal{C} = [\![c_{r_1,\ldots,r_K}]\!]\in\mathbb{R}^{R_1\times\cdots\times R_K}$ is a core tensor consisting of block means, $M_k\in\{0,1\}^{d_k\times R_k}$ is a membership matrix indicating the block allocations along mode $k$ for $k\in[K]$, and $\mathcal{E} = [\![\varepsilon_{i_1,\ldots,i_K}]\!]$ is the noise tensor. We view the TBM (2) as a super-sparse Tucker model, in the sense that each column of $M_k$ consists of one copy of 1's and massive 0's.

We make a general assumption on the noise tensor $\mathcal{E}$. The noise terms $\varepsilon_{i_1,\ldots,i_K}$ are assumed to be independent, mean-zero, and $\sigma$-subgaussian, where $\sigma>0$ is the subgaussianity parameter. Precisely,
$$\mathbb{E}\, e^{\lambda\varepsilon_{i_1,\ldots,i_K}} \le e^{\lambda^2\sigma^2/2},\quad\text{for all } (i_1,\ldots,i_K)\in[d_1]\times\cdots\times[d_K]\text{ and all }\lambda\in\mathbb{R}. \qquad (3)$$
The assumption (3) incorporates common situations such as Gaussian noise, Bernoulli noise, and noise with bounded support. In particular, we consider two important examples of the TBM:

Example 1 (Gaussian tensor block model). Let $\mathcal{Y}$ be a continuous-valued tensor. The Gaussian tensor block model (GTBM), $y_{i_1,\ldots,i_K}\overset{\text{i.i.d.}}{\sim} N(c_{r_1,\ldots,r_K},\sigma^2)$, is a special case of model (1), with squared subgaussianity parameter $\sigma^2$ equal to the error variance. The GTBM serves as the foundation for many tensor clustering algorithms [12, 4, 13].

Example 2 (Stochastic tensor block model). Let $\mathcal{Y}$ be a binary-valued tensor. The stochastic tensor block model (STBM), $y_{i_1,\ldots,i_K}\overset{\text{i.i.d.}}{\sim}\text{Bernoulli}(c_{r_1,\ldots,r_K})$, is a special case of model (1), with squared subgaussianity parameter $\sigma^2$ equal to 1/4. The STBM can be viewed as an extension, to higher-order tensors, of the popular stochastic block model [16, 17] for matrix-based network analysis. In the field of community detection, multi-layer stochastic block models have also been developed for multi-relational network data analysis [18, 19].

More generally, our model also applies to hybrid error distributions, in which different types of distributions are allowed for different portions of the tensor. This scenario may happen, for example, when the data tensor $\mathcal{Y}$ represents concatenated measurements from multiple data sources.
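To make Example 1 concrete, the following sketch (ours, not the authors') simulates a Gaussian TBM tensor in the style of the simulations of Section 7, where block means are drawn from Uniform[-3, 3] and the noise is $N(0,\sigma^2)$; the function name and default arguments are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gtbm(dims=(40, 40, 40), R=(4, 4, 4), sigma=3.0):
    """Draw one realization of the Gaussian tensor block model (1)/(2) for K = 3."""
    C = rng.uniform(-3, 3, size=R)                                     # block means c_{r1,r2,r3}
    labels = [rng.integers(0, R[k], size=dims[k]) for k in range(3)]   # mode-k cluster labels
    Theta = C[np.ix_(labels[0], labels[1], labels[2])]                 # blockwise-constant signal
    Y = Theta + sigma * rng.standard_normal(dims)                      # add i.i.d. Gaussian noise
    return Y, Theta, labels

Y, Theta, labels = simulate_gtbm()
print(Y.shape)   # (40, 40, 40); sorting indices by cluster reveals the checkerbox of Figure 1a
```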
Before we discuss the estimation, we present the identifiability of the TBM.

Assumption 1 (Irreducible core). The core tensor $\mathcal{C}$ is called irreducible if it cannot be written as a block tensor with the number of mode-$k$ clusters smaller than $R_k$, for any $k\in[K]$.

In the matrix case ($K=2$), irreducibility is equivalent to saying that $\mathcal{C}$ has no two identical rows and no two identical columns. In the higher-order case, the assumption requires that no two order-$(K{-}1)$ slices of $\mathcal{C}$ within the same mode are identical. Note that irreducibility is a weaker assumption than full-rankness.

Proposition 1 (Identifiability). Consider a Gaussian or Bernoulli TBM (1). Under Assumption 1, the factor matrices $M_k$'s are identifiable up to permutations of cluster labels.

The identifiability property of the TBM improves upon that of classical factor models [20, 21]. In the Tucker decomposition [22, 14] and many other factor analyses [20, 21], the factors are identifiable only up to orthogonal rotations. Those models recover only the (column) space spanned by $M_k$, but not the individual factors. In contrast, our model does not suffer from rotational invariance, and as we show in Section 4, every individual factor is consistently estimated in high dimensions. This brings a benefit to the interpretation of factors in the tensor block model.

We propose a least-square approach for estimating the TBM. Let $\Theta = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K$ denote the mean signal tensor with block structure. The mean tensor is assumed to belong to the following parameter space:
$$\mathcal{P}_{R_1,\ldots,R_K} = \big\{\Theta\in\mathbb{R}^{d_1\times\cdots\times d_K} :\ \Theta = \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K \text{ for some membership matrices } M_k\text{'s and a core tensor } \mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K}\big\}.$$
In the following theoretical analysis, we assume the clustering size $R = (R_1,\ldots,R_K)$ is known and simply write $\mathcal{P}$ for short. The adaptation to unknown $R$ will be addressed in Section 5.2. The least-square estimator for the TBM (1) is
$$\hat\Theta = \arg\min_{\Theta\in\mathcal{P}}\big\{-2\langle\mathcal{Y},\Theta\rangle + \|\Theta\|_F^2\big\}. \qquad (4)$$
The objective is equal (ignoring constants) to the sum of squares $\|\mathcal{Y}-\Theta\|_F^2$, and hence the name of our estimator.

4 Statistical convergence

In this section, we establish the convergence rate of the least-square estimator (4) under two performance measures. The first is the mean squared error (MSE):
$$\mathrm{MSE}(\Theta_{\mathrm{true}},\hat\Theta) = \frac{1}{\prod_k d_k}\,\|\Theta_{\mathrm{true}} - \hat\Theta\|_F^2,$$
where $\Theta_{\mathrm{true}},\hat\Theta\in\mathcal{P}$ are the true and estimated mean tensors, respectively. While the loss function corresponds to the likelihood for the Gaussian tensor model, the same assertion does not hold for other types of distributions such as the stochastic tensor block model. We will show that, with very high probability, a simple least-square estimator achieves a fast convergence rate in a general class of block tensor models.

Theorem 1 (Convergence rate of MSE). Let $\hat\Theta$ be the least-square estimator of $\Theta_{\mathrm{true}}$ under model (1). There exist two constants $C_1, C_2>0$ such that
$$\mathrm{MSE}(\Theta_{\mathrm{true}},\hat\Theta) \le C_1\sigma^2\,\frac{\prod_k R_k + \sum_k d_k\log R_k}{\prod_k d_k} \qquad (5)$$
holds with probability at least $1-\exp\big(-C_2\big(\prod_k R_k + \sum_k d_k\log R_k\big)\big)$, uniformly over $\Theta_{\mathrm{true}}\in\mathcal{P}$.

The convergence rate of the MSE in (5) consists of two parts. The first part, $\prod_k R_k$, is the number of parameters in the core tensor $\mathcal{C}$, while the second part, $\sum_k d_k\log R_k$, reflects the complexity of estimating the $M_k$'s. It is the price one has to pay for not knowing the locations of the blocks.

We compare our bound with the existing literature. The Tucker tensor decomposition has a minimax convergence rate proportional to $\sum_k d_k R^*_k$ [22], where $R^*_k$ is the multilinear rank in mode $k$. Applying Tucker decomposition to the TBM yields $\sum_k d_k R_k$, because the mode-$k$ rank is bounded by the number of mode-$k$ clusters. Now, as both the dimension $d_{\min} = \min_k d_k$ and the clustering size $R_{\min} = \min_k R_k$ tend to infinity, we have $\prod_k R_k + \sum_k d_k\log R_k \ll \sum_k d_k R_k$. Therefore, by fully exploiting the block structure, we obtain a better convergence rate than previously possible.
Recently, [13] proposed a convex relaxation for estimating the TBM. In the special case when the tensor dimensions are equal in every mode, $d_1=\cdots=d_K=d$, their estimator has a convergence rate of order $O(d^{-1})$ for all $K\ge 2$. As we see from (5), our estimator obtains a much better convergence rate, $O(d^{-(K-1)})$, which is especially favorable as the order increases.

The bound (5) generalizes previous results on structured matrix estimation in network analysis [23, 16]. Earlier work [16] suggests the following heuristic on the sample complexity for the matrix case:
$$\frac{(\text{number of parameters}) + \log(\text{complexity of models})}{\text{number of samples}}. \qquad (6)$$
Our result supports this important principle for general $K\ge 2$. Note that, in the TBM, the sample size is the total number of entries $\prod_k d_k$, the number of parameters is $\prod_k R_k$, and the combinatorial complexity of estimating the block structure is of order $\prod_k R_k^{d_k}$.

Next we study the consistency of the partition. To define the misclassification rate (MCR), we need to introduce some additional notation. Let $M_k = [\![m^{(k)}_{i,r}]\!]$ and $\hat M_k = [\![\hat m^{(k)}_{i,r}]\!]$ be two mode-$k$ membership matrices, and let $D^{(k)} = [\![D^{(k)}_{r,r'}]\!]$ be the mode-$k$ confusion matrix with elements
$$D^{(k)}_{r,r'} = \frac{1}{d_k}\sum_{i=1}^{d_k}\mathbb{1}\big\{m^{(k)}_{i,r} = \hat m^{(k)}_{i,r'} = 1\big\},\quad\text{where } r,r'\in[R_k].$$
Note that the row/column sums of $D^{(k)}$ represent the proportions of nodes in each cluster defined by $M_k$ or $\hat M_k$. We restrict ourselves to non-degenerate clusterings; that is, the row/column sums of $D^{(k)}$ are lower bounded by a constant $\tau>0$. With a slight abuse of notation, we still use $\mathcal{P} = \mathcal{P}(\tau)$ to denote the parameter space with the non-degeneracy assumption. The least-square estimator (4) should also be interpreted with this constraint imposed. We define the mode-$k$ misclassification rate (MCR) as
$$\mathrm{MCR}(M_k,\hat M_k) = \max_{r\in[R_k],\ a\ne a'\in[R_k]} \min\big\{D^{(k)}_{a,r},\, D^{(k)}_{a',r}\big\}.$$
In other words, the MCR is the element-wise maximum of the confusion matrix after removing the largest entry from each column. Under the non-degeneracy assumption, $\mathrm{MCR}=0$ if and only if the confusion matrix $D^{(k)}$ is a permutation of a diagonal matrix; that is, the estimated partition matches the true partition, up to permutations of cluster labels.

Theorem 2 (Convergence rate of MCR). Consider a tensor block model (2) with sub-Gaussian parameter $\sigma$. Define the minimal gap between blocks $\delta_{\min} = \frac{1}{\|\mathcal{C}\|_{\max}}\min_k\delta^{(k)}$, where
$$\delta^{(k)} = \min_{r_k\ne r'_k}\ \max_{r_1,\ldots,r_{k-1},r_{k+1},\ldots,r_K}\big(c_{r_1,\ldots,r_k,\ldots,r_K} - c_{r_1,\ldots,r'_k,\ldots,r_K}\big)^2.$$
Let $M_{k,\mathrm{true}}$ be the true mode-$k$ membership and $\hat M_k$ the estimator from (4). Then, for any $\varepsilon\in[0,1]$,
$$\mathbb{P}\big(\mathrm{MCR}(\hat M_k, M_{k,\mathrm{true}})\ge\varepsilon\big) \le 2^{1+\sum_k d_k}\exp\!\bigg(-\frac{C\varepsilon^2\delta_{\min}^2\tau^{3K-2}\prod_{k=1}^K d_k}{\sigma^2}\bigg),$$
where $C>0$ is a positive constant, and $\tau>0$ is the lower bound on the cluster proportions.

The above theorem shows that our estimator consistently recovers the block structure as the dimension of the data tensor grows. The block-mean gap $\delta_{\min}$ plays the role of the eigen-separation in the classical tensor Tucker decomposition [22]. Table 1 summarizes the comparison of various tensor methods in the special case when $d_1=\cdots=d_K=d$ and $R_1=\cdots=R_K=R$.

| Method | Recovery error ($\|\hat\Theta-\Theta_{\mathrm{true}}\|_F^2/\sigma^2$) | Clustering error (MCR) | Block detection (see Section 6) |
|---|---|---|---|
| Tucker [22] | $dR$ | -- | No |
| CoCo [13] | $d^{K-1}$ | -- | No |
| TBM (this paper) | $d\log R$ | $\frac{\sigma}{\delta_{\min}\tau^{(3K-2)/2}}\, d^{-(K-1)/2}$ | Yes |

Table 1: Comparison of various tensor decomposition methods. For ease of presentation, we summarize only the leading terms in the dimension.
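To illustrate the MCR, the sketch below (our own, not part of the paper) forms the mode-$k$ confusion matrix $D^{(k)}$ from two label vectors and returns the MCR; it equals zero exactly when the two partitions agree up to a relabeling of clusters.

```python
import numpy as np

def misclassification_rate(labels_true, labels_hat, R):
    """Mode-k MCR from Section 4: the largest confusion-matrix entry that remains
    after deleting the biggest entry of each column."""
    d = len(labels_true)
    D = np.zeros((R, R))                       # D[r, r'] = fraction with true label r, estimated r'
    for r, r_hat in zip(labels_true, labels_hat):
        D[r, r_hat] += 1.0 / d
    D_sorted = np.sort(D, axis=0)              # sort each column in ascending order
    return float(D_sorted[:-1, :].max()) if R > 1 else 0.0

true_lab = np.array([0, 0, 1, 1, 2, 2])
print(misclassification_rate(true_lab, np.array([2, 2, 0, 0, 1, 1]), R=3))   # 0.0 (pure relabeling)
print(misclassification_rate(true_lab, np.array([0, 1, 1, 1, 2, 2]), R=3))   # ~0.167 (one error)
```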
5 Numerical implementation

5.1 Alternating optimization

We introduce an alternating optimization algorithm for solving (4). Estimating $\Theta$ consists of finding both the core tensor $\mathcal{C}$ and the membership matrices $M_k$'s. The optimization (4) can be written as
$$(\hat{\mathcal{C}}, \{\hat M_k\}) = \arg\min_{\mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K},\ \text{membership matrices } M_k\text{'s}} f(\mathcal{C},\{M_k\}),\quad\text{where } f(\mathcal{C},\{M_k\}) = \|\mathcal{Y} - \mathcal{C}\times_1 M_1\times_2\cdots\times_K M_K\|_F^2.$$
The decision variables consist of $K+1$ blocks of variables, one for the core tensor $\mathcal{C}$ and $K$ for the membership matrices $M_k$'s. We notice that if any $K$ out of the $K+1$ blocks of variables are known, then the last block of variables can be solved explicitly. This observation suggests that we can iteratively update one block of variables at a time while keeping the others fixed. Specifically, given the collection of $\hat M_k$'s, the core tensor estimate $\hat{\mathcal{C}} = \arg\min_{\mathcal{C}} f(\mathcal{C},\{\hat M_k\})$ consists of the sample averages of each tensor block. Given the block means $\hat{\mathcal{C}}$ and $K-1$ membership matrices, the last membership matrix can be solved using a simple nearest-neighbor search over only $R_k$ discrete points. The full procedure is described in Algorithm 1.

Algorithm 1: Multiway clustering based on tensor block models
Input: Data tensor $\mathcal{Y}\in\mathbb{R}^{d_1\times\cdots\times d_K}$, clustering size $R = (R_1,\ldots,R_K)$.
Output: Block mean tensor $\hat{\mathcal{C}}\in\mathbb{R}^{R_1\times\cdots\times R_K}$ and the membership matrices $\hat M_k$'s.
1: Initialize the marginal clustering by performing independent k-means on each of the $K$ modes.
2: repeat
3: Update the core tensor $\hat{\mathcal{C}} = [\![\hat c_{r_1,\ldots,r_K}]\!]$. Specifically, for each $(r_1,\ldots,r_K)\in[R_1]\times\cdots\times[R_K]$,
$$\hat c_{r_1,\ldots,r_K} = \frac{1}{n_{r_1,\ldots,r_K}}\sum_{(i_1,\ldots,i_K)\in \hat M_1^{-1}(r_1)\times\cdots\times \hat M_K^{-1}(r_K)} y_{i_1,\ldots,i_K}, \qquad (7)$$
where $\hat M_k^{-1}(r_k)$ denotes the set of indices that belong to the $r_k$-th cluster in mode $k$, and $n_{r_1,\ldots,r_K} = \prod_k |\hat M_k^{-1}(r_k)|$ denotes the number of entries in the block indexed by $(r_1,\ldots,r_K)$.
4: for $k$ in $\{1,2,\ldots,K\}$ do
5: Update the mode-$k$ membership matrix $\hat M_k$. Specifically, for each $a\in[d_k]$, assign the cluster label
$$\hat M_k(a) = \arg\min_{r\in[R_k]}\ \sum_{I_{-k}}\big(\hat c_{\hat M_1(i_1),\ldots,r,\ldots,\hat M_K(i_K)} - y_{i_1,\ldots,a,\ldots,i_K}\big)^2,$$
where $I_{-k} = (i_1,\ldots,i_{k-1},i_{k+1},\ldots,i_K)$ denotes the tensor coordinates excluding the $k$-th mode.
6: end for
7: until convergence

Algorithm 1 can be viewed as a higher-order extension of the ordinary (one-way) k-means algorithm. The core tensor $\mathcal{C}$ plays the role of the centroids. As each iteration reduces the value of the objective function, which is bounded below, convergence of the algorithm is guaranteed. The per-iteration computational cost scales linearly with the sample size $d = \prod_k d_k$, and this complexity matches that of classical tensor methods [24, 25, 22]. We recognize that obtaining the global optimizer for such a non-convex optimization is typically difficult [26, 1]. Following common practice in non-convex optimization [1], we run the algorithm multiple times, using random initializations with independent one-way k-means on each of the modes.
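The updates above are simple enough to prototype directly. The following is a minimal NumPy sketch of the alternating scheme, written by us for illustration; it is not the authors' released R package tensorsparse, and for brevity it uses random label initialization and a fixed iteration budget in place of the mode-wise k-means initialization and convergence check of Algorithm 1.

```python
import numpy as np
from itertools import product

def update_core(Y, labels, R):
    """Step 3: each core entry is the sample average of its block, as in (7)."""
    C = np.zeros(R)
    masks = [[labels[k] == r for r in range(R[k])] for k in range(len(R))]
    for block in product(*[range(r) for r in R]):
        sub = Y[np.ix_(*[masks[k][block[k]] for k in range(len(R))])]
        C[block] = sub.mean() if sub.size else 0.0          # guard against empty blocks
    return C

def update_labels(Y, C, labels, mode):
    """Step 5: reassign each index of the given mode to the cluster with smallest squared error."""
    Yk, Ck = np.moveaxis(Y, mode, 0), np.moveaxis(C, mode, 0)
    other = [labels[k] for k in range(Y.ndim) if k != mode]
    new = np.empty(Yk.shape[0], dtype=int)
    for a in range(Yk.shape[0]):
        errs = [((Ck[r][np.ix_(*other)] - Yk[a]) ** 2).sum() for r in range(Ck.shape[0])]
        new[a] = int(np.argmin(errs))
    return new

def tbm_fit(Y, R, n_iter=30, seed=0):
    """Alternating least squares for the tensor block model (sketch of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    # random initialization; the paper initializes with independent k-means on each mode
    labels = [rng.integers(0, R[k], size=Y.shape[k]) for k in range(Y.ndim)]
    for _ in range(n_iter):
        C = update_core(Y, labels, R)
        for k in range(Y.ndim):
            labels[k] = update_labels(Y, C, labels, k)
    return update_core(Y, labels, R), labels
```

For example, `tbm_fit(Y, (4, 4, 4))` can be applied to the tensor produced by the earlier simulation sketch; as the paper recommends, several random restarts help avoid poor local optima of this non-convex objective.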
5.2 Tuning parameter selection

Algorithm 1 takes the number of clusters $R$ as an input. In practice such information is often unknown, and $R$ needs to be estimated from the data $\mathcal{Y}$. We propose to select this tuning parameter using a Bayesian information criterion, $\mathrm{BIC}(R)$, which adds to the log residual sum of squares $\log\|\mathcal{Y}-\hat\Theta\|_F^2$ a penalty proportional to $p_e$, the effective number of parameters in the model. In our case we take $p_e = \prod_k R_k + \sum_k d_k\log R_k$, which is inspired by (6). We choose the $\hat R$ that minimizes $\mathrm{BIC}(R)$ via grid search. Our choice of BIC aims to balance the goodness-of-fit to the data against the degrees of freedom in the population model. We test its empirical performance in Section 7.

6 Extension to sparse estimation

In some large-scale applications, not every block in a data tensor is of equal importance. For example, in genome-wide expression data analysis, only a few entries represent signals while the majority come from background noise (see Figure 1b). While our estimator (4) is still able to handle this scenario by assigning small values to some of the $\hat c_{r_1,\ldots,r_K}$'s, the estimates may suffer from high variance. It is thus beneficial to introduce regularized estimation for a better bias-variance trade-off and improved interpretability. Here we illustrate a sparse version of the TBM by imposing regularization on the block means for localizing important blocks in the data tensor. The problem can be formulated as variable selection on the block parameters. We propose the following regularized least-square estimation:
$$\hat\Theta^{\mathrm{sparse}} = \arg\min_{\Theta\in\mathcal{P}}\big\{\|\mathcal{Y}-\Theta\|_F^2 + \lambda\|\mathcal{C}\|_\rho\big\},$$
where $\mathcal{C}\in\mathbb{R}^{R_1\times\cdots\times R_K}$ is the block-mean tensor, $\|\mathcal{C}\|_\rho$ is the penalty function with $\rho$ an index for the tensor norm, and $\lambda$ is the penalty tuning parameter. Some widely used penalties include the Lasso penalty ($\rho=1$), the sparse subset penalty ($\rho=0$), the ridge penalty ($\rho$ = Frobenius norm), and the elastic net (a linear combination of $\rho=1$ and the Frobenius norm), among many others. For parsimony, we only discuss the Lasso and sparse subset penalties; other penalizations can be derived similarly.

Sparse estimation incurs only slight changes to Algorithm 1. When updating the core tensor $\mathcal{C}$ in (7), we fit a penalized least-square problem with respect to $\mathcal{C}$. The closed form for the entry-wise sparse estimate $\hat c^{\mathrm{sparse}}_{r_1,\ldots,r_K}$ is (see Lemma 2 in the Supplement):
$$\hat c^{\mathrm{sparse}}_{r_1,\ldots,r_K} = \begin{cases} \hat c^{\mathrm{ols}}_{r_1,\ldots,r_K}\,\mathbb{1}\Big\{|\hat c^{\mathrm{ols}}_{r_1,\ldots,r_K}|\ge\sqrt{\tfrac{\lambda}{n_{r_1,\ldots,r_K}}}\Big\} & \text{if }\rho=0,\\[6pt] \mathrm{sign}\big(\hat c^{\mathrm{ols}}_{r_1,\ldots,r_K}\big)\Big(|\hat c^{\mathrm{ols}}_{r_1,\ldots,r_K}| - \tfrac{\lambda}{2n_{r_1,\ldots,r_K}}\Big)_+ & \text{if }\rho=1,\end{cases}$$
where $a_+ = \max(a,0)$ and $\hat c^{\mathrm{ols}}_{r_1,\ldots,r_K}$ denotes the ordinary least-square estimate in (7). The choice of penalty $\rho$ often depends on the study goals and interpretations in specific applications. Given a penalty function, we select the tuning parameter $\lambda$ via the same BIC, where we modify $p_e$ into $p_e^{\mathrm{sparse}} = \|\hat{\mathcal{C}}^{\mathrm{sparse}}\|_0 + \sum_k d_k\log R_k$. Here $\|\cdot\|_0$ denotes the number of non-zero entries in the tensor. The empirical performance of this proposal will be evaluated in Section 7.
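The two closed-form cases above are exactly hard and soft thresholding of the blockwise least-square means, which the short sketch below makes explicit (our own illustration; the example values and array shapes are ours, not from the paper).

```python
import numpy as np

def threshold_core(C_ols, n_blocks, lam, rho):
    """Blockwise sparse core update from Section 6: hard thresholding for the l0 penalty
    (rho = 0), soft thresholding for the lasso penalty (rho = 1). n_blocks holds the
    number of tensor entries in each block."""
    if rho == 0:
        return C_ols * (np.abs(C_ols) >= np.sqrt(lam / n_blocks))
    if rho == 1:
        return np.sign(C_ols) * np.maximum(np.abs(C_ols) - lam / (2 * n_blocks), 0.0)
    raise ValueError("rho must be 0 or 1 in this sketch")

C_ols = np.array([[0.2, -2.5], [1.4, -0.1]])          # toy least-square block means
n_blocks = np.array([[100, 80], [120, 90]])           # entries per block
print(threshold_core(C_ols, n_blocks, lam=30.0, rho=0))   # small means are zeroed out
print(threshold_core(C_ols, n_blocks, lam=30.0, rho=1))   # all means are shrunk toward zero
```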
7 Experiments

In this section, we evaluate the empirical performance of our TBM method (software available at https://cran.r-project.org/web/packages/tensorsparse). We consider both non-sparse and sparse tensors, and compare the recovery accuracy with other tensor-based methods. Unless otherwise stated, we generate Gaussian tensors under the block model (1). The block means are generated i.i.d. from Uniform[-3, 3]. The entries in the noise tensor $\mathcal{E}$ are generated i.i.d. from $N(0,\sigma^2)$. In each simulation study, we report the summary statistics across $n_{\mathrm{sim}} = 50$ replications.

7.1 Finite-sample performance

In the first experiment, we assess the empirical relationship between the root mean squared error (RMSE) and the dimension. We set $\sigma=3$ and consider tensors of order 3 and order 4 (see Figure 2). In the case of order-3 tensors, we increase $d_1$ from 20 to 70, and for each choice of $d_1$, we set the other two dimensions $(d_2,d_3)$ such that $d_1\log R_1\approx d_2\log R_2\approx d_3\log R_3$. Recall that our theoretical analysis suggests a convergence rate of $O(\sqrt{\log R_1/(d_2 d_3)})$ for our estimator. Figure 2a plots the recovery error versus the rescaled sample size $N_1 = \sqrt{d_2 d_3/\log R_1}$. We find that the RMSE decreases roughly at the rate $1/N_1$. This is consistent with our theoretical result. It is observed that tensors with a higher number of blocks tend to yield higher recovery errors, as reflected by the upward shift of the curves as $R$ increases. Indeed, a higher $R$ means a higher intrinsic dimension of the problem, thus increasing the difficulty of the estimation. Similar behavior can be observed in the order-4 case from Figure 2b, where the rescaled sample size is $N_2 = \sqrt{d_2 d_3 d_4/\log R_1}$.

Figure 2: Estimation error for block tensors with Gaussian noise. Each curve corresponds to a fixed clustering size $R$. (a) Average RMSE against the rescaled sample size $N_1 = \sqrt{d_2 d_3/\log R_1}$ for order-3 tensors. (b) Average RMSE against the rescaled sample size $N_2 = \sqrt{d_2 d_3 d_4/\log R_1}$ for order-4 tensors.

In the second experiment, we evaluate the selection performance of our BIC criterion from Section 5.2. Supplementary Table S1 reports the selected numbers of clusters under various combinations of dimension $d$, clustering size $R$, and noise $\sigma$. We find that, for the case $d=(40,40,40)$ and $R=(4,4,4)$, the BIC selection is accurate in the low-to-moderate noise setting. In the high-noise setting with $\sigma=12$, the selected number of clusters is slightly smaller than the true number, but the accuracy increases when either the dimension increases to $d=(40,40,80)$ or the clustering size reduces to $R=(2,3,4)$. Within a tensor, the selection seems to be easier for shorter modes with a smaller number of clusters. This phenomenon is to be expected, since a shorter mode has more effective samples for clustering.

7.2 Comparison with alternative methods

Next, we compare our TBM method with two popular low-rank tensor estimation methods: (i) CP decomposition and (ii) Tucker decomposition. Following the literature [13, 5, 9], we perform the clustering by applying k-means to the resulting factors along each of the modes. We refer to these techniques as CP+k-means and Tucker+k-means. We generate noisy block tensors with five clusters on each of the modes, and then assess both the estimation and the clustering performance of each method. Note that TBM takes a single shot to perform estimation and clustering simultaneously, whereas the CP- and Tucker-based methods separate these two tasks into two steps. We use the RMSE to assess the estimation accuracy and the clustering error rate (CER) to measure the clustering accuracy. The CER is calculated from the disagreements (i.e., one minus the Rand index) between the true and estimated block partitions of the three-way tensor; a small sketch of this computation is given at the end of this subsection. For fair comparison, we provide all methods with the true number of clusters.

Figure 3a shows that TBM achieves the lowest estimation error among the three methods. The gain in accuracy is more pronounced as the noise grows. Neither CP nor Tucker recovers the signal tensor, although Tucker appears to yield a modest clustering performance (Figure 3b). One possible explanation is that the Tucker model imposes orthogonality on the factors, which makes the subsequent k-means clustering easier than for the CP factors. Figures 3b-c show that the clustering error increases with noise but decreases with dimension. This agrees with our expectation, as in tensor data analysis a larger dimension implies a larger sample size.

Figure 3: Performance comparison in terms of RMSE and CER. (a) Estimation error against noise for tensors of dimension (40, 40, 40). (b) Clustering error against noise for tensors of dimension (40, 40, 40). (c) Clustering error against noise for tensors of dimension (40, 50, 60).
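As promised above, here is a small sketch (ours) of the CER computation via one minus the Rand index; the paper applies it to the block partition induced on the whole three-way tensor, whereas this toy example uses a single label vector.

```python
from itertools import combinations

def clustering_error_rate(labels_true, labels_hat):
    """CER used in Section 7.2: one minus the Rand index, i.e. the fraction of object
    pairs on which the two partitions disagree (same cluster vs. different clusters)."""
    disagree, total = 0, 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_hat = labels_hat[i] == labels_hat[j]
        disagree += int(same_true != same_hat)
        total += 1
    return disagree / total

print(clustering_error_rate([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))   # 0.2
```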
Sparse case. We then evaluate the performance when the signal tensor is sparse. The simulated model is the same as before, except that we generate the block means from a mixture of a point mass at zero and Uniform[-3, 3], with probabilities $p$ (the sparsity rate) and $1-p$, respectively. We generate noisy tensors of dimension $d=(40,40,40)$ with varying levels of sparsity and noise. We utilize the $\ell_0$-penalized TBM and primarily focus on the selection accuracy. The performance is quantified via the sparsity error rate, which is the proportion of entries that were incorrectly set to zero or incorrectly set to non-zero. We also report the proportion of true zeros that were correctly identified (correct zeros). Table 2 reports the BIC-selected $\lambda$ averaged across 50 simulations. We see a substantial benefit obtained by penalization. The proposed $\lambda$ is able to guide the algorithm to correctly identify zeros, while maintaining good accuracy in identifying non-zeros. The resulting sparsity level is close to the ground truth. Supplementary Figure S1 shows the estimation error and the sparsity error against $\sigma$ when $p=0.8$. Again, the sparse TBM outperforms the other methods.

| Sparsity ($p$) | Noise ($\sigma$) | BIC-selected $\lambda$ | Estimated Sparsity Rate | Correct Zero Rate | Sparsity Error Rate |
|---|---|---|---|---|---|
| 0.5 | 4 | 136.0 (37.5) | 0.55 (0.04) | 1.00 (0.02) | 0.06 (0.03) |
| 0.5 | 8 | 439.2 (80.2) | 0.58 (0.06) | 0.94 (0.08) | 0.15 (0.07) |
| 0.8 | 8 | 458.0 (63.3) | 0.81 (0.15) | 0.87 (0.16) | 0.21 (0.13) |

Table 2: Sparse TBM for estimating tensors of dimension $d=(40,40,40)$. The reported statistics are averaged across 50 simulations, with standard deviations given in parentheses. Number in bold indicates that the ground truth is within 2 standard deviations of the sample average.

7.3 Real data analysis

Lastly, we apply our method to two real datasets. The first dataset is a real-valued tensor, consisting of approximately 1 million expression values from 13 brain tissues, 193 individuals, and 362 genes [4]. We subtracted the overall mean expression from the data, and applied the $\ell_0$-penalized TBM to identify important blocks in the resulting tensor. The top blocks exhibit a clear tissue-gene specificity (Supplementary Table S2). In particular, the top over-expressed block is driven by the tissues {Substantia nigra, Spinal cord} and the genes {GFAP, MBP}, suggesting their elevated expression across individuals. In fact, GFAP encodes filament proteins for mature astrocytes and MBP encodes a myelin sheath protein in oligodendrocytes, both of which play important roles in the central nervous system [27]. Our method also identifies blocks with extremely negative means (i.e., under-expressed blocks). The top under-expressed block is driven by the tissues {Cerebellum, Cerebellar Hemisphere} and the genes {CDH9, GPR6, RXFP1, CRH, DLX5/6, NKX2-1, SLC17A8}. The gene DLX6 encodes proteins involved in forebrain development [27], whereas the cerebellum tissues are located in the hindbrain. The opposite spatial function is consistent with the observed under-expression pattern.

The second dataset we consider is the Nations data [2]. This is a 14 x 14 x 56 binary tensor consisting of 56 political relationships among 14 countries between 1950 and 1965. We note that 78.9% of the entries are zero. Again, we applied the $\ell_0$-penalized TBM to identify important blocks in the data. We found that the 14 countries are naturally partitioned into 5 clusters: two representing neutral countries, {Brazil, Egypt, India, Israel, Netherlands} and {Burma, Indonesia, Jordan}, one eastern bloc, {China, Cuba, Poland, USSR}, and two western blocs, {USA} and {UK}.
The relation types are partitioned into 7 clusters, among which the exports-related activities {reltreaties, booktranslations, relbooktranslations, exports3, relexports} and the NGO-related activities {relintergovorgs, relngo, intergovorgs3, ngoorgs3} are two major clusters that involve connections between the neutral and western blocs. Other top blocks are described in the Supplement.

We compared the goodness-of-fit of various clustering methods on the Brain expression and Nations datasets. Because the code for the CoCo method [13] is not yet available, we excluded it from our numerical comparison (see Section 4 for the theoretical comparison with CoCo). Table 3 summarizes the proportion of variance explained by each clustering method:

| Dataset | TBM | TBM-sparse | CP+k-means | Tucker+k-means | CoTeC [12] |
|---|---|---|---|---|---|
| Brain expression | 0.856 | 0.855 | 0.576 | 0.434 | 0.849 |
| Nations | 0.439 | 0.433 | 0.324 | 0.253 | 0.419 |

Table 3: Comparison of goodness-of-fit on the Brain expression and Nations datasets.

Our method (TBM) achieves the highest variance proportion, suggesting that the entries within the same cluster are close (i.e., a good clustering). As expected, the sparse TBM results in a slightly lower proportion, because it has a lower model complexity at the cost of a small bias. It is remarkable that the sparse TBM still achieves a higher goodness-of-fit than the other methods. The improved interpretability with little loss of accuracy makes the sparse TBM appealing in applications.

8 Conclusion

We have developed a statistical setting for studying the tensor block model. Under suitable assumptions, the least-square estimator achieves a convergence rate of $O(\sum_k d_k\log R_k)$, which is faster than previously possible. Our TBM method applies to a broad range of data distributions and can handle both sparse and dense data tensors. We demonstrate the benefit of sparse regularization in the power of detection. In specific applications, prior knowledge may suggest other forms of regularization for the parameters. For example, in multi-layer network analysis, it may sometimes be reasonable to impose symmetry on the parameters along certain modes. In some other applications, non-negativity of the parameter values may be enforced. We leave these directions for future study.

Acknowledgements

This research was supported by NSF grant DMS-1915978 and the University of Wisconsin-Madison, Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.

References

[1] Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.
[2] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In International Conference on Machine Learning, volume 11, pages 809–816, 2011.
[3] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In International Conference on Machine Learning, pages 163–171, 2013.
[4] Miaoyan Wang, Jonathan Fischer, and Yun S Song. Three-way clustering of multi-tissue multi-individual gene expression data using constrained tensor decomposition. Annals of Applied Statistics, in press, 2019.
[5] Victoria Hore, Ana Viñuela, Alfonso Buil, Julian Knight, Mark I McCarthy, Kerrin Small, and Jonathan Marchini. Tensor decomposition for multiple-tissue gene expression experiments. Nature Genetics, 48(9):1094, 2016.
[6] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6(1–4):164–189, 1927.
[7] Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[8] Kean Ming Tan and Daniela M Witten. Sparse biclustering of transposable data. Journal of Computational and Graphical Statistics, 23(4):985–1008, 2014.
[9] Tamara G Kolda and Jimeng Sun. Scalable tensor decompositions for multi-aspect data mining. In 2008 Eighth IEEE International Conference on Data Mining, pages 363–372. IEEE, 2008.
[10] Chang-Dong Wang, Jian-Huang Lai, and Philip S Yu. Multi-view clustering based on belief propagation. IEEE Transactions on Knowledge and Data Engineering, 28(4):1007–1021, 2015.
[11] Miaoyan Wang and Lexin Li. Learning from binary multiway data: Probabilistic tensor decomposition and its statistical optimality. arXiv preprint arXiv:1811.05076, 2018.
[12] Stefanie Jegelka, Suvrit Sra, and Arindam Banerjee. Approximation algorithms for tensor clustering. In International Conference on Algorithmic Learning Theory, pages 368–383. Springer, 2009.
[13] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. arXiv preprint arXiv:1803.06518, 2018.
[14] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[15] Sara C Madeira and Arlindo L Oliveira. Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 1(1):24–45, 2004.
[16] Chao Gao and Zongming Ma. Minimax rates in network analysis: Graphon estimation, community detection and hypothesis testing. arXiv preprint arXiv:1811.06055, 2018.
[17] Peter J Bickel and Aiyou Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068–21073, 2009.
[18] Jing Lei, Kehui Chen, and Brian Lynch. Consistent community detection in multi-layer network data. Biometrika, to appear, 2019.
[19] Subhadeep Paul, Yuguo Chen, et al. Consistent community detection in multi-relational data through restricted multi-layer stochastic blockmodel. Electronic Journal of Statistics, 10(2):3807–3870, 2016.
[20] Robin A Darton. Rotation in factor analysis. Journal of the Royal Statistical Society: Series D (The Statistician), 29(3):167–194, 1980.
[21] Hervé Abdi. Factor rotations in factor analyses. Encyclopedia for Research Methods for the Social Sciences, Sage: Thousand Oaks, pages 792–795, 2003.
[22] Anru Zhang and Dong Xia. Tensor SVD: Statistical and computational limits. IEEE Transactions on Information Theory, 2018.
[23] Chao Gao, Yu Lu, Zongming Ma, and Harrison H Zhou. Optimal estimation and completion of matrices with biclustering structures. The Journal of Machine Learning Research, 17(1):5602–5630, 2016.
[24] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
[25] Miaoyan Wang and Yun Song. Tensor decompositions via two-mode higher-order SVD (HOSVD). In Artificial Intelligence and Statistics, pages 614–622, 2017.
[26] Daniel Aloise, Amit Deshpande, Pierre Hansen, and Preyas Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75(2):245–248, 2009.
[27] Nuala A O'Leary, Mathew W Wright, J Rodney Brister, Stacy Ciufo, Diana Haddad, Rich McVeigh, Bhanu Rajput, Barbara Robbertse, Brian Smith-White, Danso Ako-Adjei, et al. Reference sequence (RefSeq) database at NCBI: Current status, taxonomic expansion, and functional annotation. Nucleic Acids Research, 44(D1):D733–D745, 2015.