# Sparse Tensor Additive Regression

Journal of Machine Learning Research 22 (2021) 1-43. Submitted 9/19; Revised 1/21; Published 1/21.

Botao Hao (haobotao000@gmail.com), DeepMind, 5 New Street, London, UK

Boxiang Wang (boxiang-wang@uiowa.edu), Department of Statistics and Actuarial Science, The University of Iowa, Iowa City, IA 52242, USA

Pengyuan Wang (pengyuan@uga.edu), Department of Marketing, University of Georgia, Athens, GA 30602, USA

Jingfei Zhang (ezhang@bus.miami.edu), Department of Management Science, University of Miami, Coral Gables, FL 33146, USA

Jian Yang (jianyang@verizonmedia.com), Yahoo Research, Verizon Media, Sunnyvale, CA 94089, USA

Will Wei Sun (sun244@purdue.edu), Krannert School of Management, Purdue University, West Lafayette, IN 47907, USA

Editor: Francis Bach

Abstract

Tensors are becoming prevalent in modern applications such as medical imaging and digital marketing. In this paper, we propose a sparse tensor additive regression (STAR) model that represents a scalar response as a flexible nonparametric function of tensor covariates. The proposed model effectively exploits the sparse and low-rank structures in the tensor additive regression. We formulate the parameter estimation as a non-convex optimization problem and propose an efficient penalized alternating minimization algorithm. We establish a non-asymptotic error bound for the estimator obtained from each iteration of the proposed algorithm, which reveals an interplay between the optimization error and the statistical rate of convergence. We demonstrate the efficacy of STAR through extensive comparative simulation studies, and an application to click-through-rate prediction in online advertising.

Keywords: additive models; low-rank tensor; non-asymptotic analysis; non-convex optimization; tensor regression.

©2021 Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, Will Wei Sun. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v22/19-769.html.

1. Introduction

Tensor data have recently become popular in a wide range of applications such as medical imaging (Zhou et al., 2013; Li and Zhang, 2017; Sun and Li, 2017), digital marketing (Zhe et al., 2016; Sun et al., 2017), video processing (Guo et al., 2012), and social network analysis (Park and Chu, 2009; Hoff, 2015), among many others. In such applications, a fundamental statistical tool is tensor regression, a modern high-dimensional regression method that relates a scalar response to tensor covariates. For example, in neuroimaging analysis, an important objective is to predict clinical outcomes using subjects' brain imaging data. This can be formulated as a tensor regression problem by treating the clinical outcomes as the response and the brain images as the tensor covariates. Another example arises in the study of how advertisement placement affects users' clicking behavior in online advertising. This again can be formulated as a tensor regression problem by treating the daily overall click-through rate (CTR) as the response and the tensor that summarizes the impressions (i.e., view counts) of different advertisements on different devices (e.g., phone, computer, etc.) as the covariate. In Section 6, we consider such an online advertising application.

Denote $y_i$ as a scalar response and $X_i \in \mathbb{R}^{p_1 \times p_2 \times \cdots \times p_m}$ as an $m$-way tensor covariate, $i = 1, 2, \ldots, n$.
A general tensor regression model can be formulated as

  $y_i = T(X_i) + \epsilon_i, \quad i = 1, 2, \ldots, n$,

where $T(\cdot): \mathbb{R}^{p_1 \times p_2 \times \cdots \times p_m} \to \mathbb{R}$ is an unknown regression function and $\{\epsilon_i\}_{i=1}^n$ are scalar observation noises. Many existing methods assume a linear relationship between the response and the tensor covariates by considering $T(X_i) = \langle B, X_i \rangle$ for some low-rank tensor coefficient $B$ (Zhou et al., 2013; Rabusseau and Kadri, 2016; Yu and Liu, 2016; Guhaniyogi et al., 2017; Raskutti et al., 2019). In spite of its simplicity, the linearity assumption can be restrictive and difficult to satisfy in real applications. Consider the online advertising data in Section 6 as an example. Figure 1 shows the marginal relationship between the overall CTR and the impressions of an advertisement delivered on phone, tablet, and PC, respectively. It is clear that the relationship between the response (i.e., the overall CTR) and the covariate (i.e., impressions across three devices) departs notably from the linearity assumption.

A few works have considered more flexible tensor regressions by treating $T(\cdot)$ as a nonparametric function (Suzuki et al., 2016; Kanagawa et al., 2016). In particular, Suzuki et al. (2016) proposed a general nonlinear model in which the true function $T(\cdot)$ consists of components from a reproducing kernel Hilbert space, and used an alternating minimization estimation procedure; Kanagawa et al. (2016) considered a Bayesian approach that employs a Gaussian process prior to learn the nonparametric function $T(\cdot)$ on the reproducing kernel Hilbert space. One serious limitation of both works is that they assume the tensor covariates are exactly low-rank. This assumption is difficult to satisfy in practice, as most tensor covariates are not exactly low-rank, and when it fails the performance of these two methods deteriorates dramatically; see Section 5.2 for more details. In addition, the Gaussian process approach is computationally very expensive, which severely limits its application to problems with high-dimensional tensor covariates.

In this paper, we develop a flexible and computationally feasible tensor regression framework that accommodates a nonlinear relationship between the response and the tensor covariate and is highly interpretable.

[Figure 1: three scatter plots of the overall click-through rate versus the impressions of a certain advertisement delivered on (a) phone, (b) tablet, and (c) PC. The black solid curves are the fitted locally weighted scatter-plot smoother (LOESS) curves.]

Specifically, for an $m$-way tensor covariate $X_i \in \mathbb{R}^{p_1 \times \cdots \times p_m}$, we consider a sparse tensor additive regression (STAR) model with

  $T(X_i) = \sum_{j_1=1}^{p_1} \cdots \sum_{j_m=1}^{p_m} f^*_{j_1 \ldots j_m}([X_i]_{j_1 \ldots j_m})$,  (1)

where $[X_i]_{j_1 \ldots j_m}$ denotes the $(j_1, \ldots, j_m)$-th element of $X_i$, and $f^*_{j_1 \ldots j_m}(\cdot)$ is a nonparametric additive component belonging to some smooth function class. Approximating each additive component $f^*_{j_1 \ldots j_m}(\cdot)$ by a spline series expansion, $T(X_i)$ can be simplified to a compact tensor representation of spline coefficients. To reduce the number of parameters and increase computational efficiency, we assume that the corresponding high-dimensional coefficient tensors have low-rank and group sparsity structures.
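To make the series expansion concrete, here is a minimal sketch (not from the paper; the component function, spline degree, and knot placement are illustrative choices) of approximating a single smooth component by a B-spline basis, $f(x) \approx \sum_{h=1}^{d_n} \beta_h \psi_h(x)$, using SciPy:

```python
# Sketch: B-spline series approximation of one additive component on [0, 1].
# The target f, the cubic degree, and the 4 inner knots are illustrative.
import numpy as np
from scipy.interpolate import BSpline

degree, n_inner = 3, 4
knots = np.concatenate((np.zeros(degree),
                        np.linspace(0.0, 1.0, n_inner + 2),
                        np.ones(degree)))
d_n = len(knots) - degree - 1          # number of basis functions psi_h

def basis_matrix(x):
    """Evaluate the d_n B-spline basis functions psi_h at the points x."""
    cols = [BSpline.basis_element(knots[h:h + degree + 2],
                                  extrapolate=False)(x)
            for h in range(d_n)]
    return np.nan_to_num(np.column_stack(cols))   # zeros outside support

f = lambda x: np.sin(1.5 * x)          # a smooth component, as in Section 5
x = np.linspace(0.0, 1.0, 200)
Psi = basis_matrix(x)                  # 200 x d_n design matrix
beta, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)
print(np.abs(Psi @ beta - f(x)).max()) # small uniform approximation error
```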
Both low-rankness and sparsity are commonly used dimension reduction tools in recent tensor models (Li and Zhang, 2017; Sun et al., 2017; Sun and Li, 2017; Hao et al., 2018; Zhang, 2019; Zhang and Han, 2019). Besides effectively reducing the computational cost, the group sparsity structure also significantly improves model interpretability. For instance, in the online advertising example, when the daily overall CTR is regressed on the impressions of different advertisements on different devices, the group sparsity enables our STAR model to select effective advertisement and device combinations. This type of advertisement selection is important for managerial decision making and has been an active research area (Choi et al., 2010; Xu et al., 2016). To efficiently estimate the model, we formulate the parameter estimation as a non-convex optimization problem and propose a penalized alternating minimization algorithm. By fully exploiting the low-rankness and group sparsity structures, as well as developing an efficient algorithm, our STAR model may even run faster than tensor linear regression in some experiments. For example, in the online advertising application, our STAR model can reduce the CTR prediction error by 50% while using 10% of the computational time of the linear or nonlinear tensor regression benchmark models. See Section 6 for more details.

Besides the methodological contributions, we also obtain strong theoretical results for the proposed method. In particular, we first establish a general theory for penalized alternating minimization in the context of the tensor additive model. To the best of our knowledge, this is the first statistical-versus-optimization guarantee for penalized alternating minimization. Previous works mostly focus on either the EM-type update (Wang et al., 2014; Balakrishnan et al., 2017; Hao et al., 2017) or the truncation-based update (Sun et al., 2017); those techniques are not directly applicable to our scenario, see Section 4.1 for detailed explanations. Next, we derive a non-asymptotic error bound for the estimator from each iteration, which demonstrates the improvement of the estimation error in each update. Finally, we apply this general theory to our STAR estimator with the B-spline basis and the group-lasso penalty, and show that the estimation error in the $(t+1)$-th iteration satisfies

  $E^{(t+1)} \lesssim \underbrace{\rho^{t+1} E^{(0)}}_{\text{optimization error}} + \underbrace{\frac{C_1}{1-\rho}\, n^{-\frac{2\kappa-1}{2\kappa+1}} \log(p d_n)}_{\text{statistical error}}$,

where $0 < \rho \le 1/2$ is a contraction parameter, $\kappa$ is the smoothness parameter of the function class, $p = \max\{p_1, \ldots, p_m\}$, and $d_n$ is the number of spline series. The above error bound reveals an interesting interplay between the optimization error and the statistical error. The optimization error decays geometrically with the iteration number $t$, while the statistical error remains the same as $t$ grows. When the tensor covariate is of order one (i.e., a vector covariate), our problem reduces to the vector nonparametric additive model. In that case, our statistical error matches that of the vector nonparametric additive model in Huang et al. (2010).

1.1 Other related work

The problem we consider is fundamentally different from those in tensor decomposition and tensor response regression. As a result, the technical tools involved and the theoretical results are quite different.
Tensor decomposition (Chi and Kolda, 2012; Anandkumar et al., 2014; Yuan and Zhang, 2016; Sun et al., 2017) is an unsupervised learning method that aims to find the best low-rank approximation of a single tensor. In comparison, our STAR model is a supervised learning method that seeks to capture the nonlinear relationship between the response and the tensor covariate. Although the low-rank structure of the tensor coefficient is also employed in our estimation, our objective and the technical tools involved are entirely different from the typical tensor decomposition problem. Additionally, one fundamental difference is that our model works with multiple tensor samples, while tensor decomposition works with only a single tensor. As a result, our error bound is a function of the sample size, which is different from that in tensor decomposition.

Another line of related work considers tensor response regression, where the response is a tensor and the covariates are scalars (Zhu et al., 2009; Li and Zhang, 2017; Sun and Li, 2017). These works also utilize the low-rank and/or sparse structures of the coefficient tensors for dimension reduction. However, tensors are treated as the response in tensor response regression, whereas they are treated as covariates in our approach. These are two very different types of models, motivated by different applications. Tensor response regression aims to study the change of the tensor (e.g., the brain image) as the covariate (e.g., disease status) varies, whereas our tensor regression model focuses on understanding the change of a scalar outcome (e.g., the overall CTR) with the tensor covariates. As a result, the technical tools used for the theoretical analysis are also largely different.

1.2 Notations and structure

Throughout this article, we denote scalars by lower-case characters such as $x$, vectors by lower-case bold characters such as $\mathbf{x}$, matrices by upper-case bold characters such as $\mathbf{X}$, and tensors by upper-case script characters such as $\mathcal{X}$. Given a vector $x \in \mathbb{R}^p$ and a set of indices $T \subseteq \{1, \ldots, p\}$, we define $x_T$ such that $[x_T]_j = x_j$ if $j \in T$ and $[x_T]_j = 0$ otherwise. For a square matrix $A$, we denote by $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ its minimum and maximum eigenvalues, respectively. For any function $f$ on $[a, b]$, we define its $\ell_2(P)$ norm by $\|f\|_2 = \sqrt{\int_a^b f^2(x)\, dP(x)}$. Suppose $X, Y \in \mathbb{R}^{p_1 \times p_2 \times \cdots \times p_m}$ are $m$-way tensors. We define the tensor inner product $\langle X, Y \rangle = \sum_{j_1, \ldots, j_m} X_{j_1 \ldots j_m} Y_{j_1 \ldots j_m}$, and the tensor Frobenius norm $\|X\|_F = \sqrt{\sum_{j_1=1}^{p_1} \cdots \sum_{j_m=1}^{p_m} X^2_{j_1 \ldots j_m}}$. The notation $a \lesssim b$ means $a \le C_1 b$ for some constant $C_1 > 0$. For any two sequences $\{a_n\}_{n=1}^{\infty}, \{b_n\}_{n=1}^{\infty}$, we write $a_n = O(b_n)$ if there exist some positive constant $C_2$ and sufficiently large $n$ such that $a_n \le C_2 b_n$, and we write $a_n \asymp b_n$ if there exist constants $C_3, C_4 > 0$ such that $C_3 a_n \le b_n \le C_4 a_n$ for all $n \ge 1$.

The rest of the article is organized as follows. Section 2 introduces our sparse tensor additive regression model. Section 3 develops an efficient penalized alternating minimization algorithm for model estimation. Section 4 investigates its theoretical properties, followed by simulation studies in Section 5 and a real online advertising application in Section 6. The appendix collects all technical proofs.

2. Sparse Tensor Additive Model

Given i.i.d. samples $\{y_i, X_i\}_{i=1}^n$, our sparse tensor additive model assumes

  $y_i = T(X_i) + \epsilon_i = \sum_{j_1=1}^{p_1} \cdots \sum_{j_m=1}^{p_m} f^*_{j_1 \ldots j_m}([X_i]_{j_1 \ldots j_m}) + \epsilon_i, \quad i = 1, \ldots, n$,  (2)
where $f^*_{j_1 \ldots j_m}(\cdot)$ is a nonparametric additive function belonging to some smooth function class $\mathcal{H}$, and $\{\epsilon_i\}_{i=1}^n$ are i.i.d. observation noises. Our STAR model utilizes a spline series expansion (Huang et al., 2010; Fan et al., 2011) to approximate each individual nonparametric additive component. Let $S_n$ be the space of polynomial splines and $\{\psi_h(x)\}_{h=1}^{d_n}$ a normalized basis for $S_n$, where $d_n$ is the number of spline series and $\sup_x |\psi_h(x)| \le 1$. It is known that for any $f_n \in S_n$, there always exist coefficients $\{\beta_h\}_{h=1}^{d_n}$ such that $f_n(x) = \sum_{h=1}^{d_n} \beta_h \psi_h(x)$. In addition, under suitable smoothness assumptions (see Lemma 28), each nonparametric additive component $f^*_{j_1 \ldots j_m}(\cdot)$ can be well approximated by functions in $S_n$. Applying this approximation to each individual component, the regression function $T(X_i)$ in (2) can be approximated by

  $\sum_{j_1=1}^{p_1} \cdots \sum_{j_m=1}^{p_m} \sum_{h=1}^{d_n} \beta^*_{j_1 \ldots j_m h}\, \psi_{j_1 \ldots j_m h}([X_i]_{j_1 \ldots j_m})$.  (3)

The expression in (3) has a compact tensor representation. Define $F_h(X) \in \mathbb{R}^{p_1 \times \cdots \times p_m}$ such that $[F_h(X)]_{j_1 \ldots j_m} = \psi_{j_1 \ldots j_m h}([X]_{j_1 \ldots j_m})$, and $B^*_h \in \mathbb{R}^{p_1 \times \cdots \times p_m}$ such that $[B^*_h]_{j_1 \ldots j_m} = \beta^*_{j_1 \ldots j_m h}$ for $h \in [d_n]$, where $[k]$ denotes $\{1, \ldots, k\}$ for an integer $k \ge 1$. Consequently, we can write

  $\sum_{j_1=1}^{p_1} \cdots \sum_{j_m=1}^{p_m} \sum_{h=1}^{d_n} \beta^*_{j_1 \ldots j_m h}\, \psi_{j_1 \ldots j_m h}([X_i]_{j_1 \ldots j_m}) = \sum_{h=1}^{d_n} \big\langle B^*_h, F_h(X_i) \big\rangle$.  (4)

Therefore, the parameter estimation of the nonparametric additive model (2) reduces to the estimation of the unknown tensor coefficients $B^*_1, \ldots, B^*_{d_n}$.

The coefficients $B^*_1, \ldots, B^*_{d_n}$ include a total of $O(d_n \Pi_{j=1}^m p_j)$ free parameters, which could be much larger than the sample size $n$. In such an ultrahigh-dimensional scenario, it is important to employ dimension reduction tools. A common tensor dimension reduction tool is the low-rank assumption (Chi and Kolda, 2012; Anandkumar et al., 2014; Yuan and Zhang, 2016; Sun et al., 2017). Accordingly, we assume each coefficient tensor $B^*_1, \ldots, B^*_{d_n}$ satisfies the CP low-rank decomposition (Kolda and Bader, 2009):

  $B^*_h = \sum_{r=1}^{R} \beta^*_{1hr} \circ \cdots \circ \beta^*_{mhr}, \quad h = 1, \ldots, d_n$,  (5)

where $\circ$ is the vector outer product, $\beta^*_{1hr} \in \mathbb{R}^{p_1}, \ldots, \beta^*_{mhr} \in \mathbb{R}^{p_m}$, and $R \ll \min\{p_1, \ldots, p_m\}$ is the CP-rank. This formulation reduces the effective number of parameters from $O(d_n \Pi_{j=1}^m p_j)$ to $O(d_n R \sum_{j=1}^m p_j)$, and hence greatly improves computational efficiency. Under this formulation, our model can be written as

  $T(X_i) \approx \sum_{h=1}^{d_n} \Big\langle \sum_{r=1}^{R} \beta^*_{1hr} \circ \cdots \circ \beta^*_{mhr}, F_h(X_i) \Big\rangle$.  (6)

Remark 1: Our model in (6) can be viewed as a generalization of several existing works. When $\psi_{j_1 \ldots j_m h}(\cdot)$ in (4) is the identity basis function ($\psi_{j_1 \ldots j_m h}([X]_{j_1 \ldots j_m}) = [X]_{j_1 \ldots j_m}$) with only one basis ($d_n = 1$), (6) reduces to the bilinear form (Li et al., 2010; Hung and Wang, 2012) for a matrix covariate ($m = 2$), and to the multilinear form of linear tensor regression (Zhou et al., 2013; Hoff, 2015; Yu and Liu, 2016; Rabusseau and Kadri, 2016; Sun and Li, 2017; Guhaniyogi et al., 2017; Raskutti et al., 2019) for a tensor covariate ($m \ge 3$).

In addition to the CP low-rank structure on the tensor coefficients, we further impose a group-type sparsity constraint on the components $\beta^*_{khr}$. This group sparsity structure not only further reduces the effective parameter size, but also improves model interpretability, as it enables variable selection of components in the tensor covariate. Recall that in (6) we have $\beta^*_{khr} = (\beta^*_{khr1}, \ldots, \beta^*_{khrp_k})^\top$ for $k \in [m]$, $h \in [d_n]$, $r \in [R]$. We define our group sparsity constraint as

  $S_k := \Big\{ j \in [p_k] : \sum_{h=1}^{d_n} \sum_{r=1}^{R} \beta^{*2}_{khrj} \ne 0 \Big\}, \quad |S_k| = s_k \ll p_k, \quad \text{for } k \in [m]$,  (7)

where $|S_k|$ refers to the cardinality of the set $S_k$.
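For concreteness, the sketch below (illustrative shapes and a single toy basis shared across entries, whereas the model allows entry-specific bases $\psi_{j_1 \ldots j_m h}$) evaluates the approximation (6) for a three-way covariate directly from the CP factors, with the group sparsity (7) encoded by zeroing whole fibers:

```python
# Sketch: evaluating T(X) = sum_h <sum_r beta_1hr o beta_2hr o beta_3hr, F_h(X)>
# for m = 3. All dimensions, factors, and the basis are illustrative.
import numpy as np

p1, p2, p3, d_n, R = 6, 5, 4, 8, 2
rng = np.random.default_rng(0)
# beta[k][j, h, r] = beta_{khrj}; zeroing row j for all (h, r) is the
# group sparsity in (7): variable j of mode k drops out entirely.
beta = [rng.normal(size=(p, d_n, R)) * (rng.random((p, 1, 1)) < 0.5)
        for p in (p1, p2, p3)]
psi = [lambda x, h=h: np.sin((h + 1) * np.pi * x) for h in range(d_n)]

def T_hat(X):
    total = 0.0
    for h in range(d_n):
        Fh = psi[h](X)                    # F_h(X): apply psi_h entrywise
        for r in range(R):
            # rank-1 component beta_1hr o beta_2hr o beta_3hr of B_h
            Bhr = np.einsum('i,j,k->ijk', beta[0][:, h, r],
                            beta[1][:, h, r], beta[2][:, h, r])
            total += np.sum(Bhr * Fh)     # tensor inner product
    return total

print(T_hat(rng.random((p1, p2, p3))))
```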
Figure 2 provides an illustration of the low-rank (5) and group-sparse (7) coefficients when the order of the tensor is $m = 3$. When $m = 1$, our model with the group sparsity constraint reduces to the vector sparse additive model (Ravikumar et al., 2009; Meier et al., 2009; Huang et al., 2010).

[Figure 2. An illustration of the low-rank and group-sparsity structures in a collection of three-way tensor coefficients $(B^*_1, \ldots, B^*_{d_n})$. If one or more of the coefficients at the colored locations are non-zero, the cardinality of $S_1$ increases by one.]

Remark 2: Consider a tensor of order 2. The model in (6) reduces to $T(X_i) \approx \sum_{h=1}^{d_n} \big\langle \sum_{r=1}^{R} \beta_{1hr} \beta_{2hr}^\top, F_h(X_i) \big\rangle$ with $\sum_{r=1}^{R} \beta_{1hr} \beta_{2hr}^\top \in \mathbb{R}^{p_1 \times p_2}$. For instance, suppose $\sum_{h=1}^{d_n} \sum_{r=1}^{R} \beta^2_{1hr1} = 0$. This implies that the first row of each coefficient matrix is all zero, and thus encourages variable selection in $T(X_i)$ correspondingly.

3. Estimation

In this section, we describe our approach to estimating the parameters in the STAR model via a penalized empirical risk minimization that simultaneously maintains the low-rankness and encourages the sparsity of the decomposed components. In particular, we consider

  $\min_{\beta_{1hr}, \ldots, \beta_{mhr}} \underbrace{\frac{1}{n} \sum_{i=1}^n \Big( y_i - \sum_{h=1}^{d_n} \sum_{r=1}^{R} \big\langle \beta_{1hr} \circ \cdots \circ \beta_{mhr}, F_h(X_i) \big\rangle \Big)^2}_{L(\beta_{1hr}, \ldots, \beta_{mhr})} + P(\beta_{1hr}, \ldots, \beta_{mhr})$,  (8)

where $L(\beta_{1hr}, \ldots, \beta_{mhr})$ is the empirical risk function, in which the low-rankness is guaranteed by the CP decomposition, and $P(\cdot)$ is a penalty term that encourages sparsity. To enforce the sparsity defined in (7), we consider the group lasso penalty (Yuan and Lin, 2006), i.e.,

  $P(\beta_{1hr}, \ldots, \beta_{mhr}) = \sum_{k=1}^{m} \lambda_{kn} \sum_{j=1}^{p_k} \sqrt{\sum_{h=1}^{d_n} \sum_{r=1}^{R} \beta^2_{khrj}}$,  (9)

where $\{\lambda_{kn}\}_{k=1}^m$ are tuning parameters. It is worth mentioning that our algorithm and theoretical analysis can accommodate a general class of decomposable penalties (see Condition 12 for details), which includes the lasso, ridge, fused lasso, and group lasso as special cases.

For a general tensor covariate ($m > 1$), the problem in (8) is a non-convex optimization. This is fundamentally different from the vector sparse additive model (Ravikumar et al., 2009; Huang et al., 2010), whose optimization is convex. The non-convexity in (8) brings significant challenges to both model estimation and theoretical development. The key idea of our estimation procedure is to exploit the bi-convex structure of the empirical risk function $L(\beta_{1hr}, \ldots, \beta_{mhr})$: it is convex in one argument when all the other parameters are fixed. This motivates us to rewrite the empirical risk function in a bi-convex representation, which in turn facilitates an efficient alternating minimization algorithm.

Denote $\vartheta_{krj} = (\beta_{k1rj}, \beta_{k2rj}, \ldots, \beta_{kd_nrj})^\top \in \mathbb{R}^{d_n}$, $\vartheta_{kj} = (\vartheta_{k1j}^\top, \ldots, \vartheta_{kRj}^\top)^\top \in \mathbb{R}^{Rd_n}$ for $k \in [m]$, $j \in [p_k]$, and $b_k = (\vartheta_{k1}^\top, \ldots, \vartheta_{kp_k}^\top)^\top$. We also define the operator $\prod_{u \in [m]} a_u = a_1 \circ \cdots \circ a_m$. Recall that $F_h(X) \in \mathbb{R}^{p_1 \times \cdots \times p_m}$ with $[F_h(X)]_{j_1 \ldots j_m} = \psi_{j_1 \ldots j_m h}([X]_{j_1 \ldots j_m})$; see (4). We use $[F^k_h(X_i)]_j$ to denote the $(m-1)$-way tensor obtained by fixing the index along the $k$-th way of $F_h(X_i)$ at $j$; e.g., $[F^1_h(X_i)]_j \in \mathbb{R}^{p_2 \times \cdots \times p_m}$. Define

  $F^k_{irj} = \Big( \big\langle \prod_{u \in [m] \setminus k} \beta_{u1r}, [F^k_1(X_i)]_j \big\rangle, \ldots, \big\langle \prod_{u \in [m] \setminus k} \beta_{ud_nr}, [F^k_{d_n}(X_i)]_j \big\rangle \Big)^\top$,

and denote $F^k_{ij} = (F^{k\top}_{i1j}, \ldots, F^{k\top}_{iRj})^\top \in \mathbb{R}^{Rd_n}$. In addition, we denote $F^k_j = (F^k_{1j}, \ldots, F^k_{nj})^\top$, $F^k = (F^k_1, \ldots, F^k_{p_k})$, and $y = (y_1, \ldots, y_n)^\top$. Thus, when the other parameters are fixed, minimizing the empirical risk function in (8) with respect to $b_k$ is equivalent to minimizing

  $L(b_1, \ldots, b_m) = \frac{1}{n} \| y - F^k b_k \|_2^2$.  (10)

Note that the expression in (10) holds for any $k \in [m]$ with the proper definitions of $F^k$ and $b_k$.
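As a sketch of the mode-$k$ subproblem built on (10): with the other modes fixed, $b_k$ solves a group-lasso regression of $y$ on $F^k$, whose groups are the $p_k$ segments $\vartheta_{kj}$ of length $d_n R$. The paper solves this by group-wise coordinate descent with back-fitting; the snippet below uses a proximal-gradient loop instead, and all names (`update_mode_k`, a precomputed `F_k`) are hypothetical:

```python
# Sketch: min over b of (1/n)||y - F_k b||_2^2 + lam * sum_j ||b_(j)||_2,
# solved by proximal gradient with a groupwise soft-threshold step.
import numpy as np

def group_soft_threshold(v, tau):
    """Prox of tau * ||.||_2: shrink the whole group toward zero."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= tau else (1.0 - tau / nrm) * v

def update_mode_k(F_k, y, lam, group_size, n_iter=500):
    n, q = F_k.shape
    b = np.zeros(q)
    step = n / (2.0 * np.linalg.norm(F_k, 2) ** 2)  # 1/L for the smooth part
    for _ in range(n_iter):
        grad = (2.0 / n) * F_k.T @ (F_k @ b - y)    # gradient of (10)
        z = b - step * grad
        for j in range(0, q, group_size):           # prox, one group at a time
            z[j:j + group_size] = group_soft_threshold(z[j:j + group_size],
                                                       step * lam)
        b = z
    return b
```

Zero groups in the returned $b_k$ correspond to de-selected variables $j \notin S_k$.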
Remark 3: Intuitively, $\vartheta_{kj}$ collects all the colored coefficients in Figure 2, and $b_k$ collects all the coefficients along the $k$-th mode. With this notation, the effect of group sparsity can be described more clearly: when $\vartheta_{kj}$ is a zero vector, the $j$-th variable in the $k$-th mode is irrelevant.

Based on this reformulation, we are ready to introduce the alternating minimization algorithm that solves (8) by alternately updating $b_1, \ldots, b_m$. A desirable property of our algorithm is that updating $b_k$ given the others can be done efficiently via group-wise coordinate descent based on the back-fitting algorithm (Ravikumar et al., 2009). The detailed algorithm is summarized in Algorithm 1. With a slight abuse of notation, we redefine the penalty term $P(b^{(t)}_k) = \sum_{j=1}^{p_k} \sqrt{\sum_{h=1}^{d_n} \sum_{r=1}^{R} (\beta^{(t)}_{khrj})^2}$. In our implementation, we use ridge regression to initialize Algorithm 1, and set the tuning parameters $\lambda = \lambda_{kn}$ for $k = 1, \ldots, m$ for simplicity. When solving problem (11), we use warm starts and an active set to accelerate the algorithm. The basic idea of these two tricks is as follows: for each $t$, we use the solution $b^{(t-1)}_k$ as the initial value when updating $b^{(t)}_k$ (the warm start), and when computing $b^{(t)}_k$, we may restrict attention to an active set of groups on which $b^{(t-1)}_k$ is nonzero. These two tricks have shown great success in implementations of coordinate-descent-type algorithms such as the R package glmnet (Friedman et al., 2010). The overall computational complexity of Algorithm 1 is $O(Tmkn(Rp)^m)$, where $T$ is the number of iterations of the alternating minimization, $m$ is the order of the tensor, $k$ is the number of iterations of the group-wise coordinate descent used to solve problem (11), $n$ is the sample size, $R$ is the tensor rank, and $p$ is the maximum dimension of each tensor mode.

Algorithm 1 Penalized Alternating Minimization for Solving (8)
1: Input: $\{y_i\}_{i=1}^n$, $\{X_i\}_{i=1}^n$, initialization $\{b^{(0)}_1, \ldots, b^{(0)}_m\}$, the set of penalization parameters $\{\lambda_{1n}, \ldots, \lambda_{mn}\}$, rank $R$, iteration $t = 0$, stopping error $\epsilon = 10^{-5}$.
2: Repeat $t = t + 1$ and run penalized alternating minimization:
3: For $k = 1$ to $m$, update $b_k$ with the other components fixed at their most recent values,

  $b^{(t+1)}_k = \arg\min_{b_k}\; L\big(b^{(t+1)}_1, \ldots, b^{(t+1)}_{k-1}, b_k, b^{(t)}_{k+1}, \ldots, b^{(t)}_m\big) + \lambda_{kn} P(b_k)$,  (11)

where $L$ is defined in (10).
4: End for.
5: Until $\max_k \|b^{(t+1)}_k - b^{(t)}_k\|_2 \le \epsilon$, and let $t = T$.
6: Output: the estimate of each component, $\{b^{(T)}_1, \ldots, b^{(T)}_m\}$.

4. Theory

In this section, we first establish a general theory for penalized alternating minimization in the context of the tensor additive model. Several sufficient conditions are proposed to control the optimization error and the statistical error. Then, we apply our theory to the STAR estimator with B-spline basis functions and the group-lasso penalty. To ease the presentation, we consider a three-way tensor covariate (i.e., $m = 3$) in our theoretical development; the generalization to an $m$-way tensor is straightforward.

4.1 A general contraction property

To bound the optimization error and the statistical error of the proposed estimator, we introduce three sufficient conditions: a Lipschitz-gradient condition, a sparse strong convexity condition, and a generic statistical error condition. For brevity, we only present the conditions for the update of $b_1$ in the main paper, and defer the analogous conditions for $b_2, b_3$ to Section 2 of the appendix.

For each vector $x \in \mathbb{R}^{p d_n R}$, we divide it into $p$ equal-length segments as in Figure 3. A segment is colored if it contains at least one non-zero element, and uncolored if all of its elements are zero. We let $w(x)$ be the indices of the colored segments in $x$, and $E_s$ be the set of all $(p d_n R)$-dimensional vectors with at most $C_0 s$ colored segments, for some constant $C_0$. Mathematically,

  $w(x) := \Big\{ j \in [p] : \sum_{h=1}^{d_n R} x^2_{(j-1) d_n R + h} \ne 0 \Big\}$ and $E_s := \big\{ x \in \mathbb{R}^{p d_n R} : |w(x)| \le C_0 s \big\}$.

[Figure 3. An illustration of a group sparse vector. A segment is colored if it contains at least one non-zero element, and uncolored if all of its elements are zero.]

Define the sparse ball $B_{\alpha,s}(b^*) := \{ b \in \mathbb{R}^{p d_n R} : \|b - b^*\|_2 \le \alpha,\ b \in E_s \}$ for a given constant radius $\alpha$.
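A minimal illustration of these definitions (hypothetical helper names; `C0` is just the constant in the definition of $E_s$):

```python
# Sketch: the group support w(x) and membership in E_s, viewing x in
# R^{p*d_n*R} as p consecutive segments of length d_n*R.
import numpy as np

def group_support(x, p, seg_len):
    """w(x): indices of 'colored' (not-all-zero) segments."""
    return [j for j in range(p)
            if np.any(x[j * seg_len:(j + 1) * seg_len] != 0)]

def in_sparse_set(x, p, seg_len, s, C0=2.0):
    """Membership in E_s: at most C0*s colored segments."""
    return len(group_support(x, p, seg_len)) <= C0 * s
```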
A segment is colored if it contains at least one non-zero element, and a segment is uncolored if all of its elements are zero. We let w(x) be the indices of colored segments in x and Es be the set of all (pdn R)-dimensional vectors with less than C0s colored segments, for some constant C0. Mathematically, for a vector x Rpdn R 1, denote w(x) := {j [p]| Pdn R h=1 x2 (j 1)dn R+h = 0} and Es := {x Rpdn R 1||w(x)| C0s}. Figure 3. An illustration of the group sparse vector. A segment is colored if it contains at least one non-zero element, and a segment is uncolored if all of its elements are zero. Define a sparse ball Bα,s(b ) := {b Rpdn R : b b 2 α, b Es} for a given constant radius α. Moreover, the noisy gradient function and noiseless gradient function of empirical Hao, Wang, Wang, Zhang, Yang and Sun risk function L defined in (8) of order-3 with respect to b1 can be written as 1L(b1, b2, b3) = 2 n F 1 F 1b1 y 1 e L(b1, b2, b3) = 2 n F 1 F 1b1 F 1 b 1 , where F 1 irj = β 21r β 31r, [F1 1(Xi)]j , . . . , β 2dnr β 3dnr, [F1 dn(Xi)]j . Condition 4 (Lipschitz-Gradient) For b2 Bα2,s2(b 2), b3 Bα3,s3(b 3), the noiseless gradient function 1 e L(b 1, , b 3) satisfies µ2n-Lipschitz-gradient condition, and 1 e L(b 1, b2, ) satisfies µ3n-Lipschitz-gradient condition with high probability. That is, 1 e L(b 1, b2, b 3) 1 e L(b 1, b 2, b 3), b1 b 1 µ2n b1 b 1 2 b2 b 2 2 1 e L(b 1, b2, b3) 1 e L(b 1, b2, b 3), b1 b 1 µ3n b1 b 1 2 b3 b 3 2, with probability at least 1 δ1 for any 0 < δ1 < 1. Here, µ2n, µ3n may depend on δ1. Remark 5 Condition 4 defines a variant of Lipschitz continuity for 1 e L(b 1, , b 3) and 1 e L(b 1, b2, ). Note that the gradient is always taken with respect to the first argument of L( , , ) and the Lipschitz continuity is with respect to the second or the third argument. Analogous Lipschitz-gradient conditions were also considered in Balakrishnan et al. (2017); Hao et al. (2017) for the population-level Q-function in the EM-type update. Next condition characterizes the curvature of noisy gradient function in a sparse ball. It states that when the second and the third argument are fixed, L( , , ) is strongly convex with parameter γ1n with high probability. As shown later in Section 4.2, this condition holds for a broad family of basis functions. Condition 6 (Sparse-Strong-Convexity) For any b2 Bα2,s2(b 2), b3 Bα3,s3(b 3), the loss function L( , , ) is sparse strongly convex in its first argument, namely L(b 1, b2, b3) L(b1, b2, b3) 1L b 1, b2, b3), b 1 b1 γ1n 2 b1 b 1 2 2, with probability at least 1 δ2 for any 0 < δ2 < 1. Here, γ1n > 0 is the strongly convex parameter and may depend on δ2. Next we present the definition for dual norm, which is a key measure for statistical error condition. More details on the dual norm are referred to Negahban et al. (2012). Definition 7 (Dual norm) For a given inner product , , the dual norm of P is given by P (v) := sup u Rp\{0} As a concrete example, the dual of ℓ1-norm is ℓ -norm while the dual of ℓ2-norm is itself. Suppose v is a p-dimensional vector and the index set {1, 2, . . . , p} is partitioned into NG disjoint groups, namely G = {G1, . . . , GNG}. The group norm for v is defined Sparse Tensor Additive Regression as P(v) = PNG t=1 v Gt 2. According to Definition (7), the dual of P(v) is defined as P (v) = maxt v Gt 2. For simplicity, we write P = P ( ). The generic statistical error (SE) condition guarantees that the distance between noisy gradient and noiseless gradient under P -norm is bounded. 
The generic statistical error (SE) condition guarantees that the distance between the noisy gradient and the noiseless gradient under the $P^*$-norm is bounded.

Condition 8 (Statistical-Error): For any $b_2 \in B_{\alpha_2,s_2}(b^*_2)$, $b_3 \in B_{\alpha_3,s_3}(b^*_3)$, we have, with probability at least $1 - \delta_3$,

  $\big\| \nabla_1 L(b^*_1, b_2, b_3) - \nabla_1 \tilde{L}(b^*_1, b_2, b_3) \big\|_{P^*} \le \varepsilon_1$.

Remark 9: Here, $\varepsilon_1$ is only a generic quantity; its explicit form will be derived for a specific loss function in Section 4.2.

Next, we introduce two conditions, one on the penalization parameter (Condition 10) and one on the penalty (Condition 12). To illustrate Condition 10, we first introduce a quantity called the support space compatibility constant, which measures the intrinsic dimensionality of $S_1$ defined in (7) with respect to the penalty $P$. Specifically, it is defined as

  $\Phi(S_1) := \sup_{b \in S_1 \setminus \{0\}} \frac{P(b)}{\|b\|_2}$,  (13)

which is a variant of the subspace compatibility constant originally proposed by Negahban et al. (2012) and Wainwright (2014). If $P(b)$ is chosen as the group lasso penalty, we have $\Phi(S) = \sqrt{|S|}$, where $S$ is the index set of active groups. Similar definitions of $\Phi(S_2)$, $\Phi(S_3)$ can be made accordingly.

Condition 10 (Penalization Parameter): We consider an iterative tuning procedure in which the tuning parameters in (11) are allowed to change with the iteration. In particular, we assume the tuning parameters $\{\lambda^{(t)}_{1n}, \lambda^{(t)}_{2n}, \lambda^{(t)}_{3n}\}$ satisfy

  $\lambda^{(t)}_{1n} = 4\varepsilon_1 + \big( \mu_{2n} \|b^{(t)}_2 - b^*_2\|_2 + \mu_{3n} \|b^{(t)}_3 - b^*_3\|_2 \big) / \Phi(S_1)$,
  $\lambda^{(t)}_{2n} = 4\varepsilon_2 + \big( \mu'_{1n} \|b^{(t)}_1 - b^*_1\|_2 + \mu'_{3n} \|b^{(t)}_3 - b^*_3\|_2 \big) / \Phi(S_2)$,
  $\lambda^{(t)}_{3n} = 4\varepsilon_3 + \big( \mu''_{1n} \|b^{(t)}_1 - b^*_1\|_2 + \mu''_{2n} \|b^{(t)}_2 - b^*_2\|_2 \big) / \Phi(S_3)$,

where $\{\mu_{2n}, \mu_{3n}\}$, $\{\mu'_{1n}, \mu'_{3n}\}$, $\{\mu''_{1n}, \mu''_{2n}\}$ are the Lipschitz-gradient parameters defined in Condition 4 and Conditions 29-30.

Remark 11: Condition 10 considers an iterative sequence of regularization parameters. Given reasonable initializations of $b^{(t)}_1, b^{(t)}_2, b^{(t)}_3$, their estimation errors gradually decay as the iteration $t$ increases, which implies that $\lambda^{(t)}_{kn}$ is a decreasing sequence. After sufficiently many iterations, the rate of $\lambda^{(t)}_{kn}$ will be bounded by the statistical error $\varepsilon_k$, for $k = 1, 2, 3$. This agrees with the theory of high-dimensional regularized M-estimators, in which a suitable tuning parameter should be proportional to the target estimation error (Wainwright, 2014). Such an iterative tuning procedure plays a critical role in controlling the statistical and optimization errors, and has been commonly used in other high-dimensional non-convex optimization problems (Wang et al., 2014; Yi and Caramanis, 2015).

Finally, we present a general condition on the penalty term.

Condition 12 (Decomposable Penalty): Given a space $S$, a norm-based penalty $P$ is assumed to be decomposable with respect to $S$, in that it satisfies $P(u + v) = P(u) + P(v)$ for any $u \in S$ and $v \in S^{\perp}$, where $S^{\perp}$ is the complement of $S$.

As shown in Negahban and Wainwright (2011), a broad class of penalties satisfies the decomposable property, including the lasso, ridge, fused lasso, and group lasso penalties. The next theorem quantifies the error of the one-step update for the estimator produced by Algorithm 1.

Theorem 13 (Contraction Property): Suppose Conditions 4, 6, 8, 10, and 29-34 hold. Assume the updates at the $t$-th iteration of Algorithm 1, $b^{(t)}_1, b^{(t)}_2, b^{(t)}_3$, fall into the sparse balls $B_{\alpha_1,s_1}(b^*_1)$, $B_{\alpha_2,s_2}(b^*_2)$, $B_{\alpha_3,s_3}(b^*_3)$, respectively, where $\alpha_1, \alpha_2, \alpha_3$ are constants. Define $E^{(t)} = \|b^{(t)}_1 - b^*_1\|_2^2 + \|b^{(t)}_2 - b^*_2\|_2^2 + \|b^{(t)}_3 - b^*_3\|_2^2$. There exists an absolute constant $C_0 > 1$ such that the estimation error of the update at the $(t+1)$-th iteration satisfies

  $E^{(t+1)} \le \rho E^{(t)} + C_0 \Big( \frac{\varepsilon_1^2 \Phi(S_1)^2}{\gamma_{1n}^2} + \frac{\varepsilon_2^2 \Phi(S_2)^2}{\gamma_{2n}^2} + \frac{\varepsilon_3^2 \Phi(S_3)^2}{\gamma_{3n}^2} \Big)$,  (14)

with probability at least $1 - 3(\delta_1 + \delta_2 + \delta_3)$.
Here, $\rho$ is the contraction parameter defined as

  $\rho = C_1 \max\{\mu'^2_{1n}, \mu''^2_{1n}, \mu^2_{2n}, \mu''^2_{2n}, \mu^2_{3n}, \mu'^2_{3n}\} \,/\, \min\{\gamma^2_{1n}, \gamma^2_{2n}, \gamma^2_{3n}\}$,

where $C_1$ is some constant.

Theorem 13 demonstrates how the estimation error improves in the one-step update. When the contraction parameter $\rho$ is strictly less than 1 (we prove in the next section that this holds for a certain class of basis functions and penalties), the first term on the RHS of (14) gradually goes to zero, while the second term remains stable. The contraction parameter $\rho$ is roughly the ratio of the Lipschitz-gradient parameter to the strong convexity parameter. Similar formulas for the contraction parameter appear frequently in the literature on statistical guarantees for low/high-dimensional non-convex optimization (Balakrishnan et al., 2017; Hao et al., 2017).

Remark 14: Yi and Caramanis (2015) provided a similar optimization and statistical guarantee for a regularized EM algorithm based on a mixture model. However, the source of non-convexity in the mixture model is the latent variable, while ours is the bi-convex structure of the low-rank model. Their analysis is therefore not directly applicable to our case, owing to the different verification of the sparse-strong-convexity condition.

4.2 Application to STAR estimator

In this section, we apply the general contraction property in Theorem 13 to the STAR estimator with B-spline basis functions. The formal definition of the B-spline basis function is given in Section 1 of the appendix. To ensure that Conditions 4-6 and 8 are satisfied, our STAR estimator requires conditions on the nonparametric components, the distribution of the tensor covariate, and the noise distribution.

Condition 15 (Function Class): Each nonparametric component in (1) is assumed to belong to the function class $\mathcal{H}$ defined as

  $\mathcal{H} = \big\{ g(\cdot) : |g^{(r)}(s) - g^{(r)}(t)| \le C |s - t|^{\alpha}, \text{ for } s, t \in [a, b] \big\}$,  (15)

where $r$ is the order of the derivative. Let $\kappa = r + \alpha > 0.5$ be the smoothness parameter of the function class $\mathcal{H}$. For $j \in [p_1]$, $k \in [p_2]$, $l \in [p_3]$, there is a constant $c_f > 0$ such that $\min_{jkl} \|f^*_{jkl}\|_2 \ge c_f$ and $E\big(f^*_{jkl}([X]_{jkl})\big) = 0$. Each component of the covariate tensor $X$ has an absolutely continuous distribution, and its density function is bounded away from zero and infinity on $[a, b]$.

Condition 15 is classical for the nonparametric additive model (Stone, 1985; Huang et al., 2010; Fan et al., 2011). Such a condition is required to optimally estimate each individual additive component in the $\ell_2$-norm.

Condition 16 (Sub-Gaussian Noise): The noises $\{\epsilon_i\}_{i=1}^n$ are i.i.d. with mean 0 and bounded variance $\sigma^2$. Moreover, $\epsilon_i/\sigma$ is sub-Gaussian, i.e., there exists a constant $C_\epsilon > 0$ such that $\|\epsilon_i/\sigma\|_{\psi_2} := \sup_{p \ge 1} p^{-1/2} (E|\epsilon_i/\sigma|^p)^{1/p} \le C_\epsilon$, and the noises are independent of the tensor covariates $\{X_i\}_{i=1}^n$.

Condition 17 (Parameter Space): We assume the absolute value of the maximum entry of $(b^{*\top}_1, b^{*\top}_2, b^{*\top}_3)$ is bounded above by some positive constant $c^*$, and the absolute value of the minimum non-zero entry of $(b^{*\top}_1, b^{*\top}_2, b^{*\top}_3)$ is bounded below by some positive constant $c_*$. Here, $c^*, c_*$ do not depend on $n, p$. Moreover, we assume the CP-rank $R$ and the sparsity parameters $s_1, s_2, s_3$ are bounded by constants.

Remark 18: The condition of bounded elements of the tensor coefficient appears widely in the tensor literature (Anandkumar et al., 2014; Sun et al., 2017). Here, the bounded tensor rank condition is imposed purely to simplify the proofs, and it can likely be relaxed to allow a slowly increasing tensor rank (Sun and Li, 2019).
The fixed sparsity assumption is also required in vector nonparametric additive regression (Huang et al., 2010). To relax it, Meier et al. (2009) considered a diverging sparsity scenario but required a compatibility condition that is hard to verify in practice. Thus, in this paper we consider the fixed sparsity case and leave the extension to diverging sparsity as future work.

Since the penalized empirical risk minimization (8) is a highly non-convex optimization, we require some conditions on the initial update in Algorithm 1.

Condition 19 (Initialization): The initializations of $b_1, b_2, b_3$ are assumed to fall into sparse constant balls centered at $b^*_1, b^*_2, b^*_3$, that is, $b^{(0)}_1 \in B_{\alpha_1,s_1}(b^*_1)$, $b^{(0)}_2 \in B_{\alpha_2,s_1}(b^*_2)$, $b^{(0)}_3 \in B_{\alpha_3,s_1}(b^*_3)$, where $\alpha_1, \alpha_2, \alpha_3$ are constants not diverging with $n$ or $p$.

Remark 20: Similar initialization conditions have been widely used in tensor decomposition (Sun et al., 2017; Sun and Li, 2019), tensor regression (Suzuki et al., 2016; Sun and Li, 2017), and other non-convex optimization problems (Wang et al., 2014; Yi and Caramanis, 2015). Once the initial values fall into the sparse ball, the contraction property and the group lasso ensure that the successive updates also fall into a sparse ball. Another line of work designs spectral methods to initialize certain simple non-convex optimizations, such as matrix completion (Ma et al., 2017) and tensor sketching (Hao et al., 2018). The success of spectral methods relies heavily on a simple non-convex geometry and an explicit form of the high-order moment calculation, which is easy to achieve in previous works (Ma et al., 2017; Hao et al., 2018) under a Gaussian error assumption. However, the design of a spectral method for our tensor additive regression is substantially harder, since the high-order moment calculation has no explicit form in our context. We leave the design of a spectral-based initialization as future work.

Finally, we state the main theory on the estimation error of our STAR estimator with B-spline basis functions and a group lasso penalty.

Theorem 21: Suppose Conditions 10, 15-17, and 19 hold, and consider the class of normalized B-spline basis functions defined in (A1) and the group-lasso penalty defined in (9). If one chooses the number of spline series $d_n \asymp n^{\frac{1}{2\kappa+1}}$ and the tuning parameters $\lambda^{(t)}_{1n}, \lambda^{(t)}_{2n}, \lambda^{(t)}_{3n}$ as defined in Condition 10 with the generic parameters specified in Lemmas 26-27, then with probability at least $1 - C_0(t+1)\big(R s n^{-\frac{2\kappa}{2\kappa+1}} + 1/p\big)$, we have

  $E^{(t+1)} \le \underbrace{\rho^{t+1} E^{(0)}}_{\text{optimization error}} + \underbrace{C_1 R^2\, n^{-\frac{2\kappa-1}{2\kappa+1}} \log(p d_n)}_{\text{statistical error}}$,  (16)

where $0 < \rho \le 1/2$ is a contraction parameter and $\kappa$ is the smoothness parameter of the function class $\mathcal{H}$ in (15). Note that $C_1$ may involve a smaller-order term that is asymptotically negligible. Consequently, when the total number of iterations satisfies $T \ge \log\big( C_1 E^{(0)} n^{\frac{2\kappa-1}{2\kappa+1}} \big) / \log(1/\rho)$ and the sample size satisfies $n \ge C_2 (\log p)^{\frac{2\kappa+1}{2\kappa-1}}$ for sufficiently large $C_2$, we have

  $E^{(T)} \le 2 C_1 R^2\, n^{-\frac{2\kappa-1}{2\kappa+1}} \log(p d_n)$,

with probability at least $1 - C_0(T+1)\big(R s n^{-\frac{2\kappa}{2\kappa+1}} + 1/p\big)$.

The non-asymptotic estimation error bound (16) consists of two parts: an optimization error incurred by the non-convexity, and a statistical error incurred by the observation noise and the spline approximation. The optimization error decays geometrically with the iteration number $t$, while the statistical error remains the same as $t$ grows.
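A heuristic, not part of the formal proof, for why $d_n \asymp n^{1/(2\kappa+1)}$ is the right order: for a $\kappa$-smooth component, the squared spline approximation bias scales as $d_n^{-2\kappa}$, while the variance of estimating $d_n$ coefficients scales as $d_n/n$; balancing the two gives

\[
d_n^{-2\kappa} \asymp \frac{d_n}{n}
\;\Longrightarrow\;
d_n \asymp n^{\frac{1}{2\kappa+1}},
\qquad
\frac{d_n^{2}}{n} \asymp n^{-\frac{2\kappa-1}{2\kappa+1}},
\]

so the statistical error in (16) is, up to the $\log(p d_n)$ factor coming from the group-lasso selection, the balanced spline rate inflated by one factor of $d_n$ for the coefficient (rather than function) norm.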
When the tensor covariate is of order one, i.e., a vector covariate, the overall optimization problem reduces to the classical vector nonparametric additive model. In that case, the optimization error $\rho^{t+1} E^{(0)}$ disappears, since the whole optimization is convex, and the statistical error term matches the state-of-the-art rate in Huang et al. (2010).

Lastly, define

  $\hat{T}(X) = \sum_{h=1}^{d_n} \sum_{r=1}^{R} \big\langle \beta^{(T)}_{1hr} \circ \beta^{(T)}_{2hr} \circ \beta^{(T)}_{3hr},\, F_h(X) \big\rangle$

as the final estimator of the target function $T(X)$. For any function $f$ on $[a, b]$, recall that its $\ell_2(P)$ norm is $\|f\|_2 = \sqrt{\int_a^b f^2(x)\, dP(x)}$. The following corollary provides the final error rate for the estimation of the tensor additive nonparametric function $T(X)$. It incorporates the approximation error of the nonparametric components incurred by the B-spline series expansion, and the estimation error of the unknown tensor parameters incurred by the noise.

Corollary 22: Suppose Conditions 10, 15-17, and 19 hold, and the number of iterations $t$ and the sample size $n$ satisfy the requirements of Theorem 21. Assume the non-zero elements of $T(\cdot)$ are the same as those of $\sum_{h=1}^{d_n} \langle B^*_h, F_h(\cdot) \rangle$. Then the final estimator satisfies

  $\|\hat{T} - T\|_2^2 = O_p\big( n^{-\frac{2\kappa}{2\kappa+1}} \log(p d_n) \big)$.

5. Simulations

In this section, we carry out intensive simulation studies to evaluate the performance of our STAR method and compare it with existing competing solutions, including tensor linear regression (TLR) (Zhou et al., 2013), the Gaussian-process-based nonparametric method (GP) (Kanagawa et al., 2016), and nonlinear tensor regression via an alternating minimization procedure (AMP) (Suzuki et al., 2016). We find that STAR enjoys better performance in terms of both prediction accuracy and computational efficiency.

Throughout our numerical studies, natural cubic splines with the B-spline basis are used in STAR, with the degree fixed at five, which amounts to having four inner knots. For both STAR and TLR, five-fold cross-validation is employed to select the best pair of tuning parameters $R$ and $\lambda$, where the tensor rank $R$ is chosen from $\{2, 3\}$ and $\lambda$ is selected from a sequence uniformly distributed on the logarithmic scale over the interval $[10^{-5}, 1]$. For GP and AMP, as suggested by Kanagawa et al. (2016), the Gaussian kernel is used with the bandwidth set to 100; five-fold cross-validation is used to select $\lambda$, chosen from the same range used for TLR and STAR.

5.1 Low-rank covariate structure

The simulated data are generated from the model

  $y_i = T(X_i) + \sigma \epsilon_i, \quad i = 1, \ldots, n$,

where $\epsilon_i \sim N(0, 1)$. For each observation, $y_i \in \mathbb{R}$ is the response and $X_i \in \mathbb{R}^{p_1 \times p_2}$ is the two-way tensor (matrix) covariate. We fix $p_2 = 8$, and vary $n \in \{400, 600\}$, $p_1 \in \{20, 50, 100\}$, and $\sigma \in \{0.1, 1\}$. We assume that there are 10 and 4 important features along the first and second way of $X$, respectively. Since both the GP and AMP models require a low-rank structure on the tensor covariates, in this simulation we consider the low-rank covariate case, which favors their models. For each $i = 1, \ldots, n$, we consider $X_i = x^{(1)}_i \circ x^{(2)}_i$, where the elements of $x^{(1)}_i \in \mathbb{R}^{p_1}$ and $x^{(2)}_i \in \mathbb{R}^{p_2}$ are independently and identically distributed from the uniform distribution. Following the additive model considered in Ravikumar et al. (2009), we generate samples according to
  $y_i = \sum_{j=1}^{10} \sum_{k=1}^{4} T^*_j\big([x^{(1)}_i]_j\big)\, T^*_k\big([x^{(2)}_i]_k\big) + \sigma \epsilon_i, \quad i = 1, \ldots, n$,

where the nonlinear functions $T^*_j$ and $T^*_k$ are given by

  $T^*_j(x) = \begin{cases} \sin(1.5x), & \text{if } j \text{ is odd}, \\ x^3 + 1.5(x - 0.5)^2, & \text{if } j \text{ is even}, \end{cases}$ and $T^*_k(x) = \begin{cases} \phi(x, 0.5, 0.8^2), & \text{if } k \text{ is odd}, \\ \sin\{\exp(-0.5x)\}, & \text{if } k \text{ is even}. \end{cases}$

Here $\phi(\cdot, 0.5, 0.8^2)$ is the probability density function of the normal distribution $N(0.5, 0.8^2)$.

Table 1 compares the mean squared error (MSE) of all four models, where the MSE is assessed on an independently generated test set of 2,000 samples. The STAR model shows the lowest MSE in all cases. As expected, the TLR model has unsatisfactory performance, as the true regression model is non-linear. The two nonparametric tensor regression models, GP and AMP, can capture partial nonlinear structures; however, our method still outperforms them.

Table 1. MSE of simulated data with low-rank covariate structure.

| (n, σ) | model | p1 = 20 | p1 = 50 | p1 = 100 |
|---|---|---|---|---|
| (400, 0.1) | STAR | 0.51 (0.01) | 0.53 (0.01) | 0.50 (0.01) |
| | TLR | 2.03 (0.17) | 2.63 (0.17) | 3.16 (0.48) |
| | AMP | 1.02 (0.02) | 1.01 (0.02) | 1.01 (0.02) |
| | GP | 1.02 (0.01) | 1.03 (0.03) | 1.02 (0.01) |
| (600, 0.1) | STAR | 0.50 (0.01) | 0.50 (0.01) | 0.52 (0.01) |
| | TLR | 2.11 (0.09) | 2.81 (0.14) | 3.19 (0.21) |
| | AMP | 0.99 (0.02) | 1.01 (0.00) | 1.00 (0.02) |
| | GP | 0.99 (0.02) | 1.01 (0.01) | 1.02 (0.01) |
| (400, 1) | STAR | 1.54 (0.02) | 1.59 (0.03) | 1.56 (0.01) |
| | TLR | 2.29 (0.17) | 3.18 (0.40) | 4.91 (0.56) |
| | AMP | 1.98 (0.02) | 2.01 (0.01) | 2.06 (0.03) |
| | GP | 2.05 (0.04) | 2.04 (0.03) | 2.07 (0.03) |
| (600, 1) | STAR | 1.55 (0.01) | 1.53 (0.01) | 1.54 (0.01) |
| | TLR | 2.89 (0.39) | 3.90 (0.36) | 4.69 (0.46) |
| | AMP | 2.02 (0.02) | 2.03 (0.03) | 2.04 (0.03) |
| | GP | 2.02 (0.01) | 2.04 (0.04) | 2.06 (0.03) |

* The simulation compares sparse tensor additive regression (STAR), tensor linear regression (TLR), the alternating minimization procedure (AMP), and the Gaussian process (GP). The reported errors are medians over 20 independent runs, with standard errors given in parentheses.

Next, we investigate the computational costs of all four methods. Table 2 compares the computation time in the example with $n = 400$ and $\sigma = 0.1$; the results of the other scenarios are similar and hence omitted. All reported times include model fitting and model tuning using five-fold cross-validation. Overall, our STAR method is as fast as AMP and much faster than GP. When the tensor dimension is small ($p_1 = 20$), the linear model TLR is the fastest, but its computational cost increases dramatically as the dimension $p_1$ increases, and it is even slower than the other nonparametric models when $p_1 = 100$.

Table 2. The computation time of all methods in Section 5.1 with $n = 400$ and $\sigma = 0.1$.

| model | p1 = 20 | p1 = 50 | p1 = 100 |
|---|---|---|---|
| STAR | 353.30 | 227.65 | 391.90 |
| TLR | 156.19 | 738.96 | 1803.49 |
| AMP | 330.74 | 336.10 | 341.15 |
| GP | 1841.51 | 1913.27 | 1792.50 |

* All times include the five-fold cross-validation procedure used to tune the parameters. The results are averaged over 20 independent runs. The experiment was conducted on a single Intel(R) Xeon(R) CPU E5-2600 @ 2.60GHz processor.
On the other hand, the computation time of our model is less sensitive to the dimensionality, and is even faster than TLR when $p_1$ is large. This indicates the importance of fully exploiting the low-rankness and sparsity structures in order to improve computational efficiency.

5.2 General covariate structure

The settings are similar to those in Section 5.1, except that the covariate is not low-rank. Here, the elements of $X_i \in \mathbb{R}^{p_1 \times p_2}$ are independently and identically distributed from the uniform distribution. In particular, we generate $n$ observations according to

  $y_i = \sum_{j=1}^{10} \sum_{k=1}^{4} T^*_{jk}\big([X_i]_{jk}\big) + \sigma \epsilon_i, \quad i = 1, \ldots, n$,

  $T^*_{jk}(x) = \begin{cases} \sin(1.5x), & \text{if } j \text{ is odd and } k \text{ is odd}, \\ x^3 + 1.5(x - 0.5)^2, & \text{if } j \text{ is even and } k \text{ is odd}, \\ \phi(x, 0.5, 0.8^2), & \text{if } j \text{ is odd and } k \text{ is even}, \\ \sin\{\exp(-0.5x)\}, & \text{if } j \text{ is even and } k \text{ is even}. \end{cases}$

To run the GP and AMP models, we perform a singular value decomposition in order to meet their requirement of low-rank tensor inputs. The comparisons of the MSE are summarized in Table 3: the MSE of the STAR model is much lower than that of all three other models. Similar to the experiment in Section 5.1, the large MSE of TLR is attributed to its incapability of capturing the nonlinear relationship in the additive model. In this example, the GP and AMP models deliver relatively large MSEs because the assumption of a low-rank covariate structure is violated. Importantly, the MSE of our STAR model decreases as the sample size increases or the noise level $\sigma$ decreases. These observations align with our theoretical findings in Theorem 21.

Table 3. MSE of simulated data with general covariate structure.

| (n, σ) | model | p1 = 20 | p1 = 50 | p1 = 100 |
|---|---|---|---|---|
| (400, 0.1) | STAR | 2.30 (0.06) | 3.34 (0.22) | 5.04 (0.33) |
| | TLR | 40.62 (0.53) | 53.94 (0.71) | 92.31 (1.60) |
| | AMP | 41.09 (0.31) | 40.78 (0.35) | 40.97 (0.30) |
| | GP | 41.62 (0.24) | 41.35 (0.38) | 41.18 (0.22) |
| (600, 0.1) | STAR | 1.76 (0.04) | 2.14 (0.11) | 2.53 (0.12) |
| | TLR | 37.12 (0.63) | 43.74 (0.85) | 58.75 (0.88) |
| | AMP | 40.39 (0.30) | 40.35 (0.35) | 41.00 (0.37) |
| | GP | 40.65 (0.66) | 40.57 (0.64) | 41.28 (0.44) |
| (400, 1) | STAR | 3.98 (0.11) | 5.28 (0.26) | 7.17 (0.50) |
| | TLR | 42.12 (0.51) | 56.47 (0.78) | 94.09 (2.38) |
| | AMP | 42.00 (0.38) | 42.40 (0.47) | 41.51 (0.28) |
| | GP | 42.56 (0.36) | 42.19 (0.37) | 42.08 (0.30) |
| (600, 1) | STAR | 3.13 (0.05) | 3.63 (0.13) | 4.10 (0.11) |
| | TLR | 38.06 (0.59) | 45.43 (0.65) | 60.20 (1.03) |
| | AMP | 41.44 (0.54) | 41.99 (0.45) | 41.76 (0.40) |
| | GP | 41.31 (0.50) | 41.87 (0.44) | 42.49 (0.53) |

* The simulation compares sparse tensor additive regression (STAR), tensor linear regression (TLR), the alternating minimization procedure (AMP), and the Gaussian process (GP). The reported errors are medians over 20 independent runs, with standard errors given in parentheses.

To test the adaptability of STAR, we further consider a linear data-generating model:

  $y_i = \sum_{j=1}^{10} \sum_{k=1}^{4} T^*_{jk}\big([X_i]_{jk}\big) + \sigma \epsilon_i, \quad i = 1, \ldots, n, \qquad T^*_{jk}(x) = \begin{cases} 0.5x, & \text{if } j \text{ is odd}, \\ x, & \text{if } j \text{ is even}. \end{cases}$

We use $n = 400$ and $\sigma = 1$ for illustration and summarize the results in Table 4. We observe that TLR delivers the lowest MSE, with STAR the second best, when $p_1 = 20$ and 50, while the MSE of STAR is the lowest when $p_1 = 100$. Thus the STAR model is competitive with TLR in general when the data-generating model is truly linear.

Table 4. MSE of simulated data from linear models.

| model | p1 = 20 | p1 = 50 | p1 = 100 |
|---|---|---|---|
| STAR | 1.40 (0.03) | 1.62 (0.13) | 1.85 (0.06) |
| TLR | 1.19 (0.01) | 1.52 (0.02) | 2.42 (0.04) |
| AMP | 2.07 (0.02) | 2.08 (0.02) | 2.09 (0.03) |
| GP | 2.10 (0.01) | 2.10 (0.01) | 2.10 (0.01) |

* The simulation compares sparse tensor additive regression (STAR), tensor linear regression (TLR), the alternating minimization procedure (AMP), and the Gaussian process (GP). The reported errors are medians over 20 independent runs, with standard errors given in parentheses.

5.3 Three-way covariate structure

We next extend the previous simulations to a three-way covariate structure, and consider two cases in this section. In the first case, we generate the tensor covariate $X_i \in \mathbb{R}^{p_1 \times p_2 \times 2}$
whose elements are drawn i.i.d. from the uniform distribution, and then generate the response $y_i \in \mathbb{R}$ from

  case 1: $y_i = \sum_{j=1}^{10} \sum_{k=1}^{4} \Big[ \sin\big(T^*_{jk1}([X_i]_{jk1})\big) + \log\big|T^*_{jk2}([X_i]_{jk2})\big| \Big] + \sigma \epsilon_i, \quad i = 1, \ldots, n$,

where $T^*_{jkl}$ with $l = 1, 2$ is defined in the same way as $T^*_{jk}$ in Section 5.2. In the second case, we generate the response from

  case 2: $y_i = \sin\Big( \sum_{j=1}^{10} \sum_{k=1}^{4} T^*_{jk}([X_i]_{jk1}) \Big) \Big( \sum_{j=1}^{10} \sum_{k=1}^{4} T^*_{jk}([X_i]_{jk2}) \Big) + \sigma \epsilon_i, \quad i = 1, \ldots, n$.

It is worth noting that case 1 uses an additive model while case 2 does not; the additive model assumption of our STAR method is therefore mis-specified in case 2. Since software for the AMP and GP models with three-way tensor covariates is not available, in this simulation we compare our STAR model only with TLR. We vary the sample size $n \in \{600, 1000\}$, the first-way dimension $p_1 \in \{20, 50, 200\}$, and the noise level $\sigma \in \{0.1, 1\}$, and fix the second-way dimension $p_2 = 10$. Similar to the previous simulations, we assume that there are 10, 4, and 2 important features along the three modes of the tensor $X_i$, respectively. As shown in Table 5, the MSE of the STAR model is consistently lower than that of TLR, even in the case where the additive model is mis-specified.

Table 5. MSE of simulated data with three-way tensor covariates.

| case | (n, σ) | model | p1 = 20 | p1 = 50 | p1 = 200 |
|---|---|---|---|---|---|
| Case 1 | (600, 0.1) | STAR | 0.34 (0.03) | 1.33 (0.23) | 1.10 (0.03) |
| | | TLR | 10.69 (0.06) | 10.86 (0.14) | 13.47 (0.25) |
| | (600, 1) | STAR | 1.45 (0.02) | 1.71 (0.20) | 1.99 (0.01) |
| | | TLR | 11.93 (0.09) | 12.45 (0.14) | 18.14 (0.95) |
| | (1000, 0.1) | STAR | 0.37 (0.02) | 0.43 (0.03) | 1.24 (0.25) |
| | | TLR | 10.82 (0.10) | 10.77 (0.12) | 11.68 (0.11) |
| | (1000, 1) | STAR | 1.40 (0.03) | 1.47 (0.02) | 2.02 (0.03) |
| | | TLR | 11.90 (0.06) | 11.98 (0.15) | 13.58 (0.19) |
| Case 2 | (600, 0.1) | STAR | 0.78 (0.01) | 2.07 (1.10) | 1.01 (0.00) |
| | | TLR | 13.11 (0.11) | 13.11 (0.17) | 16.62 (0.33) |
| | (600, 1) | STAR | 2.07 (0.08) | 2.01 (0.02) | 1.99 (0.02) |
| | | TLR | 14.45 (0.17) | 14.33 (0.14) | 20.73 (0.66) |
| | (1000, 0.1) | STAR | 0.75 (0.00) | 0.78 (0.05) | 1.35 (0.17) |
| | | TLR | 13.15 (0.22) | 12.98 (0.17) | 13.52 (0.23) |
| | (1000, 1) | STAR | 1.76 (0.02) | 2.13 (0.12) | 1.99 (0.02) |
| | | TLR | 14.15 (0.29) | 14.05 (0.19) | 15.57 (0.27) |

* The simulation compares sparse tensor additive regression (STAR) and tensor linear regression (TLR). The reported errors are medians over 20 independent runs, with the standard errors of the medians given in parentheses. All methods use five-fold cross-validation to tune the parameters.

6. An Application to Online Advertising

In this section, we apply the STAR model to click-through rate (CTR) prediction in online advertising. The CTR is defined as the ratio between the number of clicks and the number of impressions (ad views). In this study, we are interested in predicting the overall CTR, which is the average CTR across different ad campaigns. The overall CTR is an effective measure of the performance of online advertising: a low overall CTR usually indicates that the ads are not effectively displayed or that the wrong audience is being targeted. As a reference, the across-industry overall CTR of display campaigns in the United States from April 2016 to April 2017 is 0.08%.[1] Importantly, the CTR is also closely related to revenue. Define the effective revenue per mille (eRPM) as the amount of revenue from every 1000 impressions; then

  $\mathrm{eRPM} = 1000 \times \mathrm{CPC} \times \mathrm{CTR}$,

where CPC is the cost per click. From this expression, we can see that a good CTR prediction is critical to ad pricing, and the CTR prediction is a highly important task in online advertising.
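As a numeric illustration with hypothetical figures (a CPC of \$0.50 together with the benchmark CTR of 0.08% quoted above):

\[
\mathrm{eRPM} = 1000 \times \mathrm{CPC} \times \mathrm{CTR} = 1000 \times 0.50 \times 0.0008 = \$0.40
\]

per thousand impressions, so even small relative errors in the predicted CTR translate directly into mispriced inventory.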
We collect data on 136 ad campaigns over 28 days from a premium Internet media company.[2] The data from each day are aggregated into eight time periods, and each of the 136 campaigns involves ads delivered via three devices: phone, tablet, and personal computer (PC). In total, we have 224 = 28 × 8 time periods. There are 153 million users in total, and we divide all the users into two groups, a younger group and an elder group, partitioned by the median age. For each time period, we aggregate the number of impressions of the 136 advertising campaigns delivered on each of the three types of devices for each of the two age groups. Denoting the impression tensor by $X$, each data point has $X_i \in \mathbb{R}^{136 \times 3 \times 2}$, where $i = 1, 2, \ldots, 224$ indexes the time period. In this study, we aim to study the relationship between the overall CTR and the three-way tensor covariate of impressions.

Figure 1 delineates the marginal relationship between the overall CTR and the impressions of one advertisement delivered on phone, tablet, and PC, respectively. In this example, the overall CTR clearly reveals a non-linear pattern across all devices. Moreover, it is generally believed that not all ads have significant impacts on the overall CTR, and hence ad selection is an active research area (Choi et al., 2010; Xu et al., 2016). To fulfill both tasks of capturing the nonlinear relationship and selecting important ads, we apply the proposed STAR model to predict the overall CTR. Logarithm transformations are applied to both the CTR and the number of impressions. We train and tune each method on the data from the first 24 days, and use the remaining data as test data to assess the prediction accuracy. The MSE of our STAR model is 0.51, which is much lower than 5.44, the MSE of TLR. This result shows the effectiveness of capturing the non-linear relationship, together with the low-rankness and group sparsity structures, in increasing both the CTR prediction accuracy and the algorithmic efficiency. The AMP and GP models are not compared, owing to the lack of implementations for three-way covariates.

In terms of ad selection, the STAR model with the group lasso penalty selects 60 out of 136 ads, as well as all three devices and both age groups, as active variables for the CTR prediction. As a comparison, TLR selects 114 ads, 46 of which are also selected by our STAR method.

[1] The data are from http://www.richmediagallery.com/learn/benchmarks.
[2] The reported data and results in this section are deliberately incomplete and subject to anonymization, and thus do not necessarily reflect the real portfolio at any particular time.

[Figure 4: two heatmap panels over ad/device combinations, "Heat Map: One Impression on Ad/Device Combination for Younger Group" (left) and "Heat Map: One Impression on Ad/Device Combination for Elder Group" (right).] Figure 4. Heatmaps for the overall CTR. The left panel shows the mean change in the overall CTR if the test data have one additional impression on each ad and device combination for the younger group; the right panel shows the same for the elder group.
Darker tiles indicate a greater positive mean change in the overall CTR, and lighter tiles indicate a greater negative mean change. The IDs of the ads have been renumbered for confidentiality.

Besides the prediction of the overall CTR and the ad selection performance, we are also able to see which combination of ad, device, and age group yields the most significant impact on the overall CTR. In the left panel of Figure 4, each tile represents a combination of ad and device for the younger group, and the darkness of the tile indicates the sensitivity of the overall CTR to one more impression on this combination; the right panel of Figure 4 shows the heatmap for the elder group. Displayed on the phones of the elder users, the ad with ID 98 has the most positive effect on the overall CTR. Figure 5 is plotted similarly, except that the change is due to every 1000 additional impressions on a given combination. The overall CTR has the largest growth when 1000 additional impressions are allocated to the ad with ID 73 displayed on the phones of the younger users. The different patterns between Figures 4 and 5 indicate the nonlinear relationship between the overall CTR and the number of impressions. This result is important for managerial decision making: under a specific budget, our STAR model facilitates ad placement targeting based on the best ad/device/age combination to maximize the ad revenue.

[Figure 5: two heatmap panels over ad/device combinations, "Heat Map: 1000 Impressions on Ad/Device Combination for Younger Group" (left) and "Heat Map: 1000 Impressions on Ad/Device Combination for Elder Group" (right).] Figure 5. Heatmaps for the overall CTR. The left panel shows the mean change in the overall CTR if the test data have 1000 additional impressions on each ad and device combination for the younger group; the right panel shows the same for the elder group. Darker tiles indicate a greater positive mean change in the overall CTR, and lighter tiles indicate a greater negative mean change. The IDs of the ads have been renumbered for confidentiality.

Appendix

In this appendix, we present the detailed proofs of Theorems 13 and 21 and Corollary 22.

A1 Proof of Theorem 13

First, we state a key lemma that quantifies the estimation error of each individual component within one iteration step.

Lemma 23: Suppose Conditions 4-6 and 8 hold, and the updates at time $t$ satisfy $b^{(t)}_2 \in B_{\alpha_2,s_2}(b^*_2)$, $b^{(t)}_3 \in B_{\alpha_3,s_3}(b^*_3)$. Let the penalty $P$ fulfill the decomposable property (see Condition 12 for details), and let the regularization parameter satisfy

  $\lambda^{(t)}_{1n} \ge 4\varepsilon_1 + \big( \mu_{2n} \|b^{(t)}_2 - b^*_2\|_2 + \mu_{3n} \|b^{(t)}_3 - b^*_3\|_2 \big) / \Phi(S_1)$,

where $\mu_{2n}, \mu_{3n}$ and $\varepsilon_1$ are defined in Conditions 4 and 8, respectively. Then the update of $b_1$ at time $t+1$ satisfies

  $\|b^{(t+1)}_1 - b^*_1\|_2 \le 4 \lambda^{(t)}_{1n} \Phi(S_1) / \gamma_{1n}$,  (A17)

with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$, where $\gamma_{1n}$ is defined in Condition 6 and $\Phi(S_1)$ is the support space compatibility constant defined in (13).

Proof. For notational simplicity, in the rest of the proof of Lemma 23 we drop the superscript of $b^{(t)}_1, b^{(t)}_2, b^{(t)}_3, \lambda^{(t)}_{1n}$ and replace the superscript of $b^{(t+1)}_1, b^{(t+1)}_2, b^{(t+1)}_3$ by writing $b^+_1, b^+_2, b^+_3$.
First of all, the loss function (10) enjoys a bi-convex structure, in the sense that $L(b_1, b_2, b_3)$ is convex in one argument when the other two are fixed. Given the current updates $b_2, b_3$, the penalized alternating minimization with respect to $b_1$ takes the form
$$b_1^+ = \operatorname*{argmin}_{b_1}\, L(b_1, b_2, b_3) + \lambda_{1n}\mathcal{P}(b_1).$$
As $b_1^+$ minimizes the penalized loss, we have
$$L(b_1^+, b_2, b_3) + \lambda_{1n}\mathcal{P}(b_1^+) \le L(b_1^*, b_2, b_3) + \lambda_{1n}\mathcal{P}(b_1^*), \qquad \text{(A18)}$$
which, by the convexity of $L(\cdot, b_2, b_3)$, further implies
$$\lambda_{1n}\big(\mathcal{P}(b_1^+) - \mathcal{P}(b_1^*)\big) \le L(b_1^*, b_2, b_3) - L(b_1^+, b_2, b_3) \le \big|\langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1^+ - b_1^* \rangle\big| =: \mathrm{RHS}. \qquad \text{(A19)}$$
Recall that $\nabla_1 L$ is the noisy gradient function with respect to $b_1$ defined in (12). To separate the statistical error from the optimization error, we use the noiseless gradient function $\nabla_1\widetilde{L}(b_1^*, b_2, b_3)$, also defined in (12), as a bridge. The detailed decomposition is
$$\mathrm{RHS} \le \underbrace{\big|\langle \nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde{L}(b_1^*, b_2, b_3),\, b_1^+ - b_1^* \rangle\big|}_{\text{statistical error}} + \underbrace{\big|\langle \nabla_1\widetilde{L}(b_1^*, b_2, b_3) - \nabla_1\widetilde{L}(b_1^*, b_2^*, b_3^*),\, b_1^+ - b_1^* \rangle\big|}_{\text{optimization error}},$$
where $\nabla_1\widetilde{L}(b_1^*, b_2^*, b_3^*) = 0$. Moreover, based on the decomposability of the penalty $\mathcal{P}$ (see Condition 12),
$$\mathrm{RHS} \le \big\|\nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde{L}(b_1^*, b_2, b_3)\big\|_{\mathcal{P}^*}\, \mathcal{P}(b_1^+ - b_1^*) + \big\langle \nabla_1\widetilde{L}(b_1^*, b_2, b_3) - \nabla_1\widetilde{L}(b_1^*, b_2, b_3^*),\, b_1^+ - b_1^* \big\rangle + \big\langle \nabla_1\widetilde{L}(b_1^*, b_2, b_3^*) - \nabla_1\widetilde{L}(b_1^*, b_2^*, b_3^*),\, b_1^+ - b_1^* \big\rangle,$$
where $\mathcal{P}^*$ is the dual norm of $\mathcal{P}$ and we write $\mathcal{P}(b_1^+ - b_1^*) = \|b_1^+ - b_1^*\|_{\mathcal{P}}$. In addition, putting (A19) and Conditions 4 and 8 together, we have
$$\big|\langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1^+ - b_1^* \rangle\big| \le \varepsilon_1 \mathcal{P}(b_1^+ - b_1^*) + \big(\mu_{2n}\|b_2 - b_2^*\|_2 + \mu_{3n}\|b_3 - b_3^*\|_2\big)\|b_1^+ - b_1^*\|_2, \qquad \text{(A20)}$$
with probability at least $1 - (\delta_1 + \delta_3)$. Together with (A18),
$$\lambda_{1n}\big(\mathcal{P}(b_1^+) - \mathcal{P}(b_1^*)\big) \le \varepsilon_1 \mathcal{P}(b_1^+ - b_1^*) + \big(\mu_{2n}\|b_2 - b_2^*\|_2 + \mu_{3n}\|b_3 - b_3^*\|_2\big)\|b_1^+ - b_1^*\|_2.$$
Since $\lambda_{1n} \ge 4\varepsilon_1 + \big(\mu_{2n}\|b_2 - b_2^*\|_2 + \mu_{3n}\|b_3 - b_3^*\|_2\big)/\Phi(S_1)$, we have
$$\mathcal{P}(b_1^+) - \mathcal{P}(b_1^*) \le \tfrac{1}{4}\mathcal{P}(b_1^+ - b_1^*) + \Phi(S_1)\,\|b_1^+ - b_1^*\|_2. \qquad \text{(A21)}$$
Again using the decomposability of $\mathcal{P}$, the left-hand side of (A21) can be bounded from below by
$$\mathcal{P}(b_1^+) - \mathcal{P}(b_1^*) = \mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp} + b_1^* + (b_1^+ - b_1^*)_{S_1}\big) - \mathcal{P}(b_1^*) \ge \mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp}\big) + \mathcal{P}\big(b_1^* + (b_1^+ - b_1^*)_{S_1}\big) - \mathcal{P}(b_1^*) \ge \mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp}\big) - \mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big), \qquad \text{(A22)}$$
where $S_1^\perp$ is the complement of $S_1$. Combining (A21) and (A22) yields
$$3\,\mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp}\big) \le 5\,\mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big) + 4\,\Phi(S_1)\,\|b_1^+ - b_1^*\|_2. \qquad \text{(A23)}$$
By the definition of the support-space compatibility constant (13),
$$\mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big) \le \Phi(S_1)\,\big\|(b_1^+ - b_1^*)_{S_1}\big\|_2 \le \Phi(S_1)\,\|b_1^+ - b_1^*\|_2.$$
Together with $\mathcal{P}(b_1^+ - b_1^*) \le \mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big) + \mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp}\big)$ and (A23), we obtain
$$\mathcal{P}(b_1^+ - b_1^*) \le 4\,\Phi(S_1)\,\|b_1^+ - b_1^*\|_2. \qquad \text{(A24)}$$
On the other hand, by the sparse strong convexity Condition 6,
$$L(b_1^+, b_2, b_3) - L(b_1^*, b_2, b_3) - \langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1^+ - b_1^* \rangle \ge \frac{\gamma_{1n}}{2}\|b_1^+ - b_1^*\|_2^2,$$
with probability at least $1 - \delta_2$. Plugging in (A20), we obtain, with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$,
$$\frac{\gamma_{1n}}{2}\|b_1^+ - b_1^*\|_2^2 \le \langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1^* - b_1^+ \rangle + L(b_1^+, b_2, b_3) - L(b_1^*, b_2, b_3) \le \varepsilon_1\mathcal{P}(b_1^+ - b_1^*) + \big(\mu_{2n}\|b_2 - b_2^*\|_2 + \mu_{3n}\|b_3 - b_3^*\|_2\big)\|b_1^+ - b_1^*\|_2 + \lambda_{1n}\big(\mathcal{P}(b_1^*) - \mathcal{P}(b_1^+)\big). \qquad \text{(A25)}$$
From (A22),
$$\lambda_{1n}\big(\mathcal{P}(b_1^*) - \mathcal{P}(b_1^+)\big) \le \lambda_{1n}\Big(\mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big) - \mathcal{P}\big((b_1^+ - b_1^*)_{S_1^\perp}\big)\Big) \le \lambda_{1n}\,\mathcal{P}\big((b_1^+ - b_1^*)_{S_1}\big).$$
Together with (A24) and (A25),
$$\frac{\gamma_{1n}}{2}\|b_1^+ - b_1^*\|_2^2 \le \lambda_{1n}\Phi(S_1)\|b_1^+ - b_1^*\|_2 + 4\varepsilon_1\Phi(S_1)\|b_1^+ - b_1^*\|_2 + \big(\mu_{2n}\|b_2 - b_2^*\|_2 + \mu_{3n}\|b_3 - b_3^*\|_2\big)\|b_1^+ - b_1^*\|_2.$$
Dividing both sides by $\|b_1^+ - b_1^*\|_2$ and plugging in the lower bound on $\lambda_{1n}$, we get $\|b_1^+ - b_1^*\|_2 \le 4\lambda_{1n}\Phi(S_1)/\gamma_{1n}$ with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$. This completes the proof.

Note that (A17) is a generic result, since we have not yet specified the forms of the parameters involved. Similar results hold for the updates of $b_2^{(t)}, b_3^{(t)}$ (see the next corollary), and the detailed proofs are omitted here.
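The block update $b_1^+ = \operatorname{argmin}_{b_1} L(b_1, b_2, b_3) + \lambda_{1n}\mathcal{P}(b_1)$ at the heart of this proof is, for the squared loss in (10) and the group lasso penalty in (9), a standard group-lasso regression that can be solved, for example, by proximal gradient descent. The following is a minimal sketch under those assumptions; `update_b1`, `group_soft_threshold`, and the group layout are our own illustrative names, not the paper's implementation.

```python
import numpy as np

def group_soft_threshold(v, groups, tau):
    """Proximal operator of the group lasso penalty tau * sum_g ||v_g||_2."""
    out = v.copy()
    for g in groups:                       # each g is an index array for one group
        norm_g = np.linalg.norm(v[g])
        out[g] = 0.0 if norm_g <= tau else (1.0 - tau / norm_g) * v[g]
    return out

def update_b1(F1, y, b1, lam, groups, n_steps=200):
    """One penalized alternating-minimization block: minimize
    (1/n) * ||y - F1 @ b1||^2 + lam * group-lasso(b1) by proximal gradient.
    F1 is the design built from the current b2, b3; groups partitions the
    coordinates of b1 (e.g., one group of size R * dn per covariate index j)."""
    n = len(y)
    eta = 1.0 / (2.0 * np.linalg.norm(F1, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_steps):
        grad = (2.0 / n) * F1.T @ (F1 @ b1 - y)         # gradient of smooth part
        b1 = group_soft_threshold(b1 - eta * grad, groups, eta * lam)
    return b1
```

In the full algorithm, analogous updates for $b_2$ and $b_3$ are interleaved, with the design matrices $F^2$ and $F^3$ rebuilt from the current values of the other two blocks.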
Corollary 24 Suppose Conditions 29-34 hold, and the updates at time $t$ satisfy $b_1^{(t)} \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_2^{(t)} \in \mathcal{B}_{\alpha_2,s_2}(b_2^*)$, and $b_3^{(t)} \in \mathcal{B}_{\alpha_3,s_3}(b_3^*)$. Let the regularization parameters $\lambda_{2n}^{(t)}, \lambda_{3n}^{(t)}$ satisfy
$$\lambda_{2n}^{(t)} \ge 4\varepsilon_2 + \big(\mu'_{1n}\|b_1^{(t)} - b_1^*\|_2 + \mu'_{3n}\|b_3^{(t)} - b_3^*\|_2\big)/\Phi(S_2),$$
$$\lambda_{3n}^{(t)} \ge 4\varepsilon_3 + \big(\mu''_{1n}\|b_1^{(t)} - b_1^*\|_2 + \mu''_{2n}\|b_2^{(t)} - b_2^*\|_2\big)/\Phi(S_3),$$
and let the penalty $\mathcal{P}$ fulfill the decomposable property. Then the updates of $b_2, b_3$ at time $t+1$ satisfy
$$\|b_2^{(t+1)} - b_2^*\|_2 \le 4\lambda_{2n}^{(t)}\Phi(S_2)/\gamma_{2n}, \qquad \|b_3^{(t+1)} - b_3^*\|_2 \le 4\lambda_{3n}^{(t)}\Phi(S_3)/\gamma_{3n},$$
with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$.

Now we are ready to prove the main theorem. Applying the result of Lemma 23 and plugging in the lower bound of $\lambda_{1n}^{(t)}$, we have
$$\|b_1^{(t+1)} - b_1^*\|_2 \le \frac{4\mu_{2n}}{\gamma_{1n}}\|b_2^{(t)} - b_2^*\|_2 + \frac{4\mu_{3n}}{\gamma_{1n}}\|b_3^{(t)} - b_3^*\|_2 + \frac{16\varepsilon_1\Phi(S_1)}{\gamma_{1n}}.$$
Squaring both sides and noting that $(a+b+c)^2 \le 3(a^2+b^2+c^2)$,
$$\|b_1^{(t+1)} - b_1^*\|_2^2 \le 3\Big(\frac{4\mu_{2n}}{\gamma_{1n}}\Big)^2\|b_2^{(t)} - b_2^*\|_2^2 + 3\Big(\frac{4\mu_{3n}}{\gamma_{1n}}\Big)^2\|b_3^{(t)} - b_3^*\|_2^2 + 3\Big(\frac{16\varepsilon_1\Phi(S_1)}{\gamma_{1n}}\Big)^2,$$
with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$. Similarly, applying Corollary 24, we have
$$\|b_2^{(t+1)} - b_2^*\|_2^2 \le 3\Big(\frac{4\mu'_{1n}}{\gamma_{2n}}\Big)^2\|b_1^{(t)} - b_1^*\|_2^2 + 3\Big(\frac{4\mu'_{3n}}{\gamma_{2n}}\Big)^2\|b_3^{(t)} - b_3^*\|_2^2 + 3\Big(\frac{16\varepsilon_2\Phi(S_2)}{\gamma_{2n}}\Big)^2,$$
$$\|b_3^{(t+1)} - b_3^*\|_2^2 \le 3\Big(\frac{4\mu''_{1n}}{\gamma_{3n}}\Big)^2\|b_1^{(t)} - b_1^*\|_2^2 + 3\Big(\frac{4\mu''_{2n}}{\gamma_{3n}}\Big)^2\|b_2^{(t)} - b_2^*\|_2^2 + 3\Big(\frac{16\varepsilon_3\Phi(S_3)}{\gamma_{3n}}\Big)^2,$$
with probability at least $1 - (\delta_1 + \delta_2 + \delta_3)$. Denote $E^{(t+1)} = \|b_1^{(t+1)} - b_1^*\|_2^2 + \|b_2^{(t+1)} - b_2^*\|_2^2 + \|b_3^{(t+1)} - b_3^*\|_2^2$. Adding the three bounds above gives
$$E^{(t+1)} \le 48\Big[\Big(\frac{\mu'^2_{1n}}{\gamma_{2n}^2} + \frac{\mu''^2_{1n}}{\gamma_{3n}^2}\Big)\|b_1^{(t)} - b_1^*\|_2^2 + \Big(\frac{\mu_{2n}^2}{\gamma_{1n}^2} + \frac{\mu''^2_{2n}}{\gamma_{3n}^2}\Big)\|b_2^{(t)} - b_2^*\|_2^2 + \Big(\frac{\mu_{3n}^2}{\gamma_{1n}^2} + \frac{\mu'^2_{3n}}{\gamma_{2n}^2}\Big)\|b_3^{(t)} - b_3^*\|_2^2\Big] + 768\Big[\frac{\varepsilon_1^2\Phi(S_1)^2}{\gamma_{1n}^2} + \frac{\varepsilon_2^2\Phi(S_2)^2}{\gamma_{2n}^2} + \frac{\varepsilon_3^2\Phi(S_3)^2}{\gamma_{3n}^2}\Big].$$
Defining the contraction parameter
$$\rho = 288\,\max\{\mu_{2n}^2,\, \mu_{3n}^2,\, \mu'^2_{1n},\, \mu'^2_{3n},\, \mu''^2_{1n},\, \mu''^2_{2n}\}\big/\min\{\gamma_{1n}^2,\, \gamma_{2n}^2,\, \gamma_{3n}^2\},$$
we obtain
$$E^{(t+1)} \le \rho\,E^{(t)} + C_0\Big(\frac{\varepsilon_1^2\Phi(S_1)^2}{\gamma_{1n}^2} + \frac{\varepsilon_2^2\Phi(S_2)^2}{\gamma_{2n}^2} + \frac{\varepsilon_3^2\Phi(S_3)^2}{\gamma_{3n}^2}\Big), \quad \text{with } C_0 = 768,$$
with probability at least $1 - 3(\delta_1 + \delta_2 + \delta_3)$. This completes the proof.
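The recursion $E^{(t+1)} \le \rho E^{(t)} + \text{floor}$ established above implies geometric decay of the optimization error down to a statistical floor of order $(1-\rho)^{-1}\cdot\text{floor}$. The toy iteration below makes this visible numerically; the values of `rho` and `floor` are arbitrary illustrative choices, not quantities computed from the theory.

```python
# Toy illustration of E(t+1) <= rho * E(t) + floor: the optimization error
# decays geometrically until the statistical floor dominates.
rho, floor, E = 0.5, 1e-4, 10.0     # illustrative values only
for t in range(25):
    E = rho * E + floor
# After enough iterations, E stalls near floor / (1 - rho):
print(E, floor / (1 - rho))         # -> roughly 2e-4 for both
```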
A2 Proof of Theorem 21

Throughout, let $\alpha = \min\{\alpha_1, \alpha_2, \alpha_3\}$, $p = \max\{p_1, p_2, p_3\}$, and $s = \max\{s_1, s_2, s_3\}$, where $s_i$ is the cardinality of $S_i$ defined in (7). Our proof consists of three steps. First, we verify Conditions 4-6 and 8 in Lemmas 25-27 for the B-spline basis functions and give explicit forms of the Lipschitz-gradient parameter, the sparse-strongly-convex parameter, and the statistical error. Second, we prove a generic contraction result by an induction argument. Last, we combine the results of the first two steps to obtain the final estimation rate.

First, Lemmas 25 and 26 show that the loss function in (8) with B-spline basis functions is sparse strongly convex and Lipschitz continuous. The proofs are deferred to Sections 3.1 and 3.2 of the supplementary materials.

Lemma 25 Suppose $\{\psi_{jklh}(x)\}_{h=1}^{d_n}$ introduced in (3) are normalized B-spline basis functions and Conditions 15-16 and 17 hold. When $b_1 \in \mathcal{B}_{\alpha,s}(b_1^*)$, $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$, $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$, the loss function $L(\cdot,\cdot,\cdot)$ is sparse strongly convex in its first argument, namely
$$L(b_1, b_2, b_3) - L(b_1^*, b_2, b_3) - \langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1 - b_1^* \rangle \ge \frac{\gamma_{1n}}{2}\|b_1 - b_1^*\|_2^2, \qquad \text{(A26)}$$
where $\gamma_{1n} = C_1(1 + o(1))\,R\,d_n^{-1}\,s^2 c_*^4$.

Lemma 26 Suppose $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$, $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$, and Conditions 15-16 and 17 hold. For the B-spline basis functions, we have, with probability at least $1 - 12/p$,
$$T_1 = \big|\langle \nabla_1\widetilde L(b_1^*, b_2, b_3^*) - \nabla_1\widetilde L(b_1^*, b_2^*, b_3^*),\, b_1 - b_1^* \rangle\big| \le \mu_{2n}\|b_1 - b_1^*\|_2\,\|b_2 - b_2^*\|_2,$$
$$T_2 = \big|\langle \nabla_1\widetilde L(b_1^*, b_2, b_3) - \nabla_1\widetilde L(b_1^*, b_2, b_3^*),\, b_1 - b_1^* \rangle\big| \le \mu_{3n}\|b_1 - b_1^*\|_2\,\|b_3 - b_3^*\|_2,$$
where $\mu_{2n} = \mu_{3n} = 12\big(s^3R^2/d_n^2 + C_0\sqrt{\log p/n}\big)R^2 s^2 c^{*4}$.

The verification of Conditions 31-34 and the derivation of $\gamma_{2n}, \gamma_{3n}, \mu'_{1n}, \mu'_{3n}, \mu''_{1n}, \mu''_{2n}$ are essentially the same and differ only in constants. Thus, we let
$$\max\{\mu_{2n}, \mu_{3n}, \mu'_{1n}, \mu'_{3n}, \mu''_{1n}, \mu''_{2n}\} = C_3\big(s^3R^2/d_n^2 + \sqrt{\log p/n}\big)R^2 s^2 c^{*4}, \qquad \min\{\gamma_{1n}, \gamma_{2n}, \gamma_{3n}\} = C_4(1 + o(1))\,R\,d_n^{-1}s^2c_*^4, \qquad \text{(A27)}$$
for some absolute constants $C_3, C_4$. The next lemma gives an explicit bound on the statistical error for the update of $b_1$ when we use the B-spline basis and choose the penalty $\mathcal{P}$ to be the group lasso penalty.

Lemma 27 Suppose Conditions 15-16 and 17 hold and $\{\psi_{jklh}(x)\}_{h=1}^{d_n}$ introduced in (3) are normalized B-spline basis functions. For $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$, $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$, we have, with probability at least $1 - C_0Rd_ns/n$,
$$\big\|\nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde L(b_1^*, b_2, b_3)\big\|_{\mathcal{P}^*} \le C_1\bigg(\frac{R^{1/2}s^5c^{*2}}{d_n^{\kappa+1/2}} + \sigma\sqrt{\frac{s^4\log(pd_n)}{n}}\bigg),$$
for some absolute constants $C_0, C_1$, where $0 < \kappa < 1$ describes the smoothness of the function class $\mathcal{H}$ defined in (15).

We complete the proof of Theorem 21 by an induction argument. When $t = 1$, the initialization condition holds by Condition 19. Suppose $\|b_1^{(t)} - b_1^*\|_2 \le \alpha$, $\|b_2^{(t)} - b_2^*\|_2 \le \alpha$, and $\|b_3^{(t)} - b_3^*\|_2 \le \alpha$ hold for some $t \ge 1$. For step $t+1$, we first use the result of Lemma 23 and plug in the lower bound of $\lambda_{1n}^{(t)}$:
$$\|b_1^{(t+1)} - b_1^*\|_2 \le \frac{4\lambda_{1n}^{(t)}\Phi(S_1)}{\gamma_{1n}} \le \frac{4\Phi(S_1)}{\gamma_{1n}}\Big(4\varepsilon_1 + \frac{\mu_{2n}\|b_2^{(t)} - b_2^*\|_2 + \mu_{3n}\|b_3^{(t)} - b_3^*\|_2}{\Phi(S_1)}\Big) \le \frac{16\varepsilon_1\Phi(S_1)}{\gamma_{1n}} + \frac{4(\mu_{2n} + \mu_{3n})\alpha}{\gamma_{1n}}.$$
As long as the statistical error $\varepsilon_1$ satisfies
$$\varepsilon_1 \le \Big(1 - \frac{4(\mu_{2n} + \mu_{3n})}{\gamma_{1n}}\Big)\frac{\alpha\gamma_{1n}}{40\,\Phi(S_1)}, \qquad \text{(A28)}$$
we have $\|b_1^{(t+1)} - b_1^*\|_2 \le \alpha$. The proofs of $\|b_2^{(t+1)} - b_2^*\|_2 \le \alpha$ and $\|b_3^{(t+1)} - b_3^*\|_2 \le \alpha$ are similar, provided $\varepsilon_2, \varepsilon_3$ satisfy
$$\varepsilon_2 \le \Big(1 - \frac{4(\mu'_{1n} + \mu'_{3n})}{\gamma_{2n}}\Big)\frac{\alpha\gamma_{2n}}{16\,\Phi(S_2)}, \qquad \varepsilon_3 \le \Big(1 - \frac{4(\mu''_{1n} + \mu''_{2n})}{\gamma_{3n}}\Big)\frac{\alpha\gamma_{3n}}{16\,\Phi(S_3)}. \qquad \text{(A29)}$$
Second, when $b_2, b_3$ are fixed, the update scheme for $b_1$ exactly matches the one in Huang et al. (2010) with the group lasso penalty under the B-spline basis expansion. Define $S_1^{(t)} = \{j \in [p_1] : \|\beta_{1j}^{(t)}\|_2 \ne 0\}$. Similar to the proof of the first part of Theorem 1 in Huang et al. (2010), we obtain $|S_1^{(t)}| \le C_0|S_1| = C_0s$ for a finite constant $C_0 > 1$ with probability converging to one. That is, the number of nonzero groups in the group-lasso-type estimator is comparable to the size of the true support. The same guarantee holds for $b_2^{(t)}$ and $b_3^{(t)}$. Therefore, $b_1^{(t)} \in \mathcal{B}_{\alpha,s}(b_1^*)$, $b_2^{(t)} \in \mathcal{B}_{\alpha,s}(b_2^*)$, and $b_3^{(t)} \in \mathcal{B}_{\alpha,s}(b_3^*)$ hold for every iteration $t = 1, 2, \ldots$, as long as the statistical errors are small enough that (A28)-(A29) hold.

We choose the tuning parameters $\lambda_{1n}^{(t)}, \lambda_{2n}^{(t)}, \lambda_{3n}^{(t)}$ as defined in Condition 10, with the generic parameters specified in Lemmas 26 and 27. Repeatedly applying the result of Theorem 13 and summing from $t = 1$ to $t+1$, one obtains the generic form of the error recursion
$$E^{(t+1)} \le \rho^{t+1}E^{(0)} + \frac{1 - \rho^{t+1}}{1 - \rho}\,C_0\Big(\frac{\varepsilon_1^2\Phi(S_1)^2}{\gamma_{1n}^2} + \frac{\varepsilon_2^2\Phi(S_2)^2}{\gamma_{2n}^2} + \frac{\varepsilon_3^2\Phi(S_3)^2}{\gamma_{3n}^2}\Big), \qquad \text{(A30)}$$
with probability at least $1 - 2(t+1)(\delta_1 + \delta_2 + \delta_3)$. At this point, (A30) is still a generic bound.

Finally, we combine the results of Lemmas 25-27. According to (A27), the contraction parameter is bounded by
$$\rho \le \frac{288\,C_3^2\,c^{*8}}{C_4^2\,c_*^8}\Big(\frac{s^6}{d_n^2} + \frac{d_n^2\log p}{n}\Big).$$
When $d_n$ and the sample size $n$ are large enough that
$$d_n^2 \ge \frac{4 \cdot 288\,C_3^2\,c^{*8}\,s^6}{C_4^2\,c_*^8} \quad\text{and}\quad n \ge \frac{4 \cdot 288\,C_3^2\,d_n^2(\log p)\,c^{*8}}{c_*^8\,C_4^2}, \qquad \text{(A31)}$$
each of the two terms above is at most $1/4$, which guarantees $\rho \le 1/2$. For the group lasso penalty (9), it has been shown in Wainwright (2014) that $\max\{\Phi(S_1), \Phi(S_2), \Phi(S_3)\} = \sqrt{s}$. Then we obtain an explicit form of the upper bound in (A30):
$$E^{(t+1)} \le \rho^{t+1}E^{(0)} + \frac{1 - \rho^{t+1}}{1 - \rho}\cdot\frac{3C_0\max(\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2)\max(\Phi(S_1)^2, \Phi(S_2)^2, \Phi(S_3)^2)}{\min(\gamma_{1n}^2, \gamma_{2n}^2, \gamma_{3n}^2)} \le \rho^{t+1}E^{(0)} + \frac{1 - \rho^{t+1}}{1 - \rho}\cdot\frac{9C_0\,c^{*8}R^2}{c_*^8}(1 + o(1))\Big(\frac{1}{d_n^{2\kappa-1}} + \frac{\sigma^2 d_n^2\log(pd_n)}{s^2 n}\Big), \qquad \text{(A32)}$$
with probability at least $1 - C_0(t+1)(Rd_ns/n + 1/p)$.
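The two terms inside the final bracket of (A32) move in opposite directions as the number of spline basis functions $d_n$ grows, which is what drives the choice of $d_n$ in the simplification below. The following toy computation (arbitrary illustrative values of `kappa` and `n`; constants and the log factor dropped, exactly as in the rate calculation) shows that the minimizing $d_n$ scales like $n^{1/(2\kappa+1)}$:

```python
import numpy as np

# Trade-off in (A32): approximation error ~ dn**-(2*kappa - 1) versus
# statistical error ~ dn**2 / n (constants and log factors dropped).
kappa, n = 0.9, 10_000                  # illustrative values only
dn_grid = np.arange(2, 100)
total = dn_grid ** (-(2 * kappa - 1)) + dn_grid ** 2 / n
best_dn = dn_grid[np.argmin(total)]
# The empirical minimizer and the theoretical choice n**(1/(2*kappa + 1))
# agree up to a constant factor; their scaling in n is identical.
print(best_dn, n ** (1 / (2 * kappa + 1)))
```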
From Conditions 16-17, we know that $s, \sigma, c_*, c^*$ are all bounded by absolute constants. Then (A32) can be further simplified to
$$E^{(t+1)} \le \rho^{t+1}E^{(0)} + \frac{C_1R^2(1 + o(1))(1 - \rho^{t+1})}{1 - \rho}\Big(\frac{\log(ep)}{d_n^{2\kappa-3}\,n} + \frac{1}{d_n^{2\kappa-1}} + \frac{\sigma^2 d_n^2\log(pd_n)}{s^2 n}\Big).$$
To trade off the statistical error part $\sigma^2 d_n^2\log(pd_n)/(s^2n)$ against the approximation error part $\log(ep)/(d_n^{2\kappa-3}n) + d_n^{-(2\kappa-1)}$, one can take $d_n \asymp n^{1/(2\kappa+1)}$. The bound above then reduces to
$$E^{(t+1)} \le \rho^{t+1}E^{(0)} + \frac{C_1R^2(1 + o(1))}{1 - \rho}\,n^{-\frac{2\kappa-1}{2\kappa+1}}\log(pd_n),$$
with a proper adjustment of the constant $C_1$. Moreover, when the total number of iterations is at least
$$T = \log\Big(\frac{(1 - \rho)\,E^{(0)}}{C_1(1 + o(1))}\,n^{\frac{2\kappa-1}{2\kappa+1}}\Big)\Big/\log(1/\rho),$$
we have, with probability at least $1 - C_0(T+1)(Rs\,n^{-2\kappa/(2\kappa+1)} + 1/p)$,
$$E^{(T)} \le \frac{2C_1R^2(1 + o(1))}{1 - \rho}\,n^{-\frac{2\kappa-1}{2\kappa+1}}\log(pd_n),$$
as long as $n \ge C_2(\log p)^{(2\kappa+1)/(2\kappa-1)}$ for a sufficiently large $C_2$. This sample complexity is sufficient to guarantee that (A28)-(A29) and (A31) hold under Conditions 16-17. This completes the proof.

A3 Proof of Corollary 22

Recall that $\widetilde T(X) = \sum_{h=1}^{d_n}\langle B_h^*, F_h(X)\rangle$, where $B_h^* = \sum_{r=1}^R \beta_{1hr}^*\circ\beta_{2hr}^*\circ\beta_{3hr}^*$, and $\widehat T(X) = \sum_{h=1}^{d_n}\langle \widehat B_h, F_h(X)\rangle$, where $\widehat B_h = \sum_{r=1}^R \beta_{1hr}^{(T)}\circ\beta_{2hr}^{(T)}\circ\beta_{3hr}^{(T)}$. We make the decomposition
$$\|\widehat T - T^*\|_2^2 \le \underbrace{2\,\|\widehat T - \widetilde T\|_2^2}_{I_1} + \underbrace{2\,\|\widetilde T - T^*\|_2^2}_{I_2}, \qquad \text{where } \|f\|_2 = \Big(\int_a^b f^2(x)\,d\mathbb{P}(x)\Big)^{1/2}.$$
Intuitively, $I_1$ quantifies the estimation error of $\{B_h^*\}_{h=1}^{d_n}$, while $I_2$ measures the overall approximation error of the B-spline basis expansion. We bound $I_1$ and $I_2$ in two steps.

1. By definition, $I_1$ is controlled by the aggregated coefficient errors $\|\beta_{1hr}^{(T)} - \beta_{1hr}^*\|_2^2 + \|\beta_{2hr}^{(T)} - \beta_{2hr}^*\|_2^2 + \|\beta_{3hr}^{(T)} - \beta_{3hr}^*\|_2^2$. According to the basis properties of spline expansions (de Boor, 1978), we reach
$$I_1 \le C_1 d_n^{-1}\sum_{h=1}^{d_n}\sum_{r=1}^R\big(\|\beta_{1hr}^{(T)} - \beta_{1hr}^*\|_2^2 + \|\beta_{2hr}^{(T)} - \beta_{2hr}^*\|_2^2 + \|\beta_{3hr}^{(T)} - \beta_{3hr}^*\|_2^2\big) \le 3C_1R\,d_n^{-1}E^{(T)}.$$
According to Theorem 21,
$$I_1 \le 3C_1R\,d_n^{-1}\,n^{-\frac{2\kappa-1}{2\kappa+1}}\log(pd_n), \qquad \text{(A33)}$$
with probability at least $1 - C_0(T+1)(s\,n^{-2\kappa/(2\kappa+1)} + 1/p)$.

2. By the assumption of CP-low-rankness,
$$I_2 \le 2\Big\|\sum_{h=1}^{d_n}\langle B_h^*, F_h(X)\rangle - T^*(X)\Big\|_2^2 \le 2\Big\|\sum_{j,k,l}\big(f_{jkl}^{d_n}(X_{jkl}) - f_{jkl}^*(X_{jkl})\big)\Big\|_2^2.$$
According to Lemma 28, we have $I_2 \le C_2 s^6 d_n^{-2\kappa}$. (A34)

Putting (A33) and (A34) together, we reach
$$\|\widehat T - T^*\|_2^2 \le 3C_1R\,d_n^{-1}\,n^{-\frac{2\kappa-1}{2\kappa+1}}\log(pd_n) + C_2 s^6 d_n^{-2\kappa}.$$
Note that under Conditions 15-19, both $R$ and $s$ are bounded. Taking $d_n \asymp n^{1/(2\kappa+1)}$, we have
$$\|\widehat T - T^*\|_2^2 = O_p\big(n^{-\frac{2\kappa}{2\kappa+1}}\log(pd_n)\big).$$
This completes the proof.

Acknowledgments

The authors thank the action editor Francis Bach and two reviewers for their helpful comments and suggestions, which led to a much improved presentation. Will Wei Sun acknowledges support from the Office of Naval Research (ONR N00014-18-1-2759). Jingfei Zhang acknowledges support from the National Science Foundation (NSF DMS-2015190). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Office of Naval Research.

References

Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773-2832, 2014.

Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. The Annals of Statistics, 45(1):77-120, 2017.

Eric C. Chi and Tamara G. Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications, 33(4):1272-1299, 2012.

Yejin Choi, Marcus Fontoura, Evgeniy Gabrilovich, Vanja Josifovski, Mauricio Mediano, and Bo Pang. Using landing pages for sponsored search ad selection. In WWW, 2010.
Carl de Boor. A Practical Guide to Splines, volume 27. Springer-Verlag, New York, 1978.

Jianqing Fan, Yang Feng, and Rui Song. Nonparametric independence screening in sparse ultra-high-dimensional additive models. Journal of the American Statistical Association, 106(494):544-557, 2011.

Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1-22, 2010.

Rajarshi Guhaniyogi, Shaan Qamar, and David B. Dunson. Bayesian tensor regression. The Journal of Machine Learning Research, 18(1):2733-2763, 2017.

Weiwei Guo, Irene Kotsia, and Ioannis Patras. Tensor learning for regression. IEEE Transactions on Image Processing, 21(2):816-827, 2012.

Botao Hao, Will Wei Sun, Yufeng Liu, and Guang Cheng. Simultaneous clustering and estimation of heterogeneous graphical models. The Journal of Machine Learning Research, 18(1):7981-8038, 2017.

Botao Hao, Anru Zhang, and Guang Cheng. Sparse and low-rank tensor estimation via cubic sketchings. arXiv preprint arXiv:1801.09326, 2018.

Peter D. Hoff. Multilinear tensor regression for longitudinal relational data. The Annals of Applied Statistics, 9(3):1169, 2015.

Jian Huang, Joel L. Horowitz, and Fengrong Wei. Variable selection in nonparametric additive models. The Annals of Statistics, 38(4):2282, 2010.

Hung Hung and Chen-Chien Wang. Matrix variate logistic regression model with application to EEG data. Biostatistics, 14(1):189-202, 2012.

Heishiro Kanagawa, Taiji Suzuki, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami. Gaussian process nonparametric tensor estimator and its minimax optimality. In International Conference on Machine Learning, pages 1632-1641, 2016.

Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51:455-500, 2009.

Bing Li, Min Kyung Kim, and Naomi Altman. On dimension folding of matrix- or array-valued statistical objects. The Annals of Statistics, 38(2):1094-1121, 2010.

Lexin Li and Xin Zhang. Parsimonious tensor response regression. Journal of the American Statistical Association, 112(519):1131-1146, 2017.

Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion and blind deconvolution. arXiv preprint arXiv:1711.10467, 2017.

Lukas Meier, Sara van de Geer, and Peter Bühlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779-3821, 2009.

Sahand Negahban and Martin J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39:1069-1097, 2011.

Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538-557, 2012.

Seung-Taek Park and Wei Chu. Pairwise preference regression for cold-start recommendation. In Proceedings of the Third ACM Conference on Recommender Systems, pages 21-28. ACM, 2009.

Guillaume Rabusseau and Hachem Kadri. Low-rank regression with tensor responses. In Advances in Neural Information Processing Systems, 2016.

Garvesh Raskutti, Ming Yuan, and Han Chen. Convex regularization for high-dimensional multiresponse tensor regression. The Annals of Statistics, 47(3):1554-1584, 2019.
Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009-1030, 2009.

Charles J. Stone. Additive regression and other nonparametric models. The Annals of Statistics, pages 689-705, 1985.

Will Wei Sun and Lexin Li. STORE: Sparse tensor response regression and neuroimaging analysis. The Journal of Machine Learning Research, 18(1):4908-4944, 2017.

Will Wei Sun and Lexin Li. Dynamic tensor clustering. Journal of the American Statistical Association, 114(528):1708-1725, 2019.

Will Wei Sun, Junwei Lu, Han Liu, and Guang Cheng. Provable sparse tensor decomposition. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):899-916, 2017.

Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami. Minimax optimal alternating minimization for kernel nonparametric tensor learning. In Advances in Neural Information Processing Systems, pages 3783-3791, 2016.

Martin J. Wainwright. Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1(1):233-253, 2014.

Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164, 2014.

Jian Xu, Xuhui Shao, Jianjie Ma, Kuang-chih Lee, Hang Qi, and Quan Lu. Lift-based bidding in ad selection. In AAAI, 2016.

Xinyang Yi and Constantine Caramanis. Regularized EM algorithms: A unified framework and statistical guarantees. In Advances in Neural Information Processing Systems, pages 1567-1575, 2015.

Rose Yu and Yan Liu. Learning from multiway data: Simple and efficient tensor regression. In International Conference on Machine Learning, 2016.

Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68:49-67, 2006.

Ming Yuan and Cun-Hui Zhang. On tensor completion via nuclear norm minimization. Foundations of Computational Mathematics, 16:1031-1068, 2016.

Anru Zhang. Cross: Efficient low-rank tensor completion. The Annals of Statistics, 47:936-964, 2019.

Anru Zhang and Rungang Han. Optimal sparse singular value decomposition for high-dimensional high-order data. Journal of the American Statistical Association, 114:1708-1725, 2019.

Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-chih Lee, Zenglin Xu, Yuan Qi, and Zoubin Ghahramani. Distributed flexible nonlinear tensor factorization. In Advances in Neural Information Processing Systems, 2016.

Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108:540-552, 2013.

S. Zhou, X. Shen, and D. A. Wolfe. Local asymptotics for regression splines and confidence regions. The Annals of Statistics, 26(5):1760-1782, 1998.

Hongtu Zhu, Yasheng Chen, Joseph G. Ibrahim, Yimei Li, Colin Hall, and Weili Lin. Intrinsic regression models for positive-definite matrices with applications to diffusion tensor imaging. Journal of the American Statistical Association, 104(487):1203-1212, 2009.
Supplementary Materials for "Sparse Tensor Additive Regression"

Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, and Will Wei Sun

In this supplement, we present the definition and properties of the B-spline basis, some additional conditions for our theoretical results, and the detailed proofs of Lemmas 25-27.

1. Properties of B-splines

We formally define the $q$-th order B-splines with a set of $m$ internal knots $k = \{0 = k_0 < k_1 < \ldots < k_m < k_{m+1} = 1\}$ recursively:
$$b_l^1(x) = \begin{cases} 1, & k_l \le x < k_{l+1}, \\ 0, & \text{otherwise}, \end{cases} \qquad b_l^q(x) = \frac{x - k_l}{k_{l+q-1} - k_l}\,b_l^{q-1}(x) + \frac{k_{l+q} - x}{k_{l+q} - k_{l+1}}\,b_{l+1}^{q-1}(x). \qquad \text{(A1)}$$
Then, under some smoothness conditions, $f(x) \approx s(x) = \sum_l b_l^q(x)\beta_l = b(x)^\top\beta$, where $\beta \in \mathbb{R}^p$ with $p = m + q$. For a random variable $X$ satisfying Condition 15, we have
$$\mathbb{E}\big[b_l^q(X)\big] \le C_1 d_n^{-1}, \qquad \mathbb{E}\big[b_l^q(X)\big]^2 \le C_2 d_n^{-1},$$
for some constants $C_1$ and $C_2$; for detailed proofs, see Stone (1985), Huang et al. (2010), and Fan et al. (2011). Additionally, we restate the result of Huang et al. (2010) on the approximation error rate of the B-spline basis expansion.

Lemma 28 (Stone (1985); Huang et al. (2010)) Suppose Condition 15 holds and the number of spline series is chosen as $d_n = O(n^{1/(2\kappa+1)})$. Then there exists an $f_{jkl}^{d_n} \in S_n$ satisfying
$$\|f_{jkl}^{d_n} - f_{jkl}^*\|_2^2 = O_p(d_n^{-2\kappa}) = O_p(n^{-2\kappa/(2\kappa+1)}). \qquad \text{(A2)}$$
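To make the recursion (A1) concrete, the following sketch evaluates the $q$-th order B-spline basis by the Cox-de Boor recursion above, for equally spaced internal knots on $[0, 1]$. It is an illustrative implementation of (A1), not the authors' code; the knot-padding convention (clamped boundary knots) is our own choice.

```python
import numpy as np

def bspline_basis(x, m, q):
    """Evaluate the m + q B-spline basis functions of order q at points x,
    via the recursion (A1), with equally spaced internal knots on [0, 1]
    and clamped (repeated) boundary knots."""
    knots = np.concatenate([np.zeros(q - 1),
                            np.linspace(0.0, 1.0, m + 2),
                            np.ones(q - 1)])
    x = np.clip(np.atleast_1d(np.asarray(x, dtype=float)), 0.0, 1.0 - 1e-12)
    # Order 1: indicator functions of the knot intervals.
    B = np.array([(knots[l] <= x) & (x < knots[l + 1])
                  for l in range(len(knots) - 1)], dtype=float)
    for r in range(2, q + 1):          # raise the spline order from 2 up to q
        B_new = np.zeros((B.shape[0] - 1, x.size))
        for l in range(B.shape[0] - 1):
            left = knots[l + r - 1] - knots[l]
            right = knots[l + r] - knots[l + 1]
            if left > 0:               # skip degenerate (zero-length) spans
                B_new[l] += (x - knots[l]) / left * B[l]
            if right > 0:
                B_new[l] += (knots[l + r] - x) / right * B[l + 1]
        B = B_new
    return B.T                         # shape (len(x), m + q)
```

As a quick check, `bspline_basis(np.linspace(0, 1, 5), m=6, q=4)` returns a 5 × 10 matrix whose rows each sum to one, reflecting the partition-of-unity property of B-splines; the number of columns plays the role of $d_n$ in the main text.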
2. Additional conditions for Section 4.1

In this section, we present additional conditions for the Lipschitz-gradient property (Conditions 29-30), sparse strong convexity (Conditions 31-32), and the statistical error (Conditions 33-34) for the updates of $b_2$ and $b_3$. We write $\nabla_2 L(\cdot,\cdot,\cdot)$ and $\nabla_3 L(\cdot,\cdot,\cdot)$ for the gradients taken with respect to the second and third arguments.

Condition 29 For $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_3 \in \mathcal{B}_{\alpha_3,s_3}(b_3^*)$, the noiseless gradient function $\nabla_2\widetilde L(\cdot, b_2^*, b_3)$ satisfies the $\mu'_{1n}$-Lipschitz-gradient condition, and $\nabla_2\widetilde L(b_1^*, b_2^*, \cdot)$ satisfies the $\mu'_{3n}$-Lipschitz-gradient condition. That is,
$$\big|\langle \nabla_2\widetilde L(b_1, b_2^*, b_3) - \nabla_2\widetilde L(b_1^*, b_2^*, b_3),\, b_2 - b_2^* \rangle\big| \le \mu'_{1n}\|b_2 - b_2^*\|_2\,\|b_1 - b_1^*\|_2,$$
$$\big|\langle \nabla_2\widetilde L(b_1^*, b_2^*, b_3) - \nabla_2\widetilde L(b_1^*, b_2^*, b_3^*),\, b_2 - b_2^* \rangle\big| \le \mu'_{3n}\|b_2 - b_2^*\|_2\,\|b_3 - b_3^*\|_2,$$
with probability at least $1 - \delta_1$.

Condition 30 For $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_2 \in \mathcal{B}_{\alpha_2,s_2}(b_2^*)$, the noiseless gradient function $\nabla_3\widetilde L(b_1, \cdot, b_3^*)$ satisfies the $\mu''_{2n}$-Lipschitz-gradient condition, and $\nabla_3\widetilde L(\cdot, b_2^*, b_3^*)$ satisfies the $\mu''_{1n}$-Lipschitz-gradient condition. That is,
$$\big|\langle \nabla_3\widetilde L(b_1, b_2, b_3^*) - \nabla_3\widetilde L(b_1, b_2^*, b_3^*),\, b_3 - b_3^* \rangle\big| \le \mu''_{2n}\|b_3 - b_3^*\|_2\,\|b_2 - b_2^*\|_2,$$
$$\big|\langle \nabla_3\widetilde L(b_1, b_2^*, b_3^*) - \nabla_3\widetilde L(b_1^*, b_2^*, b_3^*),\, b_3 - b_3^* \rangle\big| \le \mu''_{1n}\|b_3 - b_3^*\|_2\,\|b_1 - b_1^*\|_2,$$
with probability at least $1 - \delta_1$.

Condition 31 For any $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_3 \in \mathcal{B}_{\alpha_3,s_3}(b_3^*)$, the loss function $L(\cdot,\cdot,\cdot)$ is sparse strongly convex in its second argument, namely
$$L(b_1, b_2, b_3) - L(b_1, b_2^*, b_3) - \langle \nabla_2 L(b_1, b_2^*, b_3),\, b_2 - b_2^* \rangle \ge \frac{\gamma_{2n}}{2}\|b_2 - b_2^*\|_2^2,$$
with probability at least $1 - \delta_2$. Here, $\gamma_{2n} > 0$ is the strong convexity parameter.

Condition 32 For any $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_2 \in \mathcal{B}_{\alpha_2,s_2}(b_2^*)$, the loss function $L(\cdot,\cdot,\cdot)$ is sparse strongly convex in its third argument, namely
$$L(b_1, b_2, b_3) - L(b_1, b_2, b_3^*) - \langle \nabla_3 L(b_1, b_2, b_3^*),\, b_3 - b_3^* \rangle \ge \frac{\gamma_{3n}}{2}\|b_3 - b_3^*\|_2^2,$$
with probability at least $1 - \delta_2$. Here, $\gamma_{3n} > 0$ is the strong convexity parameter.

Condition 33 For any $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_3 \in \mathcal{B}_{\alpha_3,s_3}(b_3^*)$, we have, with probability at least $1 - \delta_3$, $\big\|\nabla_2 L(b_1, b_2^*, b_3) - \nabla_2\widetilde L(b_1, b_2^*, b_3)\big\|_{\mathcal{P}^*} \le \varepsilon_2$.

Condition 34 For any $b_1 \in \mathcal{B}_{\alpha_1,s_1}(b_1^*)$, $b_2 \in \mathcal{B}_{\alpha_2,s_2}(b_2^*)$, we have, with probability at least $1 - \delta_3$, $\big\|\nabla_3 L(b_1, b_2, b_3^*) - \nabla_3\widetilde L(b_1, b_2, b_3^*)\big\|_{\mathcal{P}^*} \le \varepsilon_3$.

3. Proofs of Lemmas 25-27

In this section, we present the proofs of Lemmas 25-27. Recall that if $X$ is a sub-Gaussian random variable, then its $\varphi_2$-Orlicz norm is bounded, $\|X\|_{\varphi_2} \le C_1$, and if $X$ is a sub-exponential random variable, then its $\varphi_1$-Orlicz norm is bounded, $\|X\|_{\varphi_1} \le C_2$, for some absolute constants $C_1, C_2$ (here $\|X\|_{\varphi_q} = \inf\{t > 0 : \mathbb{E}\exp(|X|^q/t^q) \le 2\}$).

3.1 Proof of Lemma 25

Recall that $b_1 = (\vartheta_{11}^\top, \ldots, \vartheta_{1p_1}^\top)^\top \in \mathbb{R}^{Rd_np_1}$. Define $\bar S_1 = \{j \in [p] : \|\vartheta_{1j}\|_2 \ne 0 \text{ or } \|\vartheta_{1j}^*\|_2 \ne 0\}$, $F^1_{\bar S_1} = (F_j^1 \in \mathbb{R}^{n \times Rd_n} : j \in \bar S_1)$, and $b_{1\bar S_1} = (\vartheta_{1j} \in \mathbb{R}^{Rd_n} : j \in \bar S_1)$. Since $b_1 \in \mathcal{B}_{\alpha,s}(b_1^*)$, we have $|\bar S_1| \le C_0 s$ for some constant $C_0 \ge 1$ not depending on $s$. Without loss of generality, assume $\bar S_1 = \{1, \ldots, C_0s\}$.

First, we simplify the left-hand side of (A26). From the derivation in (12), since the loss is quadratic in $b_1$ and $b_1 - b_1^*$ is supported on $\bar S_1$,
$$L(b_1, b_2, b_3) - L(b_1^*, b_2, b_3) - \langle \nabla_1 L(b_1^*, b_2, b_3),\, b_1 - b_1^* \rangle = (b_{1\bar S_1} - b_{1\bar S_1}^*)^\top\,\frac{F^{1\top}_{\bar S_1}F^1_{\bar S_1}}{n}\,(b_{1\bar S_1} - b_{1\bar S_1}^*).$$
It remains to show
$$(b_{1\bar S_1} - b_{1\bar S_1}^*)^\top\,\frac{F^{1\top}_{\bar S_1}F^1_{\bar S_1}}{n}\,(b_{1\bar S_1} - b_{1\bar S_1}^*) \ge \frac{\gamma_{1n}}{2}\,\|b_{1\bar S_1} - b_{1\bar S_1}^*\|_2^2.$$
If we can show that $\sigma_{\min}\big(F^{1\top}_{\bar S_1}F^1_{\bar S_1}/n\big) \ge e_m$ for some positive $e_m$, then the strong convexity parameter is $\gamma_{1n} = e_m$. Let $a = (a_1^\top, \ldots, a_{C_0s}^\top)^\top$, where $a_j = (a_{j1}^\top, \ldots, a_{jR}^\top)^\top \in \mathbb{R}^{Rd_n}$ and $a_{jr} \in \mathbb{R}^{d_n}$. Our proof consists of two steps.

Step One. Consider a single block $F_j^1$. For $j, k, l \in [p]$ and $r \in [R]$, define
$$Z_{jkl} = \begin{pmatrix} \psi_{jkl1}([X_1]_{jkl}) & \cdots & \psi_{jkld_n}([X_1]_{jkl}) \\ \vdots & \ddots & \vdots \\ \psi_{jkl1}([X_n]_{jkl}) & \cdots & \psi_{jkld_n}([X_n]_{jkl}) \end{pmatrix}, \qquad D_{klr} = \mathrm{diag}\big(\beta_{21rk}\beta_{31rl}, \ldots, \beta_{2d_nrk}\beta_{3d_nrl}\big).$$
By the triangle inequality and Lemma 3 in Stone (1985), we have for $j \in [C_0s]$,
$$\sum_{k,l}\sum_{r=1}^R\big\|Z_{jkl}D_{klr}a_{jr}\big\|_2^2 \le C_1\,\big\|F_j^1 a_j\big\|_2^2, \qquad \text{(A3)}$$
where $C_1$ is some positive constant. Dividing both sides by $n$,
$$\frac{1}{C_1}\sum_{k,l}\sum_{r=1}^R a_{jr}^\top D_{klr}^\top\,\frac{Z_{jkl}^\top Z_{jkl}}{n}\,D_{klr}a_{jr} \le a_j^\top\,\frac{F_j^{1\top}F_j^1}{n}\,a_j.$$
According to Lemma 6.2 in Zhou et al. (1998), there exist constants $C_2$ and $C_3$ such that
$$C_2(1 + o(1))\,d_n^{-1} \le \sigma_{\min}\Big(\frac{Z_{jkl}^\top Z_{jkl}}{n}\Big) \le \sigma_{\max}\Big(\frac{Z_{jkl}^\top Z_{jkl}}{n}\Big) \le C_3(1 + o(1))\,d_n^{-1} \qquad \text{(A4)}$$
holds for all $k, l$. Since $\sigma_{\min}(A^\top BA) \ge \sigma_{\min}(B)\sigma_{\min}(A)^2$, we can bound the minimum eigenvalue of the weighted B-spline Gram matrix:
$$\sigma_{\min}\Big(D_{klr}^\top\,\frac{Z_{jkl}^\top Z_{jkl}}{n}\,D_{klr}\Big) \ge C_2(1 + o(1))\,d_n^{-1}\,\big(\min_h \beta_{2hrk}\beta_{3hrl}\big)^2.$$
This enables us to bound the smallest eigenvalue of $F_j^{1\top}F_j^1/n$ as follows:
$$\frac{a_j^\top (F_j^{1\top}F_j^1/n)\,a_j}{\|a_j\|_2^2} \ge C_1C_2(1 + o(1))\,d_n^{-1}\sum_{r=1}^R\Big(\min_h\sum_{k=1}^p\beta_{2hrk}^2 - \alpha^2\Big)\Big(\min_h\sum_{l=1}^p\beta_{3hrl}^2 - \alpha^2\Big) \ge C_1C_2(1 + o(1))\,R\,d_n^{-1}\,(sc_*^2 - \alpha^2)^2 \ge \tfrac{1}{4}\,C_1C_2(1 + o(1))\,R\,d_n^{-1}\,s^2c_*^4,$$
where the last inequality holds because $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$ and $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$ with $\alpha \le c_*\sqrt{s}/2$, together with Condition 17. Setting $\bar C_1 = C_1C_2/4$, for every $j \in [C_0s]$,
$$\sigma_{\min}\Big(\frac{F_j^{1\top}F_j^1}{n}\Big) \ge \bar C_1(1 + o(1))\,R\,d_n^{-1}s^2c_*^4. \qquad \text{(A5)}$$

Step Two. By the triangle inequality,
$$\frac{1}{C_4}\sum_{j=1}^{C_0s}\|F_j^1a_j\|_2^2 \le \|F_{\bar S_1}^1a\|_2^2 = a^\top F_{\bar S_1}^{1\top}F_{\bar S_1}^1a$$
for some constant $C_4$, which implies
$$\frac{a^\top(F_{\bar S_1}^{1\top}F_{\bar S_1}^1/n)\,a}{\|a\|_2^2} \ge \frac{1}{C_4}\cdot\frac{\sum_{j=1}^{C_0s}\|F_j^1a_j\|_2^2}{n\,\|a\|_2^2}.$$
Together with (A5), we have
$$\frac{a^\top(F_{\bar S_1}^{1\top}F_{\bar S_1}^1/n)\,a}{\|a\|_2^2} \ge \frac{\bar C_1}{C_4}(1 + o(1))\,R\,d_n^{-1}s^2c_*^4$$
for any $a$. Setting $C_1 = \bar C_1/C_4$, this implies
$$\sigma_{\min}\Big(\frac{F_{\bar S_1}^{1\top}F_{\bar S_1}^1}{n}\Big) \ge C_1(1 + o(1))\,R\,d_n^{-1}s^2c_*^4,$$
for some constant $C_1$, so the sparse strong convexity holds with $\gamma_{1n} = C_1(1 + o(1))\,R\,d_n^{-1}s^2c_*^4$.
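The $d_n^{-1}$ eigenvalue scaling in (A4) is easy to see empirically. The snippet below checks it in the simplest case of order-1 (piecewise-constant) B-splines, where $Z^\top Z/n$ is diagonal with entries equal to the empirical cell frequencies; it illustrates only the scaling, not the full spline computation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
for dn in (5, 10, 20, 40):
    x = rng.random(n)
    # Order-1 B-spline design: one-hot indicator of which of dn cells x falls in.
    Z = np.eye(dn)[np.minimum((x * dn).astype(int), dn - 1)]
    gram = Z.T @ Z / n
    eigs = np.linalg.eigvalsh(gram)
    # Both extreme eigenvalues scale like 1/dn, matching (A4):
    print(dn, eigs.min() * dn, eigs.max() * dn)   # both ratios near 1
```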
3.2 Proof of Lemma 26

For notational simplicity, denote
$$g_i(b_1, b_2, b_3) = \sum_{h=1}^{d_n}\Big\langle F_h(X_i),\ \sum_{r=1}^R\beta_{1hr}\circ\beta_{2hr}\circ\beta_{3hr}\Big\rangle,$$
where $F_h(X_i)$ is defined in (4). According to the definition of the gradient function, we can rewrite the relevant inner products as
$$\big\langle \nabla_1\widetilde L(b_1, b_2, b_3),\ b_1 - b_1^*\big\rangle = \frac{2}{n}\sum_{i=1}^n\Big[g_i(b_1, b_2, b_3)\,g_i(b_1 - b_1^*, b_2, b_3) - g_i(b_1^*, b_2^*, b_3^*)\,g_i(b_1 - b_1^*, b_2, b_3)\Big]. \qquad \text{(A6)}$$
We bound $T_1$ first; the bound for $T_2$ is analogous. Since $g_i$ is linear in each of its three coefficient blocks, (A6) allows us to decompose $T_1$ into three parts, $T_1 = T_{11} + T_{12} + T_{13}$, where
$$T_{11} = \frac{2}{n}\sum_{i=1}^n g_i(b_1^*, b_2 - b_2^*, b_3)\,g_i(b_1 - b_1^*, b_2, b_3), \quad T_{12} = \frac{2}{n}\sum_{i=1}^n g_i(b_1^*, b_2^*, b_3)\,g_i(b_1 - b_1^*, b_2 - b_2^*, b_3), \quad T_{13} = \frac{2}{n}\sum_{i=1}^n g_i(b_1^*, b_2^*, b_3^*)\,g_i(b_1 - b_1^*, b_2 - b_2^*, b_3).$$
Writing $g_i$ explicitly,
$$g_i(b_1, b_2, b_3) = \sum_{h=1}^{d_n}\sum_{j,k,l}[F_h(X_i)]_{jkl}\sum_{r=1}^R\beta_{1hrj}\beta_{2hrk}\beta_{3hrl} = \sum_{h=1}^{d_n}\sum_{j,k,l}\psi_{jklh}([X_i]_{jkl})\sum_{r=1}^R\beta_{1hrj}\beta_{2hrk}\beta_{3hrl}.$$
Since $\sup_x|\psi_{jklh}(x)| \le 1$, the $\varphi_2$-Orlicz norm of each individual component is bounded:
$$\Big\|\psi_{jklh}([X_i]_{jkl})\sum_{r=1}^R\beta_{1hrj}\beta_{2hrk}\beta_{3hrl}\Big\|_{\varphi_2} \le \Big|\sum_{r=1}^R\beta_{1hrj}\beta_{2hrk}\beta_{3hrl}\Big|, \qquad j, k, l \in [p].$$
Based on rotation invariance, we obtain
$$\big\|g_i(b_1, b_2, b_3)\big\|_{\varphi_2} \le \Big(\sum_{h=1}^{d_n}\sum_{j,k,l}\Big(\sum_{r=1}^R\beta_{1hrj}\beta_{2hrk}\beta_{3hrl}\Big)^2\Big)^{1/2}.$$
Next, we bound the expectation of $g_i(b_1^*, b_2 - b_2^*, b_3)\,g_i(b_1 - b_1^*, b_2, b_3)$. By the properties of the B-spline basis (see Section 1) and the Cauchy-Schwarz inequality,
$$\mathbb{E}\,g_i(b_1, b_2, b_3) = \sum_{h,j,k,l}\mathbb{E}\big[[F_h(X_i)]_{jkl}\big]\sum_r\beta_{1hrj}\beta_{2hrk}\beta_{3hrl} \le \frac{C}{d_n}\sum_{h,j,k,l}\Big|\sum_r\beta_{1hrj}\beta_{2hrk}\beta_{3hrl}\Big| \le \frac{Cs^{3/2}}{d_n}\Big(\sum_{h,j,k,l}\sum_r\beta_{1hrj}^2\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2},$$
where the last step uses the group sparsity of the coefficients. Combining these ingredients with Hoeffding's inequality (see Lemma 35), we obtain, with probability at least $1 - 1/p$,
$$T_{11} \le 2\Big(\frac{s^3R^2}{d_n^2} + C_0\sqrt{\frac{\log p}{n}}\Big)\Big(\sum_{h,r,j,k,l}\beta_{1hrj}^{*2}(\beta_{2hrk} - \beta_{2hrk}^*)^2\beta_{3hrl}^2\Big)^{1/2}\Big(\sum_{h,r,j,k,l}(\beta_{1hrj} - \beta_{1hrj}^*)^2\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2}.$$
Noting that $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$ and $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$, we have
$$T_{11} \le 2\Big(\frac{s^3R^2}{d_n^2} + C_0\sqrt{\frac{\log p}{n}}\Big)\max_h\Big(\sum_{j=1}^p\beta_{1hrj}^{*2}\Big)^{1/2}\max_h\Big(\sum_{k=1}^p\beta_{2hrk}^2\Big)^{1/2}\max_h\Big(\sum_{l=1}^p\beta_{3hrl}^2\Big)\,\|b_2 - b_2^*\|_2\,\|b_1 - b_1^*\|_2 \le \Big(\frac{s^3R^2}{d_n^2} + C_0\sqrt{\frac{\log p}{n}}\Big)R^2s^2c^{*4}\,\|b_2 - b_2^*\|_2\,\|b_1 - b_1^*\|_2,$$
where the last inequality follows from Condition 17. The upper bounds for $T_{12}$ and $T_{13}$ are similar. Putting them together, with probability at least $1 - 6/p$,
$$|T_1| \le 6\Big(\frac{s^3R^2}{d_n^2} + C_0\sqrt{\frac{\log p}{n}}\Big)R^2s^2c^{*4}\,\|b_2 - b_2^*\|_2\,\|b_1 - b_1^*\|_2.$$
The bound for $T_2$ is obtained in the same way. This completes the proof.

3.3 Proof of Lemma 27

Recall that $\mathcal{P}^*$ is the dual norm of the group lasso penalty $\mathcal{P}$. With a slight abuse of notation, in this section we write $\epsilon = (\epsilon_1, \ldots, \epsilon_n)^\top$ and $T^*(X) = (T^*(X_1), \ldots, T^*(X_n))^\top$. According to the derivation of the gradient function in (12), we decompose the error into a spline approximation term ($T_1$) and a statistical term ($T_2$):
$$\big\|\nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde L(b_1^*, b_2, b_3)\big\|_{\mathcal{P}^*} \le \frac{2}{n}\big\|F^{1\top}\big(T^*(X) - F^1\bar b_1 + \epsilon\big)\big\|_{\mathcal{P}^*} \le \underbrace{\frac{2}{n}\big\|F^{1\top}\big(T^*(X) - F^1\bar b_1\big)\big\|_{\mathcal{P}^*}}_{T_1} + \underbrace{\frac{2}{n}\big\|F^{1\top}\epsilon\big\|_{\mathcal{P}^*}}_{T_2}.$$

Step One: Bounding $T_1$. Denote $A_1 = \{j \in [p] : \|F_j^1\|_2 \ne 0\}$. Since $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$ and $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$, it is easy to see that $|A_1| \le C_0s$ for some constant $C_0$ not depending on $n, p, s$. By the definition of the dual norm $\mathcal{P}^*$ (see Definition 7), we obtain
$$T_1 = \frac{2}{n}\max_{j \in A_1}\Big\|\sum_{i=1}^n F^1_{ij}\big(T^*(X_i) - [F^1\bar b_1]_i\big)\Big\|_2 \le 2\max_{i \in [n]}\big|T^*(X_i) - [F^1\bar b_1]_i\big|\;\max_{j \in A_1}\Big\|\frac{1}{n}\sum_{i=1}^n F^1_{ij}\Big\|_2. \qquad \text{(A7)}$$
Note that the first factor of (A7) comes entirely from the approximation error of the B-spline basis expansion of the nonparametric components. We bound $T_1$ in three steps, as follows.

1. To bound the first factor, we use Lemma 28, which quantifies the approximation error of a single component. To see this, there exists a positive constant $C_1$ such that
$$\big|f_{jkl}^{d_n}([X_i]_{jkl}) - f_{jkl}^*([X_i]_{jkl})\big| \le C_1 d_n^{-\kappa}, \qquad j, k, l \in [p].$$
For the whole nonparametric function $T^*$, the CP-low-rankness assumption (5) and the group sparsity assumption (7) imply
$$\max_{i \in [n]}\big|T^*(X_i) - [F^1\bar b_1]_i\big| = \max_{i \in [n]}\Big|\sum_{j,k,l}\big(f_{jkl}^{d_n}([X_i]_{jkl}) - f_{jkl}^*([X_i]_{jkl})\big)\Big| \le C_1 s^3 d_n^{-\kappa}. \qquad \text{(A8)}$$
2. To bound the second factor, by the definition of $F^1_{ij}$ we have
$$\frac{1}{n}\sum_{i=1}^n F^1_{ij} = \Big(\frac{1}{n}\sum_{i}\big\langle\beta_{211}\circ\beta_{311},\,[F_1(X_i)]_{j\cdot\cdot}\big\rangle,\ \ldots,\ \frac{1}{n}\sum_{i}\big\langle\beta_{2d_n1}\circ\beta_{3d_n1},\,[F_{d_n}(X_i)]_{j\cdot\cdot}\big\rangle,\ \ldots,\ \frac{1}{n}\sum_{i}\big\langle\beta_{21R}\circ\beta_{31R},\,[F_1(X_i)]_{j\cdot\cdot}\big\rangle,\ \ldots,\ \frac{1}{n}\sum_{i}\big\langle\beta_{2d_nR}\circ\beta_{3d_nR},\,[F_{d_n}(X_i)]_{j\cdot\cdot}\big\rangle\Big)^\top,$$
which implies
$$\Big\|\frac{1}{n}\sum_{i=1}^n F^1_{ij}\Big\|_2 = \Big(\sum_{h=1}^{d_n}\sum_{r=1}^R\Big(\frac{1}{n}\sum_{i=1}^n\big\langle\beta_{2hr}\circ\beta_{3hr},\,[F_h(X_i)]_{j\cdot\cdot}\big\rangle\Big)^2\Big)^{1/2}.$$
According to the properties of the B-spline basis in Section 1,
$$\mathbb{E}\,\big\langle\beta_{2hr}\circ\beta_{3hr},\,[F_h(X_i)]_{j\cdot\cdot}\big\rangle \le d_n^{-1}\sum_{k,l}\big|\beta_{2hrk}\beta_{3hrl}\big| \le \frac{C_0s}{d_n}\Big(\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2}, \qquad \text{(A9)}$$
where the second inequality follows from the Cauchy-Schwarz inequality and the sparsity assumption on $b_2, b_3$. On the other hand, recall that $\sup_x|\psi_{jklh}(x)| \le 1$ for all $j, k, l \in [p]$. With rotation invariance, the $\varphi_2$-Orlicz norm of $\langle\beta_{2hr}\circ\beta_{3hr},\,[F_h(X_i)]_{j\cdot\cdot}\rangle$ is bounded by $\big(\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\big)^{1/2}$. Combining (A9) with a Hoeffding-type concentration inequality (see Lemma 35), we have, with probability at least $1 - 1/n$,
$$\frac{1}{n}\sum_{i=1}^n\big\langle\beta_{2hr}\circ\beta_{3hr},\,[F_h(X_i)]_{j\cdot\cdot}\big\rangle \le \frac{2C_0s}{d_n}\Big(\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2}, \qquad \text{(A10)}$$
which implies
$$\Big\|\frac{1}{n}\sum_{i=1}^n F^1_{ij}\Big\|_2 \le \frac{2C_0s}{d_n}\Big(\sum_{h,r}\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2}, \qquad \text{(A11)}$$
with probability at least $1 - Rd_ns/n$.

3. Putting (A8)-(A11) together, we obtain
$$T_1 \le 2C_1s^3d_n^{-\kappa}\cdot\frac{2C_0s}{d_n}\Big(\sum_{h,r}\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2}, \qquad \text{(A12)}$$
with probability at least $1 - Rd_ns/n$, for some absolute constants $C_0, C_1$.

Step Two: Bounding $T_2$. Recall that $F^{1\top}\epsilon = (F_1^{1\top}\epsilon, \ldots, F_p^{1\top}\epsilon)^\top \in \mathbb{R}^{pRd_n}$. Then
$$T_2 = \frac{2}{n}\max_{j \in A_1}\big\|F_j^{1\top}\epsilon\big\|_2 = \frac{2}{n}\max_{j \in A_1}\Big\|\sum_{i=1}^n F^1_{ij}\epsilon_i\Big\|_2 \lesssim \frac{1}{n}\max_{j \in A_1,\,h \in [d_n],\,r \in [R]}\Big|\sum_{i=1}^n\epsilon_i\big\langle\beta_{2hr}\circ\beta_{3hr},\,[F_h(X_i)]_{j\cdot\cdot}\big\rangle\Big| = \frac{1}{n}\max_{j,h,r}\Big|\sum_{i=1}^n\epsilon_i\sum_{k,l}\psi_{jklh}([X_i]_{jkl})\,\beta_{2hrk}\beta_{3hrl}\Big|.$$
From the initialization assumption and Condition 17, we have
$$|\beta_{2hrk} - \beta_{2hrk}^*| \le \max_{h,k}|\beta_{2hrk} - \beta_{2hrk}^*| \le \|\beta_{2hr} - \beta_{2hr}^*\|_2 \le \alpha,$$
and thus $|\beta_{2hrk}| \le \alpha + c^*$. The same bound holds for $\beta_{3hrl}$. Therefore, applying Lemma 36, we have
$$T_2 \le \frac{s^2(\alpha + c^*)^2}{n}\max_{j \in A_1,\,k,l \in [p],\,h \in [d_n]}\Big|\sum_{i=1}^n\epsilon_i\,\psi_{jklh}([X_i]_{jkl})\Big| \le C_2\,\sigma s^2\sqrt{\frac{\log(pd_n)}{n}}, \qquad \text{(A13)}$$
with probability at least $1 - 4C_0Rs/n$, where $\sigma$ is the noise level.

Step Three: Summary. Putting the bounds (A12) and (A13) together, we obtain, with probability at least $1 - C_0Rd_ns/n$,
$$\big\|\nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde L(b_1^*, b_2, b_3)\big\|_{\mathcal{P}^*} \le C_1s^3d_n^{-\kappa}\cdot\frac{C_0s}{d_n}\Big(\sum_{h,r}\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2\Big)^{1/2} + C_3\,\sigma s^2\sqrt{\frac{\log(pd_n)}{n}},$$
where $C_1, C_2, C_3$ are some positive constants. According to Condition 17 and $b_2 \in \mathcal{B}_{\alpha,s}(b_2^*)$, $b_3 \in \mathcal{B}_{\alpha,s}(b_3^*)$,
$$\sum_{h,r}\sum_{k,l}\beta_{2hrk}^2\beta_{3hrl}^2 = \sum_{h,r}\Big(\sum_{k=1}^p\beta_{2hrk}^2\Big)\Big(\sum_{l=1}^p\beta_{3hrl}^2\Big) \le R\,d_n\,s^2c^{*4}.$$
Setting $\bar C_1 = \max\{C_1, C_2, C_3\}$, we have, with probability at least $1 - C_0Rd_ns/n$,
$$\big\|\nabla_1 L(b_1^*, b_2, b_3) - \nabla_1\widetilde L(b_1^*, b_2, b_3)\big\|_{\mathcal{P}^*} \le \bar C_1\bigg(\frac{R^{1/2}s^5c^{*2}}{d_n^{\kappa+1/2}} + \sigma\sqrt{\frac{s^4\log(pd_n)}{n}}\bigg).$$
This completes the proof.

4. Supporting Lemmas

Lemma 35 (Hoeffding-type inequality) Suppose $\{X_i\}_{i=1}^n$ are i.i.d. sub-Gaussian random variables with $\|X_i\|_{\varphi_2} \le K$, where $K$ is an absolute constant. For fixed $a \in \mathbb{R}^n$, we have, with probability at least $1 - \delta$,
$$\Big|\sum_{i=1}^n a_iX_i - \mathbb{E}\Big(\sum_{i=1}^n a_iX_i\Big)\Big| \le C_0K\,\|a\|_2\sqrt{\log(2/\delta)}.$$

Lemma 36 (Lemma 2 in Huang et al. (2010)) Suppose Conditions 15-16 hold. Let
$$T_{jklh} = \sum_{i=1}^n\psi_{jklh}([X_i]_{jkl})\,\epsilon_i, \quad j, k, l \in [p],\ h \in [d_n], \qquad T_n = \max_{j,k,l \in [p],\,h \in [d_n]}|T_{jklh}|.$$
When $d_n\sqrt{d_n/n} \to 0$, we have, for some constant $C_1$,
$$\mathbb{E}(T_n) \le C_1\sqrt{n\log(pd_n)}.$$
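As a quick numerical sanity check of the $\sqrt{n\log(pd_n)}$ scaling in Lemma 36, one can simulate the maximum of many centered sums of bounded-basis-times-noise terms. The snippet below uses plain Gaussian noise and random bounded weights purely as an illustration; `n_terms` stands in for the $p \cdot d_n$ coordinates being maximized over and is an arbitrary choice of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_terms = 2000, 500                 # n_terms plays the role of p * dn
psi = rng.random((n_terms, n))         # bounded "basis" values in [0, 1]
eps = rng.standard_normal(n)           # sub-Gaussian noise
Tn = np.abs(psi @ eps).max()           # max over terms of |sum_i psi(X_i) eps_i|
# Lemma 36 predicts E(Tn) on the order of sqrt(n * log(n_terms)):
print(Tn, np.sqrt(n * np.log(n_terms)))   # same order of magnitude
```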