Journal of Machine Learning Research 18 (2017) 1-60. Submitted 4/16; Revised 10/16; Published 4/17

Learning Local Dependence In Ordered Data

Guo Yu (gy63@cornell.edu), Department of Statistical Science, Cornell University, 1173 Comstock Hall, Ithaca, NY 14853, USA

Jacob Bien (jbien@cornell.edu), Department of Biological Statistics and Computational Biology and Department of Statistical Science, Cornell University, 1178 Comstock Hall, Ithaca, NY 14853, USA

Editor: Saharon Rosset

Abstract

In many applications, data come with a natural ordering. This ordering can often induce local dependence among nearby variables. However, in complex data, the width of this dependence may vary, making simple assumptions such as a constant neighborhood size unrealistic. We propose a framework for learning this local dependence based on estimating the inverse of the Cholesky factor of the covariance matrix. Penalized maximum likelihood estimation of this matrix yields a simple regression interpretation for local dependence in which variables are predicted by their neighbors. Our proposed method involves solving a convex, penalized Gaussian likelihood problem with a hierarchical group lasso penalty. The problem decomposes into independent subproblems which can be solved efficiently in parallel using first-order methods. Our method yields a sparse, symmetric, positive definite estimator of the precision matrix, encoding a Gaussian graphical model. We derive theoretical results not found in existing methods attaining this structure. In particular, our conditions for signed support recovery and estimation consistency rates in multiple norms are as mild as those in a regression problem. Empirical results show our method performing favorably compared to existing methods. We apply our method to genomic data to flexibly model linkage disequilibrium. Our method is also applied to improve the performance of discriminant analysis in sound recording classification.

Keywords: local dependence, Gaussian graphical models, precision matrices, Cholesky factor, hierarchical group lasso

(c) 2017 Guo Yu and Jacob Bien. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/16-198.html.

1. Introduction

Estimating large inverse covariance matrices is a fundamental problem in modern multivariate statistics. Consider a random vector $X = (X_1, \dots, X_p)^T \in \mathbb{R}^p$ with mean zero and covariance matrix $E(XX^T) = \Sigma$. Unlike the covariance matrix, which captures marginal correlations among the variables in $X$, the inverse covariance matrix $\Omega = \Sigma^{-1}$ (also known as the precision matrix) characterizes conditional correlations; under a Gaussian model, $\Omega_{jk} = 0$ implies that $X_j$ and $X_k$ are conditionally independent given all other variables. When $p$ is large, it is common to regularize the precision matrix estimator by making it sparse (see, e.g., Pourahmadi, 2013). This paper focuses on the special context in which variables have a natural ordering, such as when data are collected over time or along a genome. In such a context, it is often reasonable to assume that random variables that are far apart in the ordering are less dependent than those that are close together. For example, it is known that genetic mutations that occur close together on a chromosome are more likely to be coinherited than mutations that are located far apart.
We propose a method for estimating the precision matrix based on this assumption while also allowing each random variable to have its own notion of closeness.

In general settings where variables do not necessarily have a known ordering, two main types of convex methods with strong theoretical results have been developed for introducing sparsity in $\Omega$. The first approach, known as the graphical lasso (Yuan and Lin, 2007; Banerjee et al., 2008; Friedman et al., 2008; Rothman et al., 2008), performs penalized maximum likelihood, solving
$$\min_{\Omega \succ 0,\ \Omega = \Omega^T}\ L(\Omega) + \lambda P(\Omega),$$
where $L(\Omega) = -\log\det\Omega + n^{-1}\sum_{i=1}^n x_i^T \Omega x_i$ is, up to constants, the negative log-likelihood of a sample of $n$ independent Gaussian random vectors and $P(\Omega)$ is the (vector) $\ell_1$-norm of $\Omega$. Zhang and Zou (2014) introduce a new convex loss function called the D-trace loss and propose a positive definite precision matrix estimator by minimizing an $\ell_1$-penalized version of this loss. The second approach is through penalized pseudo-likelihood, the most well-known of which is neighborhood selection (Meinshausen and Bühlmann, 2006). Estimators in this category are usually computed by a column-by-column approach and thus are more amenable to theoretical analysis (Yuan, 2010; Cai et al., 2011; Liu and Luo, 2012; Liu and Wang, 2012; Sun and Zhang, 2013; Khare et al., 2014). However, they are not guaranteed to be positive definite and do not exploit the symmetry of $\Omega$. Peng et al. (2009) propose a partial correlation matrix estimator that develops a symmetric version of neighborhood selection; however, positive definiteness is still not guaranteed.

In the context of variables with a natural ordering, by contrast, almost no work uses convex optimization to flexibly estimate $\Omega$ while exploiting the ordering structure. Sparsity is usually induced via the Cholesky decomposition of $\Sigma$, which leads to a natural interpretation of sparsity. Consider the Cholesky decomposition $\Sigma = QQ^T$, which implies $\Omega = L^T L$ for $L = Q^{-1}$, where $Q$ and $L$ are lower triangular matrices with positive diagonals. The assumption that $X \sim N(0, \Sigma)$ is then equivalent to a set of linear models in terms of the rows of $L$:
$$L_{11}X_1 = \varepsilon_1 \quad\text{and}\quad L_{rr}X_r = -\sum_{k=1}^{r-1} L_{rk}X_k + \varepsilon_r, \qquad r = 2, \dots, p, \tag{1}$$
where $\varepsilon \sim N(0, I_p)$. Thus, $L_{rk} = 0$ (for $k < r$) can be interpreted as meaning that in predicting $X_r$ from the previous random variables, one does not need to know $X_k$. This observation has motivated previous work, including Pourahmadi (1999); Wu and Pourahmadi (2003); Huang et al. (2006); Shojaie and Michailidis (2010); Khare et al. (2016). While these methods assume sparsity in $L$, they do not require local dependence, because each variable is allowed to depend on predecessors that are distant from it (compare the upper left to the upper right panel of Figure 10).

The assumption of local dependence can be expressed as saying that each variable $X_r$ is best explained by exactly its $K_r$ closest predecessors:
$$L_{rr}X_r = -\sum_{k=r-K_r}^{r-1} L_{rk}X_k + \varepsilon_r, \quad\text{with } L_{rk} \neq 0 \text{ for } r - K_r \le k \le r-1, \qquad r = 2, \dots, p. \tag{2}$$
Note that this does not describe all patterns of a variable depending on its nearby variables. For example, $X_r$ can be dependent on $X_{r-2}$ but not on $X_{r-1}$. In this case, the dependence is still local, but would not be captured by (2). We focus on the restricted class (2) since it greatly simplifies the interpretation of the learned dependence structure by capturing the extent of this dependence in a single number $K_r$, the neighborhood size.
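To make the regression interpretation of (1) and (2) concrete, the following R sketch (ours, not from the authors' code; the bandwidths and coefficient ranges are illustrative) builds a lower-triangular $L$ with row-specific bandwidths and simulates data. Since $LX = \varepsilon$ with $\varepsilon \sim N(0, I_p)$, a draw of $X$ is obtained by forward-solving the triangular system.

```r
# Hypothetical illustration of models (1)-(2): build a lower-triangular L with
# row-specific bandwidths K_r, then draw X ~ N(0, (L^T L)^{-1}) via L x = eps.
set.seed(1)
p <- 6
K <- c(0, 1, 2, 1, 3, 2)                 # illustrative bandwidths K_r
L <- diag(runif(p, 1, 2))                # positive diagonal entries L_rr
for (r in 2:p) {
  if (K[r] > 0) {
    band <- (r - K[r]):(r - 1)           # the K_r closest predecessors
    L[r, band] <- runif(K[r], -0.4, 0.4)
  }
}
n <- 100
eps <- matrix(rnorm(n * p), n, p)        # rows of eps are N(0, I_p)
X <- t(apply(eps, 1, function(e) forwardsolve(L, e)))  # x = L^{-1} eps
```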
Another desirable property of model (2) is that it admits a simple connection between the sparsity pattern of $L$ and the sparsity pattern of the precision matrix $\Omega$ in the Gaussian graphical model. In particular, straightforward algebra shows that for $j < k$,
$$L_{kj} = \cdots = L_{pj} = 0 \implies \Omega_{jk} = 0. \tag{3}$$
Statistically, this says that if none of the variables $X_k, \dots, X_p$ depends on $X_j$ in the sense of (1), then $X_j$ and $X_k$ are conditionally independent given all other variables.

Bickel and Levina (2008) study theoretical properties in the case that all bandwidths $K_r$ are equal, in which case model (2) is a $K_r$-th order antedependence model (Zimmerman and Núñez-Antón, 2009). A banded estimate of $L$ then induces a banded estimate of $\Omega$. The nested lasso approach of Levina et al. (2008) provides for adaptive banding, allowing $K_r$ to vary with $r$ (which corresponds to variable-order antedependence models in Zimmerman and Núñez-Antón, 2009); however, the nested lasso is non-convex, meaning that the proposed algorithm does not necessarily minimize the stated objective, and theoretical properties of this estimator have not been established. In this paper, we propose a penalized likelihood approach that provides the flexibility of the nested lasso but is formulated as a convex optimization problem, which allows us to prove strong theoretical properties and to provide an efficient, scalable algorithm for computing the estimator. The theoretical development of our method allows us to make clear comparisons with known results for the graphical lasso (Rothman et al., 2008; Ravikumar et al., 2011) in the non-ordered case. Both methods are convex penalized likelihood approaches, so this comparison highlights the similarities and differences between the ordered and non-ordered problems.

There are two key choices we make that lead to a convex formulation. First, we express the optimization problem in terms of the Cholesky factor $L$. The nested lasso and other methods (starting with Pourahmadi, 1999) use the modified Cholesky decomposition, $\Omega = T^T D^{-1} T$, where $T$ is a lower-triangular matrix with ones on its diagonal and $D$ is a diagonal matrix with positive entries. While $L(\Omega)$ is convex in $\Omega$, the negative log-likelihood $L(T^T D^{-1} T)$ is not jointly convex in $T$ and $D$. By contrast,
$$L(L^T L) = -\log\det(L^T L) + \frac{1}{n}\sum_{i=1}^n x_i^T L^T L x_i = -2\sum_{r=1}^p \log L_{rr} + \frac{1}{n}\sum_{i=1}^n \|L x_i\|_2^2 \tag{4}$$
is convex in $L$. This parametrization is considered in Aragam and Zhou (2015), Khare et al. (2014), and Khare et al. (2016). Maximum likelihood estimation of $L$ preserves the regression interpretation by noting that
$$L(L^T L) = \sum_{r=1}^p \left\{ -2\log L_{rr} + \frac{L_{rr}^2}{n}\sum_{i=1}^n \Big( x_{ir} + \sum_{k=1}^{r-1} L_{rk}x_{ik}/L_{rr} \Big)^2 \right\}.$$
This connection has motivated previous work with the modified Cholesky decomposition, in which $T_{rk} = L_{rk}/L_{rr}$ are the coefficients of a linear model in which $X_r$ is regressed on its predecessors, and $D_{rr} = L_{rr}^{-2}$ corresponds to the error variance.

The second key choice is our use of a hierarchical group lasso in place of the nested lasso's non-convex penalty.

We introduce here some notation used throughout the paper. For two sequences of constants $a(n)$ and $b(n)$, the notation $a(n) = o(b(n))$ means that for every $\varepsilon > 0$, there exists a constant $N > 0$ such that $|a(n)/b(n)| \le \varepsilon$ for all $n \ge N$. The notation $a(n) = O(b(n))$ means that there exist constants $N > 0$ and $M > 0$ such that $|a(n)/b(n)| \le M$ for all $n \ge N$. For a sequence of random variables $A(n)$, the notation $A(n) = O_P(b(n))$ means that for every $\varepsilon > 0$, there exists a constant $M > 0$ such that $P(|A(n)/b(n)| > M) \le \varepsilon$ for all $n$.
For a vector $v = (v_1, \dots, v_p) \in \mathbb{R}^p$, we define $\|v\|_1 = \sum_{j=1}^p |v_j|$, $\|v\|_2 = (\sum_{j=1}^p v_j^2)^{1/2}$, and $\|v\|_\infty = \max_j |v_j|$. For a matrix $M \in \mathbb{R}^{n\times p}$, we define the element-wise norms by two vertical bars. Specifically, $\|M\|_\infty = \max_{j,k} |M_{jk}|$, and the Frobenius norm is $\|M\|_F = (\sum_{j,k} M_{jk}^2)^{1/2}$. For $q \ge 1$, we define the matrix-induced (operator) $q$-norm by three vertical bars: $|||M|||_q = \max_{\|v\|_q = 1} \|Mv\|_q$. Important special cases include $|||M|||_2$, also known as the spectral norm, which is the largest singular value of $M$, as well as $|||M|||_1 = \max_k \sum_{j=1}^p |M_{jk}|$ and $|||M|||_\infty = \max_j \sum_{k=1}^p |M_{jk}|$. Note that $|||M|||_1 = |||M|||_\infty$ when $M$ is symmetric. Given a $p$-vector $v$, a $p \times p$ matrix $M$, and an index set $T$, let $v_T = (v_i)_{i\in T}$ be the $|T|$-subvector and $M_T$ the $p \times |T|$ submatrix with columns selected from $T$. Given a second index set $T'$, let $M_{TT'}$ be the $|T| \times |T'|$ submatrix with rows and columns of $M$ indexed by $T$ and $T'$, respectively. Finally, we use $L_r$ to denote the $r$-th row of $L$.

2. Estimator

For a given tuning parameter $\lambda \ge 0$, we define our estimator $\hat L$ to be a minimizer of the following penalized negative Gaussian log-likelihood:
$$\hat L \in \arg\min_{L:\ L_{rr} > 0,\ L_{rk} = 0 \text{ for } r < k} \left\{ -2\sum_{r=1}^p \log L_{rr} + \frac{1}{n}\sum_{i=1}^n \|L x_i\|_2^2 + \lambda P(L) \right\}, \tag{5}$$
where the penalty is a hierarchical group lasso,
$$P(L) = \sum_{r=2}^p \sum_{\ell=1}^{r-1} \left( \sum_{m=1}^{\ell} w_{\ell m}^2 L_{rm}^2 \right)^{1/2}, \tag{6}$$
built on the groups $g_{r,\ell} = \{L_{r1}, \dots, L_{r\ell}\}$, which are nested as $g_{r,1} \subseteq g_{r,2} \subseteq \cdots \subseteq g_{r,r-1}$, with positive weights $w_{\ell m}$. Since $w_{\ell m} > 0$, this nested structure always puts more penalty on those elements that are further away from the diagonal. Since the group lasso has the effect of setting to zero a subset of groups, this choice of groups ensures that whenever the elements in $g_{r,\ell}$ are set to zero, the elements in $g_{r,\ell'}$ are also set to zero for all $\ell' \le \ell$. In other words, for each row of $\hat L$, the non-zeros are those elements within some (row-specific) distance of the diagonal. This is in contrast to the $\ell_1$-penalty as used in Khare et al. (2016), which produces sparsity patterns with no particular structure (compare the top-left and top-right panels of Figure 10).

The choice of weights $w_{\ell m}$ affects both the empirical and theoretical performance of the estimator. We focus primarily on a quadratically decaying set of weights,
$$w_{\ell m} = \frac{1}{(\ell - m + 1)^2}, \tag{7}$$
but also consider the unweighted case (in which $w_{\ell m} = 1$). The decay counteracts the fact that the elements of $L$ appear in differing numbers of groups (for example, $L_{r1}$ appears in $r-1$ groups whereas $L_{r,r-1}$ appears in just one group). In a related problem, Bien et al. (2016) choose weights that decay more slowly with $\ell - m$ than (7). Our choice makes the enforcement of hierarchy weaker, so that our penalty behaves more like the lasso penalty (Tibshirani, 1996). The weight sequence (7) is more amenable to theoretical analysis; in practice, however, the unweighted case can be implemented more efficiently and works well empirically.

Problem (5) is convex in $L$. While $-\log\det(\cdot)$ is strictly convex, $-\sum_r \log(L_{rr})$ is not strictly convex in $L$. Thus, the arg min in (5) may not be unique. In Section 4, we provide sufficient conditions that ensure uniqueness with high probability. In Appendix A, we show that (5) decouples into $p$ independent subproblems, each of which estimates one row of $L$. More specifically, let $X \in \mathbb{R}^{n\times p}$ be a sample matrix with independent rows $x_i \sim N(0, \Sigma)$. Then $\hat L_{11} = n^{1/2}(X_1^T X_1)^{-1/2}$ and, for $r = 2, \dots, p$,
$$\hat L_{r,1:r} = \arg\min_{\beta\in\mathbb{R}^r:\ \beta_r > 0} \left\{ -2\log\beta_r + \frac{1}{n}\|X_{1:r}\beta\|_2^2 + \lambda \sum_{\ell=1}^{r-1}\left( \sum_{m=1}^{\ell} w_{\ell m}^2 \beta_m^2 \right)^{1/2} \right\}. \tag{8}$$
This observation means that the computation can be easily parallelized, potentially achieving a linear speed-up in the number of CPU cores. Theoretically, to analyze the properties of $\hat L$ it is easier to start by studying an estimator of each row, i.e., a solution to (8).
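As a concrete reading of (8), the following R sketch (ours; illustrative only, not the varband implementation) evaluates the row-$r$ objective at a candidate $\beta$ under the quadratically decaying weights (7); `X1r` denotes the first $r$ columns of the sample matrix.

```r
# Evaluate the objective of the row subproblem (8) at a candidate beta
# (length r, with beta[r] > 0), under the weights w_lm = 1/(l - m + 1)^2.
row_objective <- function(beta, X1r, lambda) {
  r <- length(beta)
  loss <- -2 * log(beta[r]) + sum((X1r %*% beta)^2) / nrow(X1r)
  pen <- 0
  for (l in seq_len(r - 1)) {            # nested groups g_{r,1}, ..., g_{r,r-1}
    m <- seq_len(l)
    w <- 1 / (l - m + 1)^2               # weights (7)
    pen <- pen + sqrt(sum(w^2 * beta[m]^2))
  }
  loss + lambda * pen
}
```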
We will see in Section 4 that problem (8) has connections to a penalized regression problem, meaning that both the assumptions and the results we can derive are better than if we were working with a penalty based on $\Omega$.

Figure 1: There are $\binom{p}{2}$ groups used in the penalty, with each row $r$ having $r-1$ nested groups $g_{r,1} \subseteq g_{r,2} \subseteq \cdots \subseteq g_{r,r-1}$. Left: the group $g_{4,3}$. Middle: the nested group structure $g_{4,1} \subseteq g_{4,2} \subseteq g_{4,3}$. Right: a possible sparsity pattern in $\hat L$, where the elements in $g_{2,1}$, $g_{4,2}$ (and thus $g_{4,1}$), and $g_{5,1}$ are set to zero. (The three panels display $5\times 5$ lower-triangular matrices.)

In light of the regression interpretation of (1), $\hat L$ provides an interpretable notion of local dependence; however, we can of course also use our estimate of $L$ to estimate $\Omega$: $\hat\Omega = \hat L^T \hat L$. By construction, this estimator is both symmetric and positive definite. Unlike a lasso penalty, which would induce unstructured sparsity in the estimate of $L$ and thus would not be guaranteed to produce a sparse estimate of $\Omega$, the adaptively banded structure in our estimator of $L$ can yield a generally banded $\hat\Omega$ with sparsity pattern determined by (3) (see the top-left and bottom-left panels in Figure 10 for an example).

3. Computation

As observed above, we can compute $\hat L$ by solving (in parallel across $r$) problem (8). Consider an alternating direction method of multipliers (ADMM) approach that solves the equivalent problem
$$\min_{\beta,\gamma\in\mathbb{R}^r:\ \beta_r > 0}\ \left\{ -2\log\beta_r + \frac{1}{n}\|X_{1:r}\beta\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\left(\sum_{m=1}^{\ell} w_{\ell m}^2\gamma_m^2\right)^{1/2} \right\} \quad\text{subject to } \beta = \gamma.$$
Algorithm 1 presents the ADMM algorithm, which repeatedly minimizes this problem's augmented Lagrangian over $\beta$, then over $\gamma$, and then updates the dual variable $u\in\mathbb{R}^r$.

Algorithm 1 ADMM algorithm to solve (8)
Require: $\beta^{(0)}$, $\gamma^{(0)}$, $u^{(0)}$, $\rho > 0$, $t = 1$.
1: repeat
2: $\beta^{(t)} \leftarrow \arg\min_{\beta\in\mathbb{R}^r:\ \beta_r>0} \left\{ -2\log\beta_r + \frac1n\|X_{1:r}\beta\|_2^2 + (\beta - \gamma^{(t-1)})^T u^{(t-1)} + \frac{\rho}{2}\|\beta - \gamma^{(t-1)}\|_2^2 \right\}$ (9)
3: $\gamma^{(t)} \leftarrow \arg\min_{\gamma\in\mathbb{R}^r} \left\{ \frac{\rho}{2}\|\gamma - \beta^{(t)} - \rho^{-1}u^{(t-1)}\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\left(\sum_{m=1}^{\ell} w^2_{\ell m}\gamma_m^2\right)^{1/2} \right\}$ (10)
4: $u^{(t)} \leftarrow u^{(t-1)} + \rho(\beta^{(t)} - \gamma^{(t)})$
5: $t \leftarrow t + 1$
6: until convergence
7: return $\gamma^{(t)}$

The main computational effort in the algorithm is in solving (9) and (10). Note that (9) has a smooth objective function. Straightforward calculus gives the closed-form solution (see Appendix B for a detailed derivation),
$$\beta_r^{(t)} = \frac{-B - \sqrt{B^2 - 8A}}{2A} > 0, \qquad \beta_{-r}^{(t)} = -\left(2S^{(r)}_{-r,-r} + \rho I\right)^{-1}\left(2S^{(r)}_{-r,r}\beta_r^{(t)} + u^{(t-1)}_{-r} - \rho\gamma^{(t-1)}_{-r}\right),$$
where $S^{(r)} = \frac1n X_{1:r}^T X_{1:r}$,
$$A = 4S^{(r)}_{r,-r}\left(2S^{(r)}_{-r,-r} + \rho I\right)^{-1} S^{(r)}_{-r,r} - 2S^{(r)}_{r,r} - \rho < 0,$$
and
$$B = 2S^{(r)}_{r,-r}\left(2S^{(r)}_{-r,-r} + \rho I\right)^{-1}\left(u^{(t-1)}_{-r} - \rho\gamma^{(t-1)}_{-r}\right) - u^{(t-1)}_r + \rho\gamma^{(t-1)}_r.$$
The closed-form update above involves a matrix inversion. With $\rho > 0$, the matrix $2S^{(r)}_{-r,-r} + \rho I$ is invertible even when $r > n$. Since determining a good choice of the ADMM parameter $\rho$ is in general difficult, we adopt the dynamic $\rho$-updating scheme described in Section 3.4.1 of Boyd et al. (2011).

Solving (10) requires evaluating the proximal operator of the hierarchical group lasso with general weights. We adopt the strategy developed in Bien et al. (2016) (based on a result of Jenatton et al., 2011), which solves the dual problem of (10) by performing Newton's method on at most $r-1$ univariate functions. The detailed implementation is given in Algorithm 3 in Appendix C. Each application of Newton's method corresponds to performing an elliptical projection, which is a step of blockwise coordinate ascent on the dual of (10) (see Appendix D for details). Finally, we observe in Algorithm 2 that for the unweighted case ($w_{\ell m} = 1$), solving (10) is remarkably efficient.
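For the unweighted case, our reading of Algorithm 2 (reproduced below) is a single pass of group soft-thresholding over the nested groups, proceeding from the innermost group $g_{r,1}$ outward, in the spirit of the tree-structured proximal result of Jenatton et al. (2011). A minimal R sketch under that reading, with illustrative names:

```r
# gamma-update (10) for w_lm = 1: one pass of group soft-thresholding over
# the nested groups g_{r,1} subset ... subset g_{r,r-1}.
prox_nested_unweighted <- function(beta_t, u_prev, lambda, rho) {
  gamma <- beta_t + u_prev / rho
  tau <- lambda / rho
  r <- length(gamma)
  for (l in seq_len(r - 1)) {
    g <- gamma[seq_len(l)]
    nrm <- sqrt(sum(g^2))
    gamma[seq_len(l)] <- if (nrm > tau) (1 - tau / nrm) * g else rep(0, l)
  }
  gamma
}
```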
The R package varband provides C++ implementations of Algorithms 1 and 2.

Algorithm 2 Algorithm for solving (10) for the unweighted estimator
Require: $\beta^{(t)}, u^{(t-1)} \in \mathbb{R}^r$, $\lambda$, $\rho > 0$.
1: Initialize $\gamma^{(t)} = \beta^{(t)} + u^{(t-1)}/\rho$ and $\tau = \lambda/\rho$
2: for $\ell = 1, \dots, r-1$ do
3: $\quad \gamma^{(t)}_{1:\ell} \leftarrow \left(1 - \tau/\|\gamma^{(t)}_{1:\ell}\|_2\right)_+ \gamma^{(t)}_{1:\ell}$
4: end for
5: return $\gamma^{(t)}$

4. Statistical Properties

In this section we study the statistical properties of our estimator. In what follows, we consider a lower triangular matrix $L$ having row-specific bandwidths $K_r$. The first $J_r = r - 1 - K_r$ elements of row $r$ are zero, and the band of non-zero off-diagonals (of size $K_r$) is indexed by $I_r = \{J_r + 1, \dots, r-1\}$. We also denote $I^c_r = \{1, 2, \dots, r\} \setminus I_r$. See Figure 2 for a graphical example of $K_5$, $J_5$, $I_4$, and $I^c_4$.

Figure 2: Schematic showing $J_r$, $K_r$, $I_r$, and $I^c_r$ on a $5\times 5$ lower-triangular matrix; in the example shown, $I_4 = \{2, 3\}$, $I^c_4 = \{1, 4\}$, $J_5 = 2$, and $K_5 = 2$.

Our theoretical analysis is built on the following assumptions:

A1 Gaussian assumption: The sample matrix $X\in\mathbb{R}^{n\times p}$ has $n$ independent rows, with each row $x_i$ drawn from $N(0,\Sigma)$.

A2 Sparsity assumption: The true Cholesky factor $L\in\mathbb{R}^{p\times p}$ is a lower triangular matrix with positive diagonal elements such that the precision matrix $\Omega = \Sigma^{-1} = L^T L$. The matrix $L$ has row-specific bandwidths $K_r$ such that $L_{rj} = 0$ for $0 < j < r - K_r$.

A3 Irrepresentable condition: There exists some $\alpha\in(0,1]$ such that
$$\max_{2\le r\le p}\ \max_{\ell\in I^c_r} \left\| \Sigma_{\ell I_r}(\Sigma_{I_rI_r})^{-1} \right\|_1 \le \frac{6}{\pi^2}(1-\alpha).$$

A4 Bounded singular values: There exists a constant $\kappa$ such that
$$0 < \kappa^{-1} \le \sigma_{\min}(L) \le \sigma_{\max}(L) \le \kappa.$$

When $\max_r K_r < n$, the Gaussian assumption A1 implies that $X_{I_r}$ has full column rank for all $r$ with probability one. Our analysis applies to the general high-dimensional scaling scheme in which $K_r = K_r(n)$ and $p = p(n)$ can grow with $n$. For $r = 2,\dots,p$ and $\ell\in I^c_r = \{1,\dots,J_r, r\}$, let
$$\theta^{(\ell)}_r := \mathrm{Var}\left(X_\ell \mid X_{I_r}\right) \quad\text{and}\quad \theta_r := \max_{\ell\in I^c_r} \theta^{(\ell)}_r.$$
By Assumption A1, $\theta^{(\ell)}_r = \Sigma_{\ell\ell} - \Sigma_{\ell I_r}(\Sigma_{I_rI_r})^{-1}\Sigma_{I_r\ell}$ represents the noise variance when regressing $X_\ell$ on $X_{I_r}$, i.e., for $\ell = 1, \dots, J_r, r$,
$$X_\ell = \Sigma_{\ell I_r}(\Sigma_{I_rI_r})^{-1}X_{I_r} + E_\ell \quad\text{with}\quad E_\ell \sim N\big(0, \theta^{(\ell)}_r\big). \tag{11}$$
In words, $\theta^{(\ell)}_r$ measures the degree to which $X_\ell$ cannot be explained by the variables in the support, and $\theta_r$ is the maximum such value over all $\ell$ outside of the support $I_r$ in the $r$-th row. Intuitively, the difficulty of the estimation problem increases with $\theta_r$. Note that for $r = 1,\dots,p$, (1) implies $\theta^{(r)}_r = 1/L^2_{rr}$.

Assumption A3 (along with the $\beta_{\min}$ condition) is essentially a necessary and sufficient condition for support recovery of lasso-type methods (see, e.g., Zhao and Yu, 2006; Meinshausen and Bühlmann, 2006; Wainwright, 2009; Van de Geer and Bühlmann, 2009; Ravikumar et al., 2011). The constant $\alpha\in(0,1]$ is usually referred to as the irrepresentable (incoherence) constant (Wainwright, 2009). Intuitively, the irrepresentable condition requires low correlations between signal and noise predictors, so a value of $\alpha$ close to 1 implies that recovering the support is easier. The constant $6\pi^{-2}$ is determined by the choice of weights (7) and can be eliminated by absorbing its reciprocal into the definition of the weights $w_{\ell m}$. Doing so, one finds that our irrepresentable condition is essentially the same as the one found in the regression setting (Wainwright, 2009), despite the fact that our goal is estimating a precision matrix. Assumption A4 is a bounded singular value condition. Recalling that $\Omega = L^T L$, A4 implies
$$0 < \kappa^{-2} \le \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) \le \kappa^2, \tag{12}$$
which is equivalent to the commonly used bounded eigenvalue condition in the literature.
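The quantities $\theta^{(\ell)}_r$ are simple functions of $\Sigma$. The following R sketch (ours, for intuition; it assumes $I_r$ is non-empty) computes them directly from the definition.

```r
# theta_r^{(l)} = Var(X_l | X_{I_r})
#              = Sigma[l, l] - Sigma[l, Ir] %*% solve(Sigma[Ir, Ir]) %*% Sigma[Ir, l]
theta_r <- function(Sigma, Ir, Icr) {
  SII <- Sigma[Ir, Ir, drop = FALSE]
  sapply(Icr, function(l) {
    as.numeric(Sigma[l, l] -
      Sigma[l, Ir, drop = FALSE] %*% solve(SII, Sigma[Ir, l]))
  })
}
# theta_r in the text is then max(theta_r(Sigma, Ir, Icr)).
```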
4.1 Row-Specific Results

We start by analyzing support recovery properties of our estimator for each row, i.e., of the solution to the subproblem (8). For $r > n$, the Hessian of the negative log-likelihood is not positive definite, meaning that the objective function may not be strictly convex in $\beta$ and the solution is not necessarily unique. Intuitively, if the tuning parameter $\lambda$ is large, the resulting row estimate $\hat L_r$ is sparse and thus concentrates most variation in a small subset of the $r$ variables. More specifically, for large $\lambda$, $\hat I_r \subseteq I_r$ and thus, by Assumption A1, $X_{\hat I_r}$ has full column rank, which implies that $\hat L_r$ is unique. The series of technical lemmas in Appendix E precisely characterizes the solution.

The first part of the theorem below shows that with an appropriately chosen tuning parameter $\lambda$ the solution to (8) is sparse enough to be unique and does not overestimate the true bandwidth. Knowing that the support of the unique row estimator $\hat L_r$ is contained in the true support reduces the dimension of the parameter space, and thus leads to a reasonable error bound. Of course, if our goal were simply to establish the uniqueness of $\hat L_r$ and that $\hat K_r \le K_r$, we could trivially take $\lambda = \infty$ (resulting in $\hat K_r = 0$). The latter part of the theorem thus goes on to provide a choice of $\lambda$ that is sufficiently small to guarantee that $\hat K_r = K_r$ (and, furthermore, that the signs of all non-zeros are correctly recovered).

Theorem 1 Consider the family of tuning parameters given by (13) and weights given by (7). Under Assumptions A1-A4, if the tuple $(n, J_r, K_r)$ satisfies
$$n > \alpha^{-2}\left(3\pi^2 K_r + 8\right)\theta_r\kappa^2\log J_r, \tag{14}$$
then with probability greater than $1 - c_1\exp\{-c_2\min(K_r, \log J_r)\} - 7\exp(-c_3 n)$ for some constants $c_1, c_2, c_3$ independent of $n$ and $J_r$, the following properties hold:

1. The row problem (8) has a unique solution $\hat L_r$, and $\hat K_r \le K_r$.

2. The estimate $\hat L_r$ satisfies the element-wise $\ell_\infty$ bound,
$$\|\hat L_r - L_r\|_\infty \le \lambda\left( 4\,\big|\!\big|\!\big|(\Sigma_{I_rI_r})^{-1}\big|\!\big|\!\big|_\infty + 5\kappa^2 \right). \tag{15}$$

3. If, in addition,
$$\min_{j\ge J_r+1} |L_{rj}| > \lambda\left( 4\,\big|\!\big|\!\big|(\Sigma_{I_rI_r})^{-1}\big|\!\big|\!\big|_\infty + 5\kappa^2 \right), \tag{16}$$
then exact signed support recovery holds: for all $j \le r$, $\mathrm{sign}(\hat L_{rj}) = \mathrm{sign}(L_{rj})$.

Proof See Appendix F.

In the classical setting, where the ambient dimension $r$ is fixed and the sample size $n$ is allowed to go to infinity, $\lambda \to 0$ and the above scaling requirement is satisfied. By (15), the row estimator $\hat L_r$ is consistent, as is the classical maximum likelihood estimator. Moreover, it recovers the true support, since (16) holds automatically. In high-dimensional scaling, however, both $n$ and $r$ are allowed to change, and we are interested in the case where $r$ can grow much faster than $n$. Theorem 1 shows that if $|||(\Sigma_{I_rI_r})^{-1}|||_\infty = O(1)$ and if $n$ can grow as fast as $K_r\log J_r$, then the row estimator $\hat L_r$ still recovers the exact support of $L_r$ when the signal is at least $O(\sqrt{\log J_r/n})$ in size, and the estimation error satisfies $\max_j |\hat L_{rj} - L_{rj}| = O_P(\sqrt{\log J_r/n})$. Intuitively, for the row estimator to detect the true support, we require that the true signal be sufficiently large. The condition (16) imposes limitations on how fast the signal is allowed to decay, which is the analogue of the commonly known $\beta_{\min}$ condition assumed for establishing support recovery of the lasso.

Remark 2 Both the choice of tuning parameter (13) and the error bound (15) depend on the true covariance matrix via $\theta_r$. This quantity can be bounded by $\kappa^2$ as in (12), using the fact that $(\Sigma_{I_rI_r})^{-1}$ is positive definite:
$$\theta_r = \max_{\ell\in I^c_r}\theta^{(\ell)}_r = \max_{\ell\in I^c_r}\left\{ \Sigma_{\ell\ell} - \Sigma_{\ell I_r}(\Sigma_{I_rI_r})^{-1}\Sigma_{I_r\ell} \right\} \le \max_{\ell\in I^c_r}\Sigma_{\ell\ell} \le \kappa^2.$$
The proof of Theorem 1 shows that the results in this theorem still hold if we replace $\theta_r$ by $\kappa^2$. This observation implies that we can select a tuning parameter having the properties of the theorem that does not depend on the unknown sparsity level $K_r$. Therefore, our estimator is adaptive to the underlying unknown bandwidths.

4.1.1 Connections to the Regression Setting

In (1) we showed that estimation of the $r$-th row of $L$ can be interpreted as a regression of $X_r$ on its predecessors. It is thus natural to compare Theorem 1 to standard high-dimensional regression results. Consider the linear model for a vector $y\in\mathbb{R}^n$ of the form
$$y = Z\eta + \omega, \qquad \omega\sim N(0, \sigma^2 I_n), \tag{17}$$
where $\eta\in\mathbb{R}^p$ is the unknown but fixed parameter to estimate, $Z\in\mathbb{R}^{n\times p}$ is the design matrix with each row an observation of the $p$ predictors, and $\sigma^2$ is the variance of the zero-mean additive noise $\omega$. A standard approach in the high-dimensional setting where $p \gg n$ is the lasso (Tibshirani, 1996), which solves the convex optimization problem
$$\min_{\eta\in\mathbb{R}^p}\ \frac{1}{2n}\|y - Z\eta\|_2^2 + \lambda\|\eta\|_1, \tag{18}$$
where $\lambda > 0$ is a regularization parameter. In the setting where $\eta$ is assumed to be sparse, the lasso solution is known to successfully recover the signed support of the true $\eta$ with high probability when $\lambda$ is of the scale $\sigma\sqrt{\log p/n}$ and certain technical conditions are satisfied (Wainwright, 2009).

Despite the added complication of working with the log term in the objective of (8), Theorem 1 gives a clear indication that, in terms of difficulty of support recovery, the row estimation problem (8) is essentially the same as a lasso problem with random design, i.e., with each row $z_i\sim N(0,\Sigma)$ (Theorem 3, Wainwright, 2009). Indeed, a comparison shows that the two irrepresentable conditions are equivalent. Moreover, $\theta_r$ plays the same role as Wainwright (2009)'s $\max_i\left[\Sigma_{S^cS^c} - \Sigma_{S^cS}(\Sigma_{SS})^{-1}\Sigma_{SS^c}\right]_{ii}$, a threshold constant of the conditional covariance, where $S$ is the support of the true $\eta$.

Städler et al. (2010) introduce an alternative approach to the lasso, in the context of penalized mixture regression models, that solves the optimization problem
$$(\hat\phi, \hat\rho) = \arg\min_{\phi,\rho}\left\{ -2\log\rho + \frac{1}{n}\|\rho y + Z\phi\|_2^2 + \lambda\|\phi\|_1 \right\}, \tag{19}$$
where $\hat\sigma = \hat\rho^{-1}$ and $\hat\eta = -\hat\phi/\hat\rho$. Note that (19) basically coincides with (8) except for the penalty.

In Städler et al. (2010), the authors study the asymptotic and non-asymptotic properties of the $\ell_1$-penalized estimator for general mixture regression models, where the loss functions are non-convex. The theoretical properties of (19) are studied in Sun and Zhang (2010), which partly motivates the scaled lasso (Sun and Zhang, 2012). The theoretical work of Sun and Zhang (2010) differs from ours both in that they study the $\ell_1$ penalty (instead of the hierarchical group lasso) and in their assumptions. The nature of our problem requires the sample matrix to be random (as in A1), while Sun and Zhang (2010) consider the fixed design setting, which does not apply in our context. Moreover, they provide prediction consistency and a deviation bound for estimation of the regression parameters in $\ell_1$ norm, whereas we give exact signed support recovery results for the regression parameters as well as estimation deviation bounds in various norm criteria. Also, they take an asymptotic point of view, while we give finite-sample results.

4.2 Matrix Bandwidth Recovery Result

With the properties of the row estimators in place, we are ready to state results about estimation of the matrix $L$. The following theorem gives an analogue to Theorem 1 in the matrix setting.
Under similar conditions, with one particular choice of tuning parameter, the estimator recovers the true bandwidths of all rows adaptively with high probability.

Theorem 3 Let $\theta = \max_r\theta_r$ and $K = \max_r K_r$, and take the tuning parameter given by (20) and weights given by (7). Under Assumptions A1-A4, if $(n, p, K)$ satisfies
$$n > \alpha^{-2}\theta\kappa^2\left(12\pi^2 K + 32\right)\log p, \tag{21}$$
then with probability greater than $1 - cp^{-1}$ for some constant $c$ independent of $n$ and $p$, the following properties hold:

1. The estimator $\hat L$ is unique, and it is at least as sparse as $L$, i.e., $\hat K_r \le K_r$ for all $r$.

2. The estimator $\hat L$ satisfies the element-wise $\ell_\infty$ bound,
$$\|\hat L - L\|_\infty \le \lambda\left( 4\max_r\big|\!\big|\!\big|(\Sigma_{I_rI_r})^{-1}\big|\!\big|\!\big|_\infty + 5\kappa^2 \right). \tag{22}$$

3. If, in addition,
$$\min_r\ \min_{j\ge J_r+1}|L_{rj}| > \lambda\left( 4\max_r\big|\!\big|\!\big|(\Sigma_{I_rI_r})^{-1}\big|\!\big|\!\big|_\infty + 5\kappa^2 \right), \tag{23}$$
then exact signed support recovery holds: $\mathrm{sign}(\hat L_{rj}) = \mathrm{sign}(L_{rj})$ for all $r$ and $j$.

Proof See Appendix G.

As discussed in Remark 2, we can replace $\theta$ with its upper bound $\kappa^2$, and the results remain true. This theorem shows that one can estimate the sparsity pattern across all rows exactly using only one tuning parameter, chosen without any prior knowledge of the true bandwidths. In Section 4.1.1, we noted that the conditions required for support recovery and the element-wise $\ell_\infty$ error bound for estimating a row of $L$ are similar to those of the lasso in the regression setting. A union bound argument allows us to translate this into exact bandwidth recovery in the matrix setting and to derive a reasonable convergence rate under conditions as mild as those of a lasso problem with random design. This technique is similar in spirit to neighborhood selection (Meinshausen and Bühlmann, 2006), though our approach is likelihood-based. Comparing (21) to (14), we see that the sample size requirement for recovering $L$ is determined by the least sparse row. While intuitively one would expect the matrix problem to be harder than any single row problem, we see that in fact the two problems are basically of the same difficulty (up to a multiplicative constant).

In the setting where variables exhibit a natural ordering, Shojaie and Michailidis (2010) proposed a penalized likelihood framework like ours to estimate the structure of directed acyclic graphs (DAGs). Their method focuses on variables that are standardized to have unit variance. In this special case, the penalized likelihood does not involve the log-determinant term, and under assumptions similar to ours, they proved support recovery consistency. However, they use lasso and adaptive lasso (Zou, 2006) penalties, which do not have the built-in notion of local dependence. Since these $\ell_1$-type penalties do not induce structured sparsity in the Cholesky factor, the resulting precision matrix estimate is not necessarily sparse. By contrast, our method does not assume unit variances and learns an adaptively banded structure for $\hat L$ that leads to a sparse $\hat\Omega$ (thereby encoding conditional dependencies).

To study the difference between the ordered and non-ordered problems, we compare our method with Ravikumar et al. (2011), who studied the graphical lasso estimator in a general setting where variables are not necessarily ordered. Let $S$ index the edges of the graph specified by the sparsity pattern of $\Omega = \Sigma^{-1}$. Their sparsity recovery result and convergence rate are established under an irrepresentable condition imposed on $\Gamma = \Sigma\otimes\Sigma \in \mathbb{R}^{p^2\times p^2}$:
$$\max_{e\in S^c}\left\| \Gamma_{eS}(\Gamma_{SS})^{-1} \right\|_1 \le 1 - \alpha \tag{24}$$
for some $\alpha\in(0,1]$.
Our Assumption A3 constrains each variable through the entries of the true covariance $\Sigma$, while (24) imposes such a condition on the edge variables $Y_{(j,k)} = X_jX_k - E(X_jX_k)$, resulting in a vector $\ell_1$-norm restriction on a much larger matrix $\Gamma$, which can be more restrictive for large $p$. More specifically, condition (24) arises in Ravikumar et al. (2011) to tackle the analysis of the $\log\det\Omega$ term in the graphical lasso problem. By contrast, in our setting the parameterization in terms of $L$ means that the log-determinant term is simply a sum of log terms on the diagonal elements and is thus easier to deal with, leading to the milder irrepresentable assumption. Another difference is that they require the sample size $n > c\,\kappa_\Gamma^2 d^2\log p$ for some constant $c$. The quantity $d$ measures the maximum number of non-zero elements in each row of the true $\Omega$, which in our case is $2K+1$, and $\kappa_\Gamma = |||(\Gamma_{SS})^{-1}|||_\infty$ can be much larger than $\kappa^2$. Thus, comparing to (21), one finds that their sample size requirement is much more restrictive. A similar comparison could also be made with the lasso-penalized D-trace estimator (Zhang and Zou, 2014), whose irrepresentable condition involves $\Gamma = (\Sigma\otimes I + I\otimes\Sigma)/2 \in \mathbb{R}^{p^2\times p^2}$. Of course, the results in both Ravikumar et al. (2011) and Zhang and Zou (2014) apply to estimators invariant to permutation of the variables; additionally, the random vector only needs to satisfy an exponential-type tail condition.

4.3 Precision Matrix Estimation Consistency

Although our primary target of interest is $L$, the parameterization $\Omega = L^TL$ makes it natural to connect our results for estimating $L$ with the vast literature on directly estimating $\Omega$, which is the standard estimation target when a known ordering is not available. In this section, we consider the estimation consistency of $\Omega$ using the results we obtained for $L$. The following theorem shows how well $\hat\Omega = \hat L^T\hat L$ performs in estimating the true precision matrix $\Omega = L^TL$ in terms of various matrix norm criteria.

Theorem 4 Let $\theta = \max_r\theta_r$, $K = \max_r K_r$, and let $s = \sum_r K_r$ denote the total number of non-zero off-diagonal elements in $L$. Define
$$\zeta_\Sigma = \frac{8\sqrt{2\theta}}{\alpha}\left( 4\max_r\big|\!\big|\!\big|(\Sigma_{I_rI_r})^{-1}\big|\!\big|\!\big|_\infty + 5\kappa^2 \right).$$
Under the assumptions of Theorem 3, the following deviation bounds hold with probability greater than $1 - cp^{-1}$ for some constant $c$ independent of $n$ and $p$:
$$\|\hat\Omega - \Omega\|_\infty \le 2\zeta_\Sigma|||L|||_\infty\sqrt{\frac{\log p}{n}} + \zeta^2_\Sigma(K+1)\frac{\log p}{n},$$
$$|||\hat\Omega - \Omega|||_\infty \le 2\zeta_\Sigma|||L|||_\infty(K+1)\sqrt{\frac{\log p}{n}} + \zeta^2_\Sigma(K+1)^2\frac{\log p}{n},$$
$$|||\hat\Omega - \Omega|||_2 \le 2\zeta_\Sigma|||L|||_\infty(K+1)\sqrt{\frac{\log p}{n}} + \zeta^2_\Sigma(K+1)^2\frac{\log p}{n},$$
$$\|\hat\Omega - \Omega\|_F \le 2\kappa\zeta_\Sigma\sqrt{\frac{(s+p)\log p}{n}} + \zeta^2_\Sigma(K+1)\sqrt{s+p}\,\frac{\log p}{n}.$$

Proof See Appendix H.

When the quantities $\zeta_\Sigma$, $|||L|||_\infty$, and $\kappa$ are treated as constants, these bounds can be summarized more succinctly as follows.

Corollary 5 Using the notation and conditions of Theorem 4, if $\zeta_\Sigma$, $|||L|||_\infty$, and $\kappa$ remain constant, then the scaling $(K+1)^2\log p = o(n)$ is sufficient to guarantee the following estimation error bounds:
$$|||\hat\Omega - \Omega|||_2 = O_P\left((K+1)\sqrt{\frac{\log p}{n}}\right) \quad\text{and}\quad \|\hat\Omega - \Omega\|_F = O_P\left(\sqrt{\frac{(s+p)\log p}{n}}\right).$$

The conditions for these deviation bounds to hold are those required for support recovery, as in Theorem 3. In many cases where estimation consistency is of more interest than support recovery, we can still deliver the desired error rate in Frobenius norm, matching the rate derived in Rothman et al. (2008).
In particular, we can drop the strong irrepresentable assumption (A3) and weaken the Gaussian assumption (A1) to the following marginal sub-Gaussian assumption:

A5 Marginal sub-Gaussian assumption: The sample matrix $X\in\mathbb{R}^{n\times p}$ has $n$ independent rows, with each row drawn from the distribution of a zero-mean random vector $X = (X_1,\dots,X_p)^T$ with covariance $\Sigma$ and sub-Gaussian marginals, i.e., $E\exp(tX_j/\sqrt{\Sigma_{jj}}) \le \exp(Ct^2)$ for all $j = 1,\dots,p$, all $t \ge 0$, and some constant $C > 0$ that does not depend on $j$.

Theorem 6 Under Assumptions A2, A4, and A5, with tuning parameter $\lambda$ of scale $\sqrt{\log p/n}$ and weights as in (7), the scaling $(s+p)\log p = o(n)$ is sufficient for the following estimation error bounds in Frobenius norm to hold:
$$\|\hat L - L\|_F = O_P\left(\sqrt{\frac{(s+p)\log p}{n}}\right) \quad\text{and}\quad \|\hat\Omega - \Omega\|_F = O_P\left(\sqrt{\frac{(s+p)\log p}{n}}\right).$$

Proof See Appendix I.

The rates in Corollary 5 (and Theorem 6) essentially match the rates obtained by methods that directly estimate $\Omega$ (e.g., the graphical lasso estimator, studied in Rothman et al., 2008 and Ravikumar et al., 2011, and the column-by-column methods of Cai et al., 2011, Liu and Wang, 2012, and Sun and Zhang, 2013). However, an exact comparison of rates with these methods is not straightforward. First, the targets of interest are different. In the setting where the variables have a known ordering, we are more interested in the structural information among variables that is expressed in $L$, and thus accurate estimation of $L$ is more important. When such an ordering is not available, as considered in Rothman et al. (2008); Cai et al. (2011); Liu and Wang (2012) and so on, the conditional dependence structure encoded by the sparsity pattern of $\Omega$ is of more interest, and the accuracy of directly estimating $\Omega$ is the focus. Moreover, deviation bounds for different methods are built upon assumptions that treat different quantities as constants. Quantities assumed to remain constant in the analysis of one method might actually be allowed to scale with the ambient dimension in a nontrivial manner in another method, which makes direct rate comparisons among methods complicated and less illuminating.

Our analysis can be extended to the unweighted version of our estimator, i.e., with weights $w_{\ell m} = 1$, but under more restrictive conditions and with slower rates of convergence. Specifically, Assumption A3 becomes $\max_{\ell\in I^c_r}\|\Sigma_{\ell I_r}(\Sigma_{I_rI_r})^{-1}\|_1 \le (1-\alpha)/K_r$ for each $r = 2,\dots,p$. With the same tuning parameter choices (13) and (20), the terms $K_r$ and $K$ in the sample size requirements (14) and (21) are replaced with $K_r^2$ and $K^2$, respectively. The estimation error bounds in all norms are multiplied by an extra factor of $K$. All of the above indicates that in highly sparse situations (in which $K$ is very small), the unweighted estimator has very similar theoretical performance to the weighted estimator.

5. Simulation Study

In this section we study the empirical performance of our estimators (both with weights as in (7) and with no weights, i.e., $w_{\ell m} = 1$) on simulated data. For comparison, we include two other sparse precision matrix estimators designed for the ordered-variable case:

Non-Adaptive Banding (Bickel and Levina, 2008): This method estimates $L$ as a lower-triangular matrix with a fixed bandwidth $K$ applying across all rows. The regularization parameter used in this method is the fixed bandwidth $K$.

Nested Lasso (Levina et al., 2008): This method yields an adaptively banded structure by solving a set of penalized least-squares problems (both the loss function and the nested-lasso penalty are non-convex).
The regularization parameter controls the amount of penalization and thus the sparsity level of the resulting estimate.

All simulations are run with a sample size of $n = 100$, where each sample is drawn independently from the $p$-dimensional normal distribution $N(0, (L^TL)^{-1})$. We compare the performance of our estimators with the methods above both in terms of support recovery (in Section 5.1) and in terms of how well $\hat L$ estimates $L$ (in Section 5.2). For support recovery, we consider $p = 200$; for estimation accuracy, we consider $p = 50, 100, 200$, corresponding to settings where $p < n$, $p = n$, and $p > n$, respectively.

We simulate under the following models for $L$. We adopt the parameterization $L = D^{-1}T$ as in Khare et al. (2016), where $D$ is a diagonal matrix with diagonal elements drawn randomly from a uniform distribution on the interval $[2, 5]$, and $T$ is a lower-triangular matrix with ones on its diagonal and off-diagonal elements defined as follows:

Model 1: Model 1 is at one extreme of bandedness of the Cholesky factor $L$, in which we take the lower triangular matrix $L\in\mathbb{R}^{p\times p}$ to have a strictly banded structure, with each row having the same bandwidth $K_r = K = 1$. Specifically, we take $T_{r,r} = 1$, $T_{r,r-1} = 0.8$, and $T_{r,j} = 0$ for $j < r-1$.

Model 2: Model 2 is at the other extreme, in which we allow $K_r$ to vary with $r$. We take $T$ to be a block diagonal matrix with 5 blocks, each of size $p/5$. Within each block, with probability 0.5 each row $r$ is assigned a non-zero bandwidth drawn uniformly from $\{1, \dots, r-1\}$ (for $r > 1$). Each non-zero element of $T$ is then drawn independently from a uniform distribution on the interval $[0.1, 0.4]$ and assigned a positive or negative sign with probability 0.5 each.

Model 3: Model 3 is a denser, and thus more challenging, version of Model 2, with $T$ a block diagonal matrix with only 2 blocks. Each block is of size $p/2$ but is otherwise generated as in Model 2.

Figure 3: Schematic of four simulation scenarios with $p = 100$: (from left to right) Model 1 is strictly banded, Model 2 has small variable bandwidth, Model 3 has large variable bandwidth, and Model 4 is block-diagonal. Black, gray, and white stand for positive, negative, and zero entries, respectively. The proportion of non-zero elements is 4%, 6%, 15%, and 26%, respectively.

Model 4: Model 4 is a dense block diagonal model. The matrix $T$ has a completely dense lower-triangular block from the $p/4$-th row to the $3p/4$-th row and is zero everywhere else. Within this block, all off-diagonal elements are drawn uniformly from $[0.1, 0.2]$, and positive or negative signs are then assigned with probability 0.5 each.

Model 1 is a stationary autoregressive model of order 1. By the regression interpretation (1), it can be verified for each $r$ that the autoregressive polynomial of the $r$-th row of Models 2, 3, and 4 has all roots outside the unit circle, which characterizes stationary autoregressive models of orders equal to the corresponding row-wise bandwidths. See Figure 3 for examples of the four sparsity patterns with $p = 100$. The non-adaptive banding method should benefit from Model 1, while the nested lasso and our estimators are expected to perform better in the other three models, where each row has its own bandwidth.
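For instance, under our reading of the parameterization $L = D^{-1}T$, Model 1 can be generated with the following R sketch (ours; illustrative, not the authors' simulation code):

```r
# Model 1: strictly banded Cholesky factor with K_r = 1 and T[r, r-1] = 0.8.
set.seed(1)
p <- 100; n <- 100
Tmat <- diag(p)
for (r in 2:p) Tmat[r, r - 1] <- 0.8
L <- diag(1 / runif(p, 2, 5)) %*% Tmat   # L = D^{-1} T with D_rr ~ Unif[2, 5]
eps <- matrix(rnorm(n * p), n, p)
X <- t(apply(eps, 1, function(e) forwardsolve(L, e)))  # rows ~ N(0, (L^T L)^{-1})
```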
For all four models and every value of $p$ considered, we verified that Assumptions A3 and A4 hold and then simulated $n = 100$ observations from each of the four models in accordance with Assumption A1.

5.1 Support Recovery

We first study how well the different estimators identify zeros in the four models above. We generate $n = 100$ random samples from each model with $p = 200$. The tuning parameter $\lambda \ge 0$ in (5) controls the amount of regularization and determines the sparsity level of the estimator. We use 100 tuning parameter values for each estimator and repeat the simulation 10 times.

Figure 4 shows the sensitivity (fraction of true non-zeros that are correctly recovered) and specificity (fraction of true zeros that are correctly set to zero) of each method, parameterized by its tuning parameter (in the case of non-adaptive banding, the parameter is the bandwidth itself, ranging from 0 to $p-1$). Each set of 10 curves of the same color corresponds to the results of one estimator, and each curve within the set corresponds to the result of one draw from the 10 simulations. Curves closer to the upper-right corner indicate better classification performance (the $x + y = 1$ line corresponds to random guessing). The sparsity level of the non-adaptive banding estimator depends only on the prespecified bandwidth (which is the method's tuning parameter) and not on the data itself. Consequently, the sensitivity-specificity curves for non-adaptive banding do not vary across replications when simulating from a particular underlying model. The sparsity levels of the nested lasso and our methods, by contrast, hinge on the data, giving a different curve for each replication.

Figure 4: ROC curves showing support recovery when the true $L$ (top-left) is strictly banded, (top-right) has small variable bandwidth, (bottom-left) has large variable bandwidth, and (bottom-right) is block-diagonal, over 10 replications. (Each panel plots sensitivity against specificity for the weighted, unweighted, non-adaptive, and nested lasso estimators, with $n = 100$ and $p = 200$.)

In practice, we find that our methods and the nested lasso sometimes produce entries with very small, but non-zero, absolute values. To study support recovery, we set all estimated entries whose absolute values are below $10^{-10}$ to zero, both in our estimators and in the nested lasso.

In Model 1, we observe that all methods considered attain perfect classification accuracy for some value of their tuning parameter. While the non-adaptive approach is guaranteed to do so in this scenario, it is reassuring to see that the more flexible methods can still perfectly recover this sparsity pattern. In Model 2, we observe that our two methods outperform the nested lasso, which itself, as expected, outperforms the non-adaptive banding method. As the model becomes more challenging (from Model 2 to Model 4), the performance of all four methods starts to deteriorate.
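The two quantities plotted in Figure 4 can be computed as in the following minimal sketch (ours), using the $10^{-10}$ threshold just described:

```r
# Sensitivity and specificity of the recovered support of the strictly
# lower-triangular part of Lhat, relative to the true L.
support_metrics <- function(Lhat, L, tol = 1e-10) {
  off <- lower.tri(L)                 # strictly below the diagonal
  truth <- L[off] != 0
  est <- abs(Lhat[off]) > tol         # entries below tol are treated as zero
  c(sensitivity = mean(est[truth]),
    specificity = mean(!est[!truth]))
}
```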
Interestingly, the nested lasso no longer retains its advantage over non-adaptive banding in Models 3 and 4, while the performance advantage of our methods becomes even more substantial. The fact that the unweighted version of our method outperforms the weighted version stems from the fact that all models are comparatively sparse for $p = 200$, so the heavier penalty on each row delivered by the unweighted approach recovers the support more easily than the weighted version.

5.2 Estimation Accuracy

We proceed by comparing the estimators in terms of how far $\hat L$ is from $L$. To this end, we generate $n = 100$ random samples from the four models with $p = 50$, $p = 100$, and $p = 200$. Each method is computed with its tuning parameter selected to maximize the Gaussian likelihood on the validation data in a 5-fold cross-validation. For comparison, we report the estimation accuracy of each estimate in terms of the scaled Frobenius norm $\frac1p\|\hat L - L\|_F^2$, the matrix infinity norm $\|\hat L - L\|_\infty$, the spectral norm $|||\hat L - L|||_2$, and the (scaled) Kullback-Leibler loss
$$\frac1p\left[ \mathrm{tr}(\Omega^{-1}\hat\Omega) - \log\det(\Omega^{-1}\hat\Omega) - p \right]$$
(Levina et al., 2008). The simulation is repeated 50 times, and the results are summarized in Figures 5 through 8. Each figure corresponds to a model and consists of a 4-by-3 panel layout, in which each row of panels corresponds to an error measure and each column to a value of $p$.

As expected, the non-adaptive banding estimator does better than the other estimators in Model 1. In Models 2, 3, and 4, where bandwidths vary by row, our estimators and the nested lasso outperform non-adaptive banding. A pattern similar to that in support recovery is observed: as the model becomes more complex and $p$ gets larger, the performance of the nested lasso degrades and gradually becomes worse than non-adaptive banding. By contrast, as the estimation problem becomes more difficult, the performance advantage of our methods becomes more obvious.

We again observe that the unweighted estimator performs better than the weighted one. As shown in Section 4, the overall performance of our method hinges on the underlying model complexity (measured in terms of $\max_r K_r$) as well as the relative sizes of $n$ and $p$. When $n$ is relatively small, a more constrained method (like the unweighted estimator) is usually preferred over a more flexible method (like the weighted estimator), so in our simulation setting it is reasonable that the unweighted method works better. Note that as the underlying $L$ becomes denser (from Model 1 to Model 4), the performance difference between the weighted and the unweighted estimators diminishes. This corroborates our discussion at the end of Section 4 that the performance of the unweighted estimator degrades when the underlying model is dense.
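For reference, the four accuracy measures can be computed as in the sketch below (ours; the $1/p$ scaling of the squared Frobenius norm and the element-wise reading of the "matrix infinity norm" are our assumptions about the figure labels):

```r
# Accuracy measures used in Figures 5-8 for an estimate Lhat of the true L.
estimation_losses <- function(Lhat, L) {
  p <- nrow(L)
  Omega <- crossprod(L)                # Omega = L^T L
  M <- solve(Omega, crossprod(Lhat))   # Omega^{-1} %*% Omegahat
  c(frob = norm(Lhat - L, "F")^2 / p,  # scaled Frobenius norm
    inf  = max(abs(Lhat - L)),         # element-wise infinity norm
    spec = norm(Lhat - L, "2"),        # spectral norm
    kl   = (sum(diag(M)) - as.numeric(determinant(M)$modulus) - p) / p)
}
```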
Figure 5: Estimation accuracy when data are generated from Model 1, which is strictly banded. (Rows of panels: scaled Frobenius norm, spectral norm, matrix infinity norm, and scaled KL divergence; columns: $p = 50, 100, 200$.)

Figure 6: Estimation accuracy when data are generated from Model 2, which has small variable bandwidth. (Same panel layout as Figure 5.)

Figure 7: Estimation accuracy when data are generated from Model 3, which has large variable bandwidth. (Same panel layout as Figure 5.)
Figure 8: Estimation accuracy when data are generated from Model 4, which is block-diagonal. (Same panel layout as Figure 5.)

6. Applications to Data Examples

In this section, we illustrate the practical merits of our proposed method by applying it to two data examples. We start with an application to genomic data, where our method can help model the local correlations along the genome. In Section 6.2, we compare our method with other estimators within the context of a sound recording classification problem.

6.1 An Application to Genomic Data

We consider an application of our estimator to modeling correlation along the genome. Genetic mutations that occur close together on a chromosome are more likely to be coinherited than mutations that are located far apart (or on separate chromosomes). This leads to local correlations between genetic variants in a population. Biologists refer to this local dependence as linkage disequilibrium (LD). The width of this dependence is known to vary along the genome due to the variable locations of recombination hotspots, which suggests that adaptively banded estimators may be quite suitable in these contexts.

We study HapMap phase 3 data from the International HapMap Project (Consortium et al., 2010). The data consist of $n = 167$ humans from the YRI (Yoruba in Ibadan, Nigeria) population, and we focus on $p = 201$ consecutive tag SNPs on chromosome 22 (after filtering out infrequent sites with minor allele frequency below 10%). While tag SNP data, which take discrete values $\{0, 1, 2\}$, are non-Gaussian, we argue that our estimator is still sensible to use in this case. First, the parameterization $\Omega = L^TL$ does not depend on the Gaussian assumption. Moreover, the estimator corresponds to minimizing a penalized Bregman divergence of the log-determinant function (Ravikumar et al., 2011). Furthermore, the least-squares term in (5) can be interpreted as minimizing the prediction error in the linear models (1), while the log terms act as log-barrier functions that impose positive diagonal entries (ensuring that the resulting $\hat L$ is a valid Cholesky factor).

To gauge the performance of our estimator in modeling LD, we randomly split the 167 samples into training and testing sets of sizes 84 and 83, respectively. Along a path of tuning parameters with decreasing values, estimators $\hat L$ are computed on the training data.
To evaluate $\hat L$ on a vector $\tilde x$ from the test data set, we compute the error in predicting $\hat L_{rr}\tilde x_r$ by $-\sum_{k=1}^{r-1}\hat L_{rk}\tilde x_k$ via (1) for each $r$, giving the error
$$\mathrm{err}(\tilde x) = \frac1p\sum_{r=1}^p\left( \hat L_{rr}\tilde x_r + \sum_{k=1}^{r-1}\hat L_{rk}\tilde x_k \right)^2 \tag{25}$$
(a one-line implementation appears at the end of this subsection). This quantity (with mean and standard deviation over test samples) is reported in Figure 9 for our estimator under the two weighting schemes. Recall that the quadratically decaying weights (7) act essentially like the $\ell_1$ penalty. For numerical comparison, we also include the result of the estimator with the $\ell_1$ penalty, which is the CSCS (Convex Sparse Cholesky Selection) method proposed in Khare et al. (2016). For both the non-adaptive banding and the nested lasso methods, we found that their implementations fail to work due to the collinearity of the columns of $X$.

Figure 9 shows that our estimators are effective in improving modeling performance over a diagonal estimator (attained when $\lambda$ is sufficiently large) and strongly outperform the plain MLE (as evidenced by the sharp increase in prediction error as $\lambda \to 0$).

Figure 9: Prediction error (computed on an independent test set) of the weighted (left), unweighted (middle), and CSCS (right) estimators, plotted against the tuning parameter with one-standard-error bands.

As expected, the weighted estimator performs very similarly to the CSCS estimator, which uses the $\ell_1$ penalty. Both of these perform better than the unweighted one. However, the sparsity patterns obtained by the two penalties are different (as shown in Figure 10).

In Figure 10 we show the recovered signed support of the weighted, unweighted, and CSCS estimators and their corresponding precision matrices. Black, gray, and white stand for positive, negative, and zero entries, respectively. Tuning parameters are chosen using the one-standard-error rule (see, e.g., Hastie et al., 2009). The $r$-th row of the estimated matrix $\hat L$ reveals the number of neighboring SNPs necessary for reliably predicting the state of the $r$-th SNP. Interestingly, we see some evidence of small block-like structures in $\hat L$, consistent with the hotspot model of recombination described previously. This regression-based perspective on modeling LD may be a useful complement to the more standard approach, which focuses on raw marginal correlations. Finally, the sparsity recovered by the CSCS estimator, which uses the $\ell_1$ penalty, is less easily interpretable, since some entries far from the diagonal are non-zero, losing the notion of "local".

Figure 10: Estimates of linkage disequilibrium with tuning parameters selected by the one-standard-error rule, and their corresponding precision matrix estimates. (Panels show the weighted, unweighted, and CSCS estimates of $L$ on top and of $\Omega$ on the bottom.)
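Under our reading of (25), the summands are exactly the squared entries of $\hat L\tilde x$, so the error is simply $\|\hat L\tilde x\|_2^2/p$; a one-line R sketch (ours):

```r
# Test-set prediction error err(xtilde) = (1/p) * || Lhat %*% xtilde ||_2^2,
# averaging the squared residuals of the regressions (1) across rows.
pred_err <- function(Lhat, xtilde) sum((Lhat %*% xtilde)^2) / nrow(Lhat)
```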
It may be reasonable to apply our method in this problem since the features are frequencies, which come with a natural ordering.

In linear discriminant analysis (LDA), one models the features as multivariate Gaussian conditional on the class: x_i | y_i = k ∼ N_p(μ^{(k)}, Σ) for k ∈ {−1, 1}; in quadratic discriminant analysis (QDA), one allows each class to have its own covariance matrix: x_i | y_i = k ∼ N_p(μ^{(k)}, Σ^{(k)}). The LDA/QDA classification rules assign an observation x ∈ R^p to the class k that maximizes P̂(y = k | x) ∝ P̂(x | y = k) P̂(y = k), where the estimated probability P̂(x | y = k) is calculated using maximum likelihood estimates μ̂^{(k)}, Σ̂, and Σ̂^{(k)}. More precisely, in the ordered case, the resulting class k maximizes the LDA/QDA scores (the class-independent quadratic and log-determinant terms drop out of the LDA score):

\delta^{(k)}_{\mathrm{LDA}}(x) = x^T\hat\Omega\hat\mu^{(k)} - \tfrac{1}{2}(\hat\mu^{(k)})^T\hat\Omega\hat\mu^{(k)} + \log\hat\pi^{(k)} = (\hat{L}x)^T\hat{L}\hat\mu^{(k)} - \tfrac{1}{2}\|\hat{L}\hat\mu^{(k)}\|_2^2 + \log\hat\pi^{(k)} \qquad (26)

\delta^{(k)}_{\mathrm{QDA}}(x) = -\tfrac{1}{2}x^T\hat\Omega^{(k)}x + x^T\hat\Omega^{(k)}\hat\mu^{(k)} - \tfrac{1}{2}(\hat\mu^{(k)})^T\hat\Omega^{(k)}\hat\mu^{(k)} + \tfrac{1}{2}\log\det\hat\Omega^{(k)} + \log\hat\pi^{(k)} = -\tfrac{1}{2}\|\hat{L}^{(k)}x\|_2^2 + (\hat{L}^{(k)}x)^T\hat{L}^{(k)}\hat\mu^{(k)} - \tfrac{1}{2}\|\hat{L}^{(k)}\hat\mu^{(k)}\|_2^2 + \sum_j\log\hat{L}^{(k)}_{jj} + \log\hat\pi^{(k)}. \qquad (27)

Note that it is the precision matrix, not the covariance matrix, that is used in the above scores. In the setting where p > n, the MLE of Ω or Ω^{(k)} does not exist. A regularized estimate of the precision matrix that exploits the natural ordering information can be helpful in this setting.

To demonstrate the use of our estimator in the high-dimensional setting, we randomly split the data into two parts, with 10% of the data assigned to the training set and the remaining 90% assigned to the test set. On the training set, we use 5-fold cross-validation to select the tuning parameter minimizing misclassification error on the validation data. The estimates L̂ and L̂^{(k)} are then plugged into (26) and (27), along with μ̂^{(k)} = Σ_{i ∈ class k} x_i / n^{(k)} and π̂^{(k)} = n^{(k)}/n_train, to calculate the misclassification error on the test set. For comparison, we also include non-adaptive banding, the nested lasso, and CSCS. We compute the classification error (summarized in Table 1), averaged over 10 random train-test splits.

Table 1: Average test-set classification error rates of discriminant analysis on the phoneme data.

        Unweighted   Weighted   Nested Lasso   Non-adaptive   CSCS
LDA     0.271        0.246      0.250          0.268          0.245
QDA     0.232        0.256      0.221          0.246          0.267

We first observe that, in general, the adaptive methods perform better than the non-adaptive one (which assumes a fixed bandwidth). The performance of the weighted estimator is again very similar to that of the ℓ1-penalized estimator (i.e., the CSCS method), and our results are comparable to the nested lasso in both LDA and QDA. Interestingly, we find that the weighted estimator does better in LDA while the unweighted estimator performs better in QDA. The reason, we suspect, is that QDA requires the estimation of more parameters than LDA and therefore favors more constrained methods like the unweighted estimator, which discourages non-zero entries far from the diagonal more strongly than the weighted one does.

An R (R Core Team, 2016) package, named varband, is available on CRAN, implementing our estimator. The estimation is very fast, with core functions coded in C++, allowing us to solve large-scale problems in substantially less time than is possible with the R-based implementation of the nested lasso.
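To make the use of the scores (26) and (27) concrete, here is a minimal R sketch of the resulting classification rule. It is an illustration written from the formulas above, not code from the varband package; the inputs Lhat, muhat, and pihat are assumed to be the fitted quantities described in the previous paragraphs.

```r
# Sketch: classification by maximizing the discriminant scores (26)-(27).
# Lhat:  list of lower-triangular estimates, one per class (for LDA, supply the
#        same shared L-hat for every class; class-independent terms then cancel).
# muhat: list of class mean vectors; pihat: named vector of class proportions.
score <- function(x, Lk, muk, pik) {
  Lx  <- as.numeric(Lk %*% x)          # L-hat^(k) x
  Lmu <- as.numeric(Lk %*% muk)        # L-hat^(k) mu-hat^(k)
  -0.5 * sum(Lx^2) + sum(Lx * Lmu) - 0.5 * sum(Lmu^2) +
    sum(log(diag(Lk))) + log(pik)      # (1/2) log det Omega-hat = sum log L_jj
}

classify <- function(x, Lhat, muhat, pihat) {
  s <- mapply(function(Lk, muk, pik) score(x, Lk, muk, pik), Lhat, muhat, pihat)
  names(pihat)[which.max(s)]
}
```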
7. Conclusion

We have presented a new flexible method for learning local dependence in the setting where the elements of a random vector have a known ordering. The model amounts to sparse estimation of the inverse of the Cholesky factor of the covariance matrix with variable bandwidth. Our method is based on a convex formulation that allows it to simultaneously yield a flexible, adaptively banded sparsity pattern, enjoy efficient computational algorithms, and be studied theoretically. To our knowledge, no previous method has all of these properties. We show how the matrix estimation problem can be decomposed into independent row estimation problems, each of which can be solved via an ADMM algorithm with efficient updates. We prove that our method recovers the signed support of the true Cholesky factor and attains estimation consistency rates in several matrix norms under assumptions as mild as those in linear regression problems. Simulation studies show that our method compares favorably to two pre-existing estimators in the ordered setting, both in terms of support recovery and in terms of estimation accuracy. Through a genetic data example, we illustrate how our method may be applied to model the local dependence of genetic variants along a chromosome. Finally, we illustrate that our method has favorable performance in a sound recording classification problem.

Acknowledgments

We thank Kshitij Khare for a useful discussion in which he pointed us to the parametrization in terms of L. We thank Adam Rothman for providing R code for the non-adaptive banding and nested lasso methods and Amy Williams for useful discussions about linkage disequilibrium. We also thank three referees and an action editor for helpful comments on an earlier manuscript. This work was supported by NSF DMS-1405746.

Appendix A. Decoupling Property

Let S = \frac{1}{n}X^TX \in R^{p\times p} be the sample covariance matrix. Then the estimator (5) is the solution to the minimization problem

\min_{L:\,L_{rr}>0,\;L_{rk}=0 \text{ for } k>r}\;-2\sum_{r=1}^p\log L_{rr} + \frac{1}{n}\sum_{r=1}^p\|X_{1:r}L_{1:r,r}^T\|_2^2 + \lambda\sum_{r=2}^p\sum_{\ell=1}^{r-1}\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2L_{rm}^2},

which decouples across rows: \hat{L}_{11} = \arg\min_{L_{11}>0}\{-2\log L_{11} + \frac{1}{n}\|X_1L_{11}\|_2^2\}, and for r = 2, \ldots, p,

\hat{L}^T_{1:r,r} = \arg\min_{\beta\in R^r:\,\beta_r>0}\;-2\log\beta_r + \frac{1}{n}\|X_{1:r}\beta\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2\beta_m^2}.

Appendix B. A Closed-Form Solution to (9)

The objective function in (9) is smooth. Taking the derivative with respect to β and setting it to zero gives the following system of equations:

-\frac{2}{\beta_r}e_r + \frac{2}{n}X_{1:r}^TX_{1:r}\beta + u^{(t-1)} + \rho(\beta - \gamma^{(t-1)}) = 0.

Letting S^{(r)} = \frac{1}{n}X_{1:r}^TX_{1:r}, the equations above can be decomposed into

-\frac{2}{\beta_r} + (2S^{(r)}_{rr} + \rho)\beta_r + 2S^{(r)}_{r,-r}\beta_{-r} + u^{(t-1)}_r - \rho\gamma^{(t-1)}_r = 0,
(2S^{(r)}_{-r,-r} + \rho I)\beta_{-r} + 2S^{(r)}_{-r,r}\beta_r + u^{(t-1)}_{-r} - \rho\gamma^{(t-1)}_{-r} = 0.

Solving for β_{−r} in the second system of equations gives

\beta_{-r} = -(2S^{(r)}_{-r,-r} + \rho I)^{-1}\big(2S^{(r)}_{-r,r}\beta_r + u^{(t-1)}_{-r} - \rho\gamma^{(t-1)}_{-r}\big),

which is then plugged back into the first equation to give

\frac{2}{\beta_r} + A\beta_r + B = 0, \quad\text{where}\quad A = 4S^{(r)}_{r,-r}(2S^{(r)}_{-r,-r}+\rho I)^{-1}S^{(r)}_{-r,r} - 2S^{(r)}_{rr} - \rho,
B = 2S^{(r)}_{r,-r}(2S^{(r)}_{-r,-r}+\rho I)^{-1}\big(u^{(t-1)}_{-r} - \rho\gamma^{(t-1)}_{-r}\big) - u^{(t-1)}_r + \rho\gamma^{(t-1)}_r.

Solving for β_r, that is, taking the positive root of the quadratic Aβ_r² + Bβ_r + 2 = 0 (note A < 0 by a Schur complement argument), gives the closed-form update β_r = (−B − √(B² − 8A))/(2A).

Appendix C. Dual Problem of (10)

Lemma 7 A dual problem of (10) is

\min_{a^{(\ell)}\in R^r,\,\ell=1,\ldots,r-1}\Big\|y^{(t)} - \frac{\lambda}{\rho}\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)}\Big\|_2^2 \quad\text{s.t.}\quad \|a^{(\ell)}\|_2 \le 1,\; a^{(\ell)}_{g^c_{r,\ell}} = 0, \qquad (28)

where y^{(t)} = β^{(t)} + (1/ρ)u^{(t−1)}. Also, given a solution (â^{(1)}, …, â^{(r−1)}), the solution to (10) can be written as

\gamma^{(t)} = y^{(t)} - \frac{\lambda}{\rho}\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\hat{a}^{(\ell)}. \qquad (29)

Proof Note that

\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2\gamma_m^2} = \|W^{(\ell)}\circ\gamma\|_2 = \max\Big\{\big\langle W^{(\ell)}\circ a^{(\ell)}, \gamma\big\rangle : \|a^{(\ell)}\|_2 \le 1,\; a^{(\ell)}_{g^c_{r,\ell}} = 0\Big\}.

Thus, the minimization problem in (10) becomes

\min_{\gamma}\max_{a^{(\ell)}}\Big\{\frac{\rho}{2}\|\gamma - y^{(t)}\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ a^{(\ell)}, \gamma\big\rangle\Big\} = \max_{a^{(\ell)}}\min_{\gamma}\Big\{\frac{\rho}{2}\|\gamma - y^{(t)}\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ a^{(\ell)}, \gamma\big\rangle\Big\},

where y^{(t)} = β^{(t)} + (1/ρ)u^{(t−1)} and the constraints on a^{(ℓ)} are as in (28).
We solve the inner minimization problem by setting the derivative to zero,

\rho(\gamma - y^{(t)}) + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)} = 0,

which gives the primal-dual relation

\gamma = y^{(t)} - \frac{\lambda}{\rho}\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)}.

Using this in the max-min problem above and dropping terms that do not involve a^{(ℓ)} shows that the dual problem is, up to constants,

\min_{a^{(\ell)}}\Big\|y^{(t)} - \frac{\lambda}{\rho}\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)}\Big\|_2^2 \quad\text{s.t.}\quad \|a^{(\ell)}\|_2 \le 1,\; a^{(\ell)}_{g^c_{r,\ell}} = 0.

Algorithm 3 BCD on the dual problem (28)
1: Let y^{(t)} = β^{(t)} + (1/ρ) u^{(t−1)}
2: Initialize â^{(ℓ)} ← 0 for all ℓ = 1, …, r − 1
3: for ℓ = 1, …, r − 1 do
4:   ẑ^{(ℓ)} ← y^{(t)} − (λ/ρ) Σ_{k=1}^{r−1} W^{(k)} ∘ â^{(k)}; find a root ν̂_ℓ that satisfies Σ_{m=1}^{ℓ} \frac{w_{\ell m}^2}{(w_{\ell m}^2 + \nu)^2}\big(\hat{z}^{(\ell)}_m\big)^2 = \frac{\lambda^2}{\rho^2}
5:   for m = 1, …, ℓ do
6:     â^{(ℓ)}_m ← \frac{\rho\, w_{\ell m}}{\lambda\,(w_{\ell m}^2 + [\hat\nu_\ell]_+)}\,\hat{z}^{(\ell)}_m
7: return â^{(ℓ)}, ℓ = 1, …, r − 1, as a solution to (28)
8: return γ^{(t)} = y^{(t)} − (λ/ρ) Σ_{ℓ=1}^{r−1} W^{(ℓ)} ∘ â^{(ℓ)} as a solution to (10)

Appendix D. Elliptical Projection

We adapt the procedure in Appendix B of Bien et al. (2016) to update one a^{(ℓ)} in Algorithm 3. By (10) we need to solve a problem of the form

\min_a \|\hat{z}^{(\ell)} - \tau Da\|_2^2 \quad\text{s.t.}\quad \|a\|_2 \le 1,

where τ = λ/ρ and D = diag(w_{ℓm})_{m≤ℓ} ∈ R^{ℓ×ℓ}. If ‖D^{−1}ẑ^{(ℓ)}‖₂ ≤ τ, then clearly â = (1/τ)D^{−1}ẑ^{(ℓ)}. Otherwise, we use the method of Lagrange multipliers to solve the constrained minimization problem above. Specifically, we find a stationary point of

\mathcal{L}(a, \nu) = \|\hat{z}^{(\ell)} - \tau Da\|_2^2 + \nu\tau^2\big(\|a\|_2^2 - 1\big).

Taking the derivative with respect to a and setting it equal to zero, we have

\hat{a}_m = \frac{w_{\ell m}}{\tau(w_{\ell m}^2 + \hat\nu)}\,\hat{z}^{(\ell)}_m \quad\text{for each } m \le \ell,

and ν̂ is such that ‖â‖₂ = 1, which means it satisfies (30). Observing that h_ℓ(ν) is a decreasing function of ν and that w_{ℓℓ} = max_{m≤ℓ} w_{ℓm}, and following Appendix B of Bien et al. (2016), we obtain lower and upper bounds for ν̂:

\frac{1}{\tau}\|D\hat{z}^{(\ell)}\|_2 - w_{\ell\ell}^2 \le \hat\nu \le \frac{1}{\tau}\|D\hat{z}^{(\ell)}\|_2,

which can be used as an initial interval for finding ν̂ using Newton's method. In practice, we usually find ν̂ from the equation h(ν)^{−1/2} = τ^{−1} for better numerical stability.

We end this section with a characterization of the solution to (10), which says that the solution can be written as γ^{(t)} = y^{(t)} ∘ ĝ for some data-dependent vector ĝ ∈ R^r.

Theorem 8 A solution to (10) can be written as γ^{(t)} = y^{(t)} ∘ ĝ, where the data-dependent vector ĝ ∈ R^r is given by

\hat{g}_m = \prod_{\ell=m}^{r-1}\frac{[\hat\nu_\ell]_+}{w_{\ell m}^2 + [\hat\nu_\ell]_+} \text{ for } m < r, \qquad \hat{g}_r = 1,

where ν̂_ℓ satisfies τ² = Σ_{m=1}^{ℓ} w_{ℓm}²(ẑ^{(ℓ)}_m)²/(w_{ℓm}² + ν̂_ℓ)².

Proof By Jenatton et al. (2011), we can obtain a solution to (10) in a single pass, as described in Algorithm 3. If we start from ẑ^{(1)} = y^{(t)}, then for ℓ = 1, …, r−1 and each m ≤ ℓ,

\hat{z}^{(\ell+1)}_m = \hat{z}^{(\ell)}_m - \tau w_{\ell m}\hat{a}^{(\ell)}_m = \frac{[\hat\nu_\ell]_+}{w_{\ell m}^2 + [\hat\nu_\ell]_+}\,\hat{z}^{(\ell)}_m.

By (29), γ^{(t)} is the final ẑ, and the result follows.

A key observation from this characterization is that a banded sparsity pattern is induced in solving (10), which in turn implies the same property for the output of Algorithm 1.

Corollary 9 A solution γ^{(t)} to (10) has banded sparsity, i.e., γ^{(t)}_{1:Ĵ} = 0 for Ĵ = max{ℓ : ν̂_ℓ ≤ 0}.

Appendix E. Uniqueness of the Sparse Row Estimator

Lemma 10 (Optimality condition) For any λ > 0 and an n-by-p sample matrix X, β̂ is a solution to the problem (8),

\min_{\beta\in R^r:\,\beta_r>0}\;-2\log\beta_r + \frac{1}{n}\|X_{1:r}\beta\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2\beta_m^2},

if and only if there exist â^{(ℓ)} ∈ R^r for ℓ = 1, …, r−1 such that

-\frac{2}{\hat\beta_r}e_r + \frac{2}{n}X_{1:r}^TX_{1:r}\hat\beta + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\hat{a}^{(\ell)} = 0 \qquad (31)

with â^{(ℓ)}_{g^c_{r,ℓ}} = 0, â^{(ℓ)}_{g_{r,ℓ}} = (W^{(ℓ)}∘β̂)_{g_{r,ℓ}} / ‖(W^{(ℓ)}∘β̂)_{g_{r,ℓ}}‖₂ for β̂_{g_{r,ℓ}} ≠ 0, and ‖â^{(ℓ)}‖₂ ≤ 1 for β̂_{g_{r,ℓ}} = 0.

Lemma 11 Take β̂ and â^{(ℓ)} as in the previous lemma. Suppose that ‖â^{(ℓ)}‖₂ < 1 for ℓ = 1, …, J(β̂). Then any other solution β̃ to (8) is at least as sparse as β̂; in other words, β̃_{g_{J(β̂)}} = 0.

Lemma 12 (Uniqueness) Under the conditions of the previous lemma, let Ŝ = {i : β̂_i ≠ 0}. If X_Ŝ has full column rank (i.e., rank(X_Ŝ) = |Ŝ|), then β̂ is unique.

Proof See Appendices J, K, and L.
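Before turning to the proofs, we note that the root-finding step in the elliptical projection of Appendix D is straightforward to implement. The following R sketch is written directly from the formulas above (it is not the package implementation) and handles the standard exterior case, in which ν̂ > 0; uniroot() is used on the bracketing interval for brevity, where Newton's method would be faster.

```r
# Sketch: one elliptical projection (Appendix D), solving
#   min ||zhat - tau * D a||_2^2  s.t.  ||a||_2 <= 1,  with D = diag(w).
ellipse_project <- function(zhat, w, tau) {
  if (sqrt(sum((zhat / w)^2)) <= tau) {
    return(zhat / (tau * w))           # interior case: ||D^{-1} zhat||_2 <= tau
  }
  # h(nu) is decreasing in nu; nu-hat solves h(nu) = tau^2 (equation (30)).
  h <- function(nu) sum(w^2 * zhat^2 / (w^2 + nu)^2)
  lo <- max(sqrt(sum((w * zhat)^2)) / tau - max(w)^2, 0)  # bounds from Appendix D
  hi <- sqrt(sum((w * zhat)^2)) / tau
  nuhat <- uniroot(function(nu) h(nu) - tau^2, c(lo, hi))$root
  w * zhat / (tau * (w^2 + nuhat))     # stationary point with ||a||_2 = 1
}
```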
Appendix F. Proof of Theorem 1

We start by introducing notation. From now on we suppress the dependence on r for simplicity. We denote the group structure g_ℓ = {1, …, ℓ} for ℓ ≤ r, for each r = 1, …, p. For any vector β ∈ R^r, we let β_{g_ℓ} ∈ R^ℓ be the vector with elements {β_m : m ≤ ℓ}. We also introduce the weight vector W^{(ℓ)} ∈ R^r with W^{(ℓ)}_m = w_{ℓm}, where w_{ℓm} can be defined as in (7) or w_{ℓm} = 1. Finally, recalling from Section 4 the definition of I, we denote S = I ∪ {r} = {J+1, …, r} and S^c = {1, 2, …, J}.

The general idea of the proof depends on the primal-dual witness procedure of Wainwright (2009) and Ravikumar et al. (2011). Considering the original problem (8) for any r = 2, …, p, we construct the primal-dual witness solution pair (β̃, Σ_{ℓ=1}^{r−1} W^{(ℓ)} ∘ ã^{(ℓ)}) as follows:

(a) Solve the restricted subproblem with the true bandwidth K = r − 1 − J:

\tilde\beta = \arg\min_{\beta_r>0,\;\beta_{S^c}=0}\;-2\log\beta_r + \frac{1}{n}\|X_{1:r}\beta\|_2^2 + \lambda\sum_{\ell=1}^{r-1}\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2\beta_m^2}.

The solution above can be written in terms of

\tilde\gamma = \arg\min_{\gamma\in R^{K+1}:\,\gamma_{K+1}>0}\;-2\log\gamma_{K+1} + \frac{1}{n}\|X_S\gamma\|_2^2 + \lambda\sum_{\ell=1}^{K}\sqrt{\sum_{m=J+1}^{\ell+J}w_{\ell+J,m}^2\gamma_{m-J}^2},

with weight vectors \tilde{W}^{(\ell)} = W^{(\ell+J)}_S for ℓ = 1, …, K.

(b) By Lemma 10, there exist b̃^{(ℓ)} ∈ R^{K+1} for ℓ = 1, …, K such that b̃^{(ℓ)}_{g^c_ℓ} = 0 and

-\frac{2}{\tilde\gamma_{K+1}}e_{K+1} + \frac{2}{n}X_S^TX_S\tilde\gamma + \lambda\sum_{\ell=1}^{K}\tilde{W}^{(\ell)}\circ\tilde{b}^{(\ell)} = 0.

(c) For ℓ = J+1, …, r−1, we let

\tilde{a}^{(\ell)} = \begin{pmatrix} 0_J \\ \tilde{b}^{(\ell-J)} \end{pmatrix}.

Then we have ã^{(ℓ)}_{g^c_ℓ} = 0 and ã^{(ℓ)}_{g_ℓ} = (W^{(ℓ)}∘β̃)_{g_ℓ}/‖(W^{(ℓ)}∘β̃)_{g_ℓ}‖₂ for β̃_{g_ℓ} ≠ 0.

(d) For each ℓ = 1, …, J, we choose ã^{(ℓ)} ∈ R^r satisfying ã^{(ℓ)}_{ℓ'} = 0 for any ℓ' ≠ ℓ and

\tilde{a}^{(\ell)}_\ell = -\frac{2}{n\lambda}X_\ell^TX_S\tilde\beta_S.

By construction and the fact that w_{ℓℓ} = 1,

\lambda\big(W^{(\ell)}\circ\tilde{a}^{(\ell)}\big)_\ell = \lambda w_{\ell\ell}\,\tilde{a}^{(\ell)}_\ell = -\frac{2}{n}X_\ell^TX_S\tilde\beta_S.

By Lemma 10, ã^{(ℓ)} satisfies the optimality condition (31):

-\frac{2}{\tilde\beta_r}e_r + \frac{2}{n}X_{1:r}^TX_{1:r}\tilde\beta + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)} = 0. \qquad (32)

(e) Verify the strict dual feasibility condition: for ℓ = 1, …, J,

\Big|\frac{2}{n\lambda}X_\ell^TX_S\tilde\beta_S\Big| < 1. \qquad (33)

At a high level, steps (a) through (d) construct a pair (β̃, {ã^{(ℓ)}}) that satisfies the optimality condition (31), but the ã^{(ℓ)} are not automatically guaranteed to be subgradients of the penalty at β̃. Step (e) does more than verify the necessary conditions: the strict dual feasibility condition, once verified, ensures the uniqueness of the solution. Note that by construction in step (b), ã^{(ℓ)} satisfies the dual feasibility conditions for ℓ = J+1, …, r−1 since the b̃^{(ℓ)} do, so it remains to verify them for ℓ = 1, …, J (step (e)). For each ℓ = 1, …, J, by the construction in step (d), ã^{(ℓ)}_{g^c_ℓ} = 0. Note that β̃_{g_J} = 0 implies β̃_{g_ℓ} = 0. Thus, for ã^{(ℓ)} to satisfy the conditions in Lemma 10, it suffices to show (33).

If the primal-dual witness procedure succeeds, then by construction, the solution β̃, whose support is contained in the support of the true L_{r·}, is a solution to (8). Moreover, by strict dual feasibility and Lemma 12, we know that β̃ is the unique solution β̂ to the unconstrained problem (8). Therefore, the support of β̂ is contained in the support of L_{r·}. In the following we adapt the proof technique of Wainwright (2009) to show that the primal-dual witness procedure succeeds with high probability, from which we first conclude that K(β̂) ≤ K.

F.1 Proof of Property 1 in Theorem 1

Proof We need to verify the strict dual feasibility condition (33). By (32),

-\frac{2}{\tilde\beta_r} + \frac{2}{n}X_r^TX_r\tilde\beta_r + \frac{2}{n}X_r^TX_I\tilde\beta_I = 0, \qquad (34)

\frac{2}{n}X_I^TX_r\tilde\beta_r + \frac{2}{n}X_I^TX_I\tilde\beta_I + \lambda\Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I = 0, \qquad (35)

so that

\tilde\beta_I = -(X_I^TX_I)^{-1}\Big[X_I^TX_r\tilde\beta_r + \frac{\lambda n}{2}\Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I\Big]. \qquad (36)
Plugging (36) back into (34) and denoting

C_I = X_I(X_I^TX_I)^{-1}\Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I

and O_I = I − X_I(X_I^TX_I)^{−1}X_I^T, the orthogonal projection matrix onto the orthogonal complement of the column space of X_I, we have

-\frac{2}{\tilde\beta_r} + \frac{2}{n}X_r^TO_IX_r\tilde\beta_r - \lambda X_r^TC_I = 0,

which implies that

\tilde\beta_r = \frac{\frac{\lambda}{2}X_r^TC_I + \sqrt{\frac{\lambda^2}{4}(X_r^TC_I)^2 + \frac{4}{n}X_r^TO_IX_r}}{\frac{2}{n}X_r^TO_IX_r} \qquad (37)

and that

\tilde{a}^{(\ell)}_\ell = -\frac{2}{n\lambda}X_\ell^TX_S\tilde\beta_S = -\frac{2}{n\lambda}X_\ell^TX_r\tilde\beta_r - \frac{2}{n\lambda}X_\ell^TX_I\tilde\beta_I = -\frac{2}{n\lambda}X_\ell^T\big[I - X_I(X_I^TX_I)^{-1}X_I^T\big]X_r\tilde\beta_r + X_\ell^TX_I(X_I^TX_I)^{-1}\Big(\sum_{\ell'=1}^{r-1}W^{(\ell')}\circ\tilde{a}^{(\ell')}\Big)_I. \qquad (38)

Conditioning on X_I, we can decompose X_r and X_ℓ as

X_r^T = \Sigma_{rI}(\Sigma_{II})^{-1}X_I^T + E_r^T, \qquad X_\ell^T = \Sigma_{\ell I}(\Sigma_{II})^{-1}X_I^T + E_\ell^T, \qquad (39)

where E_r ∼ N(0_n, θ_r^{(r)} I_{n×n}) and E_ℓ ∼ N(0_n, θ_r^{(ℓ)} I_{n×n}), with θ_r^{(ℓ)} and θ_r^{(r)} defined in Section 4. Then X_ℓ^TO_I = E_ℓ^TO_I and O_IX_r = O_IE_r, and from (38),

\tilde{a}^{(\ell)}_\ell = \Big[-\frac{2}{n\lambda}E_\ell^TO_IE_r\tilde\beta_r + E_\ell^TX_I(X_I^TX_I)^{-1}\Big(\sum_{\ell'}W^{(\ell')}\circ\tilde{a}^{(\ell')}\Big)_I\Big] + \Sigma_{\ell I}(\Sigma_{II})^{-1}\Big(\sum_{\ell'=1}^{r-1}W^{(\ell')}\circ\tilde{a}^{(\ell')}\Big)_I := R^{(\ell)} + F^{(\ell)}. \qquad (40)

We first bound max_ℓ |F^{(ℓ)}|. Note that

\Big\|\Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I\Big\|_\infty = \Big\|\Big(\sum_{\ell=J+1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I\Big\|_\infty \le \max_m\sum_{\ell\ge m}w_{\ell m}|\tilde{a}^{(\ell)}_m| \le \sum_{j\ge1}\frac{1}{j^2} = \frac{\pi^2}{6}, \qquad (41)

where we used |ã^{(ℓ)}_m| ≤ ‖ã^{(ℓ)}‖₂ ≤ 1 and the weights (7). Therefore, by Assumption A3,

\max_{1\le\ell\le J}\Big|\Sigma_{\ell I}(\Sigma_{II})^{-1}\Big(\sum_{\ell'=1}^{r-1}W^{(\ell')}\circ\tilde{a}^{(\ell')}\Big)_I\Big| \le 1 - \alpha.

To bound the random quantity R^{(ℓ)}, we first state a general result that will be used multiple times later in the proof.

Lemma 13 Consider the term E_j^Tη, where η ∈ R^n is a random vector depending on X_I and X_r, and E_j ∼ N(0_n, θ_r^{(j)} I_{n×n}) for j = 1, …, J, r. If for some Q ≥ 0,

P\big[\mathrm{Var}(E_j^T\eta \mid X_I, X_r) \ge Q\big] \le \delta,

then for any a > 0,

P\big[|E_j^T\eta| \ge a\big] \le 2\exp\Big(-\frac{a^2}{2Q}\Big) + \delta.

Proof Define the event B = {Var(E_j^Tη | X_I, X_r) ≥ Q}. For any a,

P[|E_j^Tη| ≥ a] ≤ P[|E_j^Tη| ≥ a | B^c] + P[B].

Conditioned on B^c, the variance of E_j^Tη is at most Q, so by standard Gaussian tail bounds,

P[|E_j^Tη| ≥ a | B^c] = E[P(|E_j^Tη| ≥ a | X_I, X_r) | B^c] ≤ 2\exp\Big(-\frac{a^2}{2Q}\Big).

Note that Var(E_{iℓ}) = θ_r^{(ℓ)} ≤ θ̄_r for i = 1, …, n. Conditioned on both X_I and X_r, R^{(ℓ)} is zero-mean with variance at most (θ_r^{(ℓ)}/n)·n M_n, where, using the Pythagorean identity for the two orthogonal components of (40),

M_n = C_I^TC_I + \frac{4\tilde\beta_r^2\|O_IE_r\|_2^2}{n^2\lambda^2}.

The next lemma bounds the random scaling M_n.

Lemma 14 For ε ∈ (0, 1), let

M_n(\varepsilon) := \frac{3\kappa^2\pi^2K}{n} + \frac{1}{\theta_r^{(r)}(n-K)(1-\varepsilon)} + \frac{16}{n\lambda^2}.

Then

P\big[M_n \ge M_n(\varepsilon) \mid X_I\big] \le 7\exp\Big(-\min\Big\{\frac{n\alpha^2}{6\theta_r^{(r)}\kappa^2\pi^2K},\;\frac{(n-K)\varepsilon^2}{4},\;\frac{n}{2}\Big\}\Big).

Proof See Appendix M.

Now by Lemma 13 and the union bound,

P\Big[\max_{1\le\ell\le J}|R^{(\ell)}| \ge \alpha\Big] \le 2J\exp\Big(-\frac{\alpha^2}{2\bar\theta_r M_n(\varepsilon)}\Big) + 7\exp(-c_3n), \qquad (42)

for some constant c₃ independent of n and J. By the assumption that K/n = o(1), we have n − K ≥ (1−ε)n for n large enough, and thus

nM_n(\varepsilon) \le 3\kappa^2\pi^2K + \frac{1}{\theta_r^{(r)}(1-\varepsilon)^2} + \frac{16}{\lambda^2}.

For the exponential term in (42) to decay faster than the factor J grows, it suffices that

\frac{n}{\log J} > \frac{2\bar\theta_r}{\alpha^2}\Big(3\kappa^2\pi^2K + \frac{1}{\theta_r^{(r)}(1-\varepsilon)^2} + \frac{32}{\lambda^2}\Big).

F.2 Proof of Property 2 in Theorem 1

Next we study the ℓ∞ error bound on β̃. Let δ = β̃ − β* = β̃ − L^T_{1:r,r} and W = SL^T − L^{−1}, where L denotes the true inverse Cholesky factor. From (35) and the fact that L^{−1} is lower triangular (so that (L^{−1})_{I,r} = 0),

\delta_I = -(X_I^TX_I)^{-1}\Big[X_I^TX_r\delta_r + nW_{I,r} + \frac{n\lambda}{2}\Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I\Big]. \qquad (43)

From (34) and the fact that (L^{−1})_{rr} = 1/L_{rr}, writing β̃ = δ + β*,

-\frac{1}{\tilde\beta_r} + \frac{1}{L_{rr}} + \frac{1}{n}X_r^TX_r\delta_r + \frac{1}{n}X_r^TX_I\delta_I + W_{rr} = \frac{\delta_r}{L_{rr}\tilde\beta_r} + \frac{1}{n}X_r^TX_r\delta_r + \frac{1}{n}X_r^TX_I\delta_I + W_{rr} = 0. \qquad (44)
Plugging (43) into (44), we have

\frac{\delta_r}{L_{rr}\tilde\beta_r} + \frac{1}{n}X_r^TO_IX_r\delta_r = X_r^TX_I(X_I^TX_I)^{-1}W_{I,r} + \frac{\lambda}{2}X_r^TC_I - W_{rr},

which implies

\delta_r = \Big(\frac{1}{L_{rr}\tilde\beta_r} + \frac{1}{n}X_r^TO_IX_r\Big)^{-1}\Big[X_r^TX_I(X_I^TX_I)^{-1}W_{I,r} + \frac{\lambda}{2}X_r^TC_I - W_{rr}\Big].

Since L_rr > 0 and β̃_r > 0,

|\delta_r| \le \Big(\frac{1}{n}X_r^TO_IX_r\Big)^{-1}\Big[\big|X_r^TX_I(X_I^TX_I)^{-1}W_{I,r}\big| + \frac{\lambda}{2}\big|X_r^TC_I\big| + |W_{rr}|\Big].

Now conditioned on X_I, by the decomposition (39), (1/n)X_r^TO_IX_r = (1/n)E_r^TO_IE_r = ‖O_IE_r‖₂²/n. From Lemma 20, it follows that

P\Big[\Big(\frac{1}{n}X_r^TO_IX_r\Big)^{-1} \ge \frac{n}{n-K}\cdot\frac{1}{(1-\varepsilon)\theta_r^{(r)}}\Big] \le \exp\Big(-\frac{(n-K)\varepsilon^2}{4}\Big).

Also, by Lemma 19,

P\big[|X_r^TC_I| \ge 1\big] \le 2\exp\Big(-\frac{n\alpha^2}{6\theta_r^{(r)}\kappa^2\pi^2K}\Big) + 2\exp(-n/2).

To deal with the remaining terms in (44) that involve W, we introduce the following concentration inequality controlling its element-wise infinity norm.

Lemma 15 Let W = SL^T − L^{−1}. Under Assumptions A4 and A5, there exist constants C₁, C₂, C₃ > 0 such that for any 0 < t ≤ 2κ,

P\big[\|W\|_\infty > t\big] \le 2p^2\exp(-C_3nt^2) + 4p\exp(-C_1nt) + 4p\exp(-C_2nt).

Proof See Appendix N.

In terms of the event A = {‖W‖_∞ ≤ λ}, Lemma 15 states that

P[A^c] \le 2p^2\exp(-C_3n\lambda^2) + 4p\exp(-C_1n\lambda) + 4p\exp(-C_2n\lambda).

The next lemma shows that, on the event A, the term |X_r^TX_I(X_I^TX_I)^{−1}W_{I,r}| can be bounded by a multiple of λ with high probability.

Lemma 16 Using the general weighting scheme (7), we have

P\big[|X_r^TX_I(X_I^TX_I)^{-1}W_{I,r}| \ge 2\lambda \mid A\big] \le 2\exp\Big(-\frac{n}{18\theta_r^{(r)}\kappa^2K}\Big) + 2\exp(-n/2).

Proof Recall that conditioning on X_I, the decomposition (39) gives

X_r^TX_I(X_I^TX_I)^{-1}W_{I,r} = \Sigma_{rI}(\Sigma_{II})^{-1}W_{I,r} + E_r^TX_I(X_I^TX_I)^{-1}W_{I,r}.

On the event A, by A3 and (41), |Σ_{rI}(Σ_{II})^{−1}W_{I,r}| ≤ ‖Σ_{rI}(Σ_{II})^{−1}‖₁‖W_{I,r}‖_∞ ≤ λ. Note that Var(E_{ir}) = θ_r^{(r)} for i = 1, …, n. Let B^{(r)} := E_r^TX_I(X_I^TX_I)^{−1}W_{I,r}; then B^{(r)} has conditional mean zero and variance

\mathrm{Var}(B^{(r)} \mid X_I) = \frac{\theta_r^{(r)}}{n}W_{I,r}^T\Big(\frac{X_I^TX_I}{n}\Big)^{-1}W_{I,r} \le \frac{9\theta_r^{(r)}\kappa^2K\lambda^2}{n}

with probability greater than 1 − 2exp(−n/2). The result follows from Lemma 13.

Putting everything together and choosing the tuning parameter as in (13), a union bound argument and some algebra show that, conditioned on X_I,

P\Big[|\delta_r| \ge \frac{n}{n-K}\cdot\frac{1}{(1-\varepsilon)\theta_r^{(r)}}\cdot\frac{5\lambda}{2}\Big] \le c_4\exp(-c_5n) + \frac{c_6}{p^x} \qquad (45)

for some constants c₄, c₅, c₆, x > 0 that do not depend on n and p.

We now consider a bound for δ_I. Recall from (43) that δ_I = F₁ + F₂, where

F_1 = -(X_I^TX_I)^{-1}X_I^TX_r\delta_r, \qquad F_2 = -\Big(\frac{X_I^TX_I}{n}\Big)^{-1}\Big(W_{I,r} + \frac{\lambda}{2}D\Big), \quad D = \Big(\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I.

An ℓ∞ bound on F₂ is given by

\|F_2\|_\infty \le \Big\|\Big(\Big(\frac{X_I^TX_I}{n}\Big)^{-1} - (\Sigma_{II})^{-1}\Big)\Big(W_{I,r} + \frac{\lambda}{2}D\Big)\Big\|_\infty + \Big\|(\Sigma_{II})^{-1}\Big(W_{I,r} + \frac{\lambda}{2}D\Big)\Big\|_\infty. \qquad (46)

On the event A, by (41),

\Big\|(\Sigma_{II})^{-1}\Big(W_{I,r} + \frac{\lambda}{2}D\Big)\Big\|_\infty \le \||(\Sigma_{II})^{-1}\||_\infty\cdot 2\lambda \le 2\lambda\,\||(\Sigma_{II})^{-1/2}\||_\infty^2.

To deal with the first term in (46), note that X_I = W_I(Σ_{II})^{1/2}, where W_I ∈ R^{n×K} is a standard Gaussian random matrix, i.e., (W_I)_{ij} ∼ N(0, 1). Thus we can write the first term as

(\Sigma_{II})^{-1/2}\,G, \qquad G = \Big[\Big(\frac{1}{n}W_I^TW_I\Big)^{-1} - I\Big](\Sigma_{II})^{-1/2}\Big(W_{I,r} + \frac{\lambda}{2}D\Big).

By Lemma 5 in Wainwright (2009), for some constant c₇ > 0,

P\Big[\|G\|_\infty \ge \Big\|(\Sigma_{II})^{-1/2}\Big(W_{I,r} + \frac{\lambda}{2}D\Big)\Big\|_2\Big] \le 4\exp(-c_7\min\{K, \log J\}).

Note that, conditioning on A, ‖(Σ_{II})^{−1/2}(W_{I,r} + (λ/2)D)‖₂ is upper bounded by 2λ|||(Σ_{II})^{−1/2}|||. Thus,

P\big[\|G\|_\infty \ge 2\lambda\,\||(\Sigma_{II})^{-1/2}\|| \mid A\big] \le 4\exp(-c_7\min\{K, \log J\}),

so that

P\big[\|F_2\|_\infty \ge 4\lambda\,\||(\Sigma_{II})^{-1/2}\||^2\big] \le 4\exp(-c_7\min\{K, \log J\}) + \frac{c_6}{p^x}. \qquad (47)

Turning to F₁, conditioned on X_I, by the decomposition (39) we have

\|F_1\|_\infty \le \|(\Sigma_{II})^{-1}\Sigma_{Ir}\|_\infty|\delta_r| + \|(X_I^TX_I)^{-1}X_I^TE_r\delta_r\|_\infty.
By (45) and A3,

P\Big[\|(\Sigma_{II})^{-1}\Sigma_{Ir}\|_\infty|\delta_r| \ge \frac{n}{n-K}\cdot\frac{5\lambda}{2(1-\varepsilon)\theta_r^{(r)}}\Big] \le c_4\exp(-c_5n) + \frac{c_6}{p^x}.

Consider each coordinate j ∈ I of the random term, whose conditional variance is bounded by

\mathrm{Var}\big[e_j^T(X_I^TX_I)^{-1}X_I^TE_r\delta_r \mid X_I\big] = \theta_r^{(r)}\delta_r^2\,e_j^T(X_I^TX_I)^{-1}e_j \le \frac{\bar\theta_r\delta_r^2}{n}\,\||(X_I^TX_I/n)^{-1}\||_2.

By Lemma 18 and (45), this variance is bounded by a constant multiple of θ̄_rκ²λ²/n, except on an event of probability at most c₄exp(−c₅n) + c₆/p^x. Thus, by Lemma 13,

P\big[\|(X_I^TX_I)^{-1}X_I^TE_r\delta_r\|_\infty \ge \lambda\big] \le 2\exp\Big(-\frac{n}{18\bar\theta_r\kappa^2}\Big) + c_4\exp(-c_5n) + \frac{c_6}{p^x}.

Combining with (45) and (47), we have

P\big[\|\delta\|_\infty \ge 4\lambda\,\||(\Sigma_{II})^{-1/2}\||^2\big] \le c_8\exp(-c_9n) + \frac{2c_6}{p^x} + 4\exp(-c_7\min\{K, \log J\}),

for some constants c₈, c₉ > 0 that do not depend on n and J.

F.3 Proof of Property 3 in Theorem 1

Finally we establish a β_min condition which, combined with the ℓ∞ rate, gives the other direction of the support recovery, i.e., K(β̂) ≥ K. By the triangle inequality, |β̂_j| ≥ |β*_j| − |β̂_j − β*_j|. So if

\min_{j\in S}\big\{|\beta^*_j| - |\hat\beta_j - \beta^*_j|\big\} > 0,

then K(β̂) ≥ K.

Appendix G. Proof of Theorem 3

Proof The overall proof technique is the same as in the proof of Theorem 1. The first part of the theorem holds if max_{2≤r≤p} max_{1≤ℓ≤J_r} |ã^{(rℓ)}| < 1. For each r = 2, …, p we proceed with the same primal-dual witness procedure and end up with the same decomposition (40). Assumption A3 ensures that max_{2≤r≤p} max_{1≤ℓ≤J_r} |F^{(rℓ)}| ≤ 1 − α. Following the same line of proof to deal with the random term R^{(rℓ)}, we have that R^{(rℓ)} is zero-mean Gaussian with conditional variance bounded above by θ̄_r M_n^{(r)}(ε), where

\bar\theta_r M_n^{(r)}(\varepsilon) = \frac{3\kappa^2\pi^2\bar\theta_rK_r^*}{n} + \frac{\bar\theta_r}{\theta_r^{(r)}(n-K_r^*)(1-\varepsilon)} + \frac{16\bar\theta_r}{n\lambda^2},

with high probability, where we use the fact that K* = o(n) implies K_r^*/n ≤ ε for n large. Then

P\big[|R^{(r\ell)}| \ge \alpha\big] \le 2\exp\Big(-\frac{\alpha^2}{2\bar\theta_rM_n^{(r)}(\varepsilon)}\Big) + 7\exp(-c_3n),

and by a union bound,

P\Big[\max_{2\le r\le p}\max_{1\le\ell\le J_r}|R^{(r\ell)}| \ge \alpha\Big] \le \sum_{r=2}^pJ_r\Big[2\exp\Big(-\frac{\alpha^2}{2\bar\theta_rM_n^{(r)}(\varepsilon)}\Big) + 7\exp(-c_3n)\Big] \le 2p^2\exp(-c_3'n)

for an appropriate constant c₃' > 0. For the exponential term to decay faster than p², we need

\frac{n}{\log p} > \frac{2}{\alpha^2}\max_{2\le r\le p}\Big(3\kappa^2\pi^2\bar\theta_rK_r^* + 8\kappa^2\bar\theta_r + \frac{32\bar\theta_r}{\lambda^2}\Big).

Appendix H. Proof of Theorem 4

Lemma 17 Using the notation and conditions in Theorem 4, the following deviation bounds hold with high probability:

\|\hat{L}-L\|_{\max} \le \zeta_\Gamma\sqrt{\frac{\log p}{n}}, \quad \||\hat{L}-L\||_\infty \le \zeta_\Gamma(K^*+1)\sqrt{\frac{\log p}{n}}, \quad \||\hat{L}-L\||_1 \le \zeta_\Gamma(K^*+1)\sqrt{\frac{\log p}{n}}, \quad \||\hat{L}-L\||_2 \le \zeta_\Gamma(K^*+1)\sqrt{\frac{\log p}{n}}, \quad \|\hat{L}-L\|_F \le \zeta_\Gamma\sqrt{\frac{(s+p)\log p}{n}}.

Proof By Theorem 3, with high probability, the support of L̂ is contained in the true support and the elementwise bound on ‖L̂ − L‖_max holds. Hence

\||\hat{L}-L\||_\infty = \max_{2\le r\le p}\sum_c|\hat{L}_{rc}-L_{rc}| \le \max_{2\le r\le p}(K_r+1)\|\hat{L}-L\|_{\max} \le (K^*+1)\|\hat{L}-L\|_{\max}.

Denote D* = max_{1≤c≤p−1} D_c, where D_c = |{r = c, …, p : L_rc ≠ 0}|. Observing that D* ≤ K*, we have

\||\hat{L}-L\||_1 = \max_{1\le c\le p-1}\sum_r|\hat{L}_{rc}-L_{rc}| \le (D^*+1)\|\hat{L}-L\|_{\max} \le (K^*+1)\|\hat{L}-L\|_{\max}.

By Hölder's inequality,

\||\hat{L}-L\||_2 \le \sqrt{\||\hat{L}-L\||_1\,\||\hat{L}-L\||_\infty} \le (K^*+1)\|\hat{L}-L\|_{\max}.

Finally, for the Frobenius norm, ‖L̂−L‖_F² ≤ (s+p)‖L̂−L‖_max², since L̂ − L has at most s + p non-zero entries with high probability.

Proof [of Theorem 4] First note that

\hat{L}^T\hat{L} - L^TL = (\hat{L}-L)^T(\hat{L}-L) + \hat{L}^TL + L^T\hat{L} - 2L^TL = (\hat{L}-L)^T(\hat{L}-L) + (\hat{L}-L)^TL + L^T(\hat{L}-L).

Thus,

\||\hat{L}^T\hat{L} - L^TL\||_\infty \le \||\hat{L}-L\||_1\,\||\hat{L}-L\||_\infty + \||\hat{L}-L\||_1\,\||L\||_\infty + \||L\||_1\,\||\hat{L}-L\||_\infty,

and, since L̂^TL̂ − L^TL is symmetric, |||L̂^TL̂ − L^TL|||₁ = |||L̂^TL̂ − L^TL|||_∞. By Hölder's inequality,

\||\hat{L}^T\hat{L}-L^TL\||_2 \le \sqrt{\||\hat{L}^T\hat{L}-L^TL\||_1\,\||\hat{L}^T\hat{L}-L^TL\||_\infty} = \||\hat{L}^T\hat{L}-L^TL\||_\infty.

Finally, for the Frobenius norm, observe that

\|L^T(\hat{L}-L)\|_F = \|\mathrm{vec}(L^T(\hat{L}-L))\|_2 = \|(I_p\otimes L^T)\,\mathrm{vec}(\hat{L}-L)\|_2 \le \||L\||_2\,\|\hat{L}-L\|_F.

Applying the same strategy to ‖(L̂−L)^T(L̂−L)‖_F, we have

\|\hat{L}^T\hat{L} - L^TL\|_F \le \||\hat{L}-L\||_2\,\|\hat{L}-L\|_F + 2\||L\||_2\,\|\hat{L}-L\|_F,

and the results follow from Lemma 17.

Appendix I. Proof of Theorem 6

Proof We adapt the proof technique of Rothman et al. (2008). Let

G(\Delta) = -2\log\det(L+\Delta) + \mathrm{tr}\big(S(L+\Delta)^T(L+\Delta)\big) + \lambda\|\Delta+L\|_{2,1} + 2\log\det L - \mathrm{tr}(SL^TL) - \lambda\|L\|_{2,1}, \qquad (48)

where L is the inverse of the Cholesky factor of the true covariance matrix and the penalty is

\|L\|_{2,1} = \sum_{r=2}^p\sum_{\ell=1}^{r-1}\sqrt{\sum_{m=1}^{\ell}w_{\ell m}^2L_{rm}^2}.

Since the estimator L̂ is defined as

\hat{L} = \arg\min_{L_{jk}=0\text{ for }k>j,\;L_{jj}>0}\Big\{-2\log\det L + \mathrm{tr}(SL^TL) + \lambda\|L\|_{2,1}\Big\},

the difference ∆̂ = L̂ − L minimizes G. Consider the set

\Theta_n(M) = \big\{\Delta : \Delta_{jk} = 0 \text{ for } k > j,\; (\Delta+L)_{jj} > 0 \text{ for all } j,\; \|\Delta\|_F = Mr_n\big\},

where M > 0 and

r_n = \sqrt{\frac{(\sum_{r=2}^pK_r + p)\log p}{n}}.

The assumed scaling implies that r_n → 0. We aim to show that inf{G(∆) : ∆ ∈ Θ_n(M)} > 0.
If this holds, then the convexity of G and the fact that G(∆̂) ≤ G(0) = 0 imply that ‖∆̂‖_F = ‖L̂ − L‖_F ≤ Mr_n.

We start by analyzing the logarithm terms in (48). Let f(t) = log det(L + t∆). Using a Taylor expansion of f at t = 0, with f'(t) = tr[(L+t∆)^{−1}∆] and

f''(t) = -\mathrm{vec}(\Delta)^T\big[(L+t\Delta)^{-T}\otimes(L+t\Delta)^{-1}\big]\mathrm{vec}(\Delta),

we have

\log\det(L+\Delta) - \log\det L = \mathrm{tr}(L^{-1}\Delta) - \mathrm{vec}(\Delta)^T\Big[\int_0^1(1-\nu)(L+\nu\Delta)^{-T}\otimes(L+\nu\Delta)^{-1}d\nu\Big]\mathrm{vec}(\Delta).

The trace term in (48) can be written as

\mathrm{tr}\big(S(L+\Delta)^T(L+\Delta)\big) - \mathrm{tr}(SL^TL) = \mathrm{tr}\big(SL^T\Delta + S\Delta^TL + S\Delta^T\Delta\big) = 2\,\mathrm{tr}(SL^T\Delta) + \mathrm{tr}(S\Delta^T\Delta) \ge 2\,\mathrm{tr}(SL^T\Delta),

where the last inequality comes from the fact that the sample covariance matrix S is positive semidefinite. Combining these with (48) gives

G(\Delta) \ge 2\,\mathrm{vec}(\Delta)^T\Big[\int_0^1(1-\nu)(L+\nu\Delta)^{-T}\otimes(L+\nu\Delta)^{-1}d\nu\Big]\mathrm{vec}(\Delta) + 2\,\mathrm{tr}\big[(SL^T - L^{-1})\Delta\big] + \lambda\big(\|L+\Delta\|_{2,1} - \|L\|_{2,1}\big) =: (a) + (b) + (c). \qquad (49)

The integral term (a) has a positive lower bound. Recalling that σ_min(M) = min_{‖x‖=1} x^TMx is a concave function of M (the minimum of linear functions of M is concave), we have

(a) \ge 2\|\Delta\|_F^2\,\sigma_{\min}\Big(\int_0^1(1-\nu)(L+\nu\Delta)^{-T}\otimes(L+\nu\Delta)^{-1}d\nu\Big) \ge 2\|\Delta\|_F^2\int_0^1(1-\nu)\,\sigma_{\min}\big((L+\nu\Delta)^{-T}\otimes(L+\nu\Delta)^{-1}\big)d\nu = 2\|\Delta\|_F^2\int_0^1(1-\nu)\,\sigma^2_{\min}\big((L+\nu\Delta)^{-1}\big)d\nu \ge \|\Delta\|_F^2\min_{0\le\nu\le1}\sigma^2_{\min}\big((L+\nu\Delta)^{-1}\big) \ge \|\Delta\|_F^2\min\big\{\sigma^2_{\min}\big((L+\Delta')^{-1}\big) : \|\Delta'\|_F \le Mr_n\big\}. \qquad (50)

The second inequality uses Jensen's inequality for the concave function σ_min(·), and the equality uses the fact that σ_min(A ⊗ A) = σ_min(A)² for any positive (semi)definite matrix A. Using the triangle inequality on the matrix operator norm, we have

\sigma^2_{\min}\big((L+\Delta)^{-1}\big) = \sigma^{-2}_{\max}(L+\Delta) \ge \big(\||L\||_2 + \||\Delta\||_2\big)^{-2} \ge \big(2\||L\||_2\big)^{-2} \ge \frac{1}{2\kappa^2},

where the second inequality holds with high probability since |||∆|||₂ ≤ ‖∆‖_F = Mr_n ≤ |||L|||₂ as r_n → 0, and the last inequality follows from Assumption A4. This gives the lower bound for the first term in (49):

(a) \ge \frac{1}{2\kappa^2}\|\Delta\|_F^2 = \frac{1}{2\kappa^2}M^2r_n^2. \qquad (51)

To deal with (b), we start by recalling some notation. We let S = {(r, j) : L_rj ≠ 0} denote the support of L, and s = Σ_{r=2}^p K_r the number of non-zero off-diagonal elements. Note also that on the complement of the support the penalty reduces to a weighted ℓ1 norm, Σ_ℓ w_{ℓℓ}|L_rℓ| = Σ_ℓ |L_rℓ|, since w_{ℓℓ} = 1 by (7). Then, by the Cauchy-Schwarz inequality,

\big|\mathrm{tr}\big[(SL^T - L^{-1})\Delta\big]\big| \le \sqrt{s+p}\,\|(SL^T-L^{-1})_{\mathcal{S}}\|_\infty\|\Delta_{\mathcal{S}}\|_F + \|SL^T-L^{-1}\|_\infty\|\Delta_{\mathcal{S}^c}\|_{2,1} \le C_1\sqrt{\frac{(s+p)\log p}{n}}\,\|\Delta_{\mathcal{S}}\|_F + C_1\sqrt{\frac{\log p}{n}}\,\|\Delta_{\mathcal{S}^c}\|_{2,1}, \qquad (52)

where the last inequality comes from Lemma 15 with probability tending to 1.

To bound the penalty terms, we note that, since L_{S^c} = 0,

\|L+\Delta\|_{2,1} - \|L\|_{2,1} \ge \|L_{\mathcal{S}}+\Delta_{\mathcal{S}}\|_{2,1} + \|\Delta_{\mathcal{S}^c}\|_{2,1} - \|L_{\mathcal{S}}\|_{2,1} \ge \|\Delta_{\mathcal{S}^c}\|_{2,1} - \|\Delta_{\mathcal{S}}\|_{2,1},

where the last inequality comes from the triangle inequality. To give an upper bound on λ‖∆_S‖_{2,1}, we observe that 2λb ≤ aλ² + b²/a holds for any a > 0 and that, for the weights (7), Σ_{ℓ≥m}(ℓ−m+1)^{−4} is bounded by a constant C₂ > 0; it follows that

\lambda\|\Delta_{\mathcal{S}}\|_{2,1} \le C_2\kappa^2s\lambda^2 + \frac{1}{4\kappa^2}\|\Delta_{\mathcal{S}}\|_F^2 \le C_2\kappa^2s\lambda^2 + \frac{1}{4\kappa^2}\|\Delta\|_F^2,

so that

\lambda\big(\|L+\Delta\|_{2,1} - \|L\|_{2,1}\big) \ge \lambda\|\Delta_{\mathcal{S}^c}\|_{2,1} - C_2\kappa^2s\lambda^2 - \frac{1}{4\kappa^2}\|\Delta\|_F^2. \qquad (53)

Finally, combining (51), (52), and (53): for any ε < 1, choose λ = (C₁/ε)√(log p/n). Since ‖∆‖_F = Mr_n,

G(\Delta) \ge \frac{1}{4\kappa^2}M^2r_n^2 - C_1Mr_n^2 + C_1\Big(\frac{1}{\varepsilon}-1\Big)\sqrt{\frac{\log p}{n}}\,\|\Delta_{\mathcal{S}^c}\|_{2,1} - \frac{C_2C_1^2\kappa^2}{\varepsilon^2}\cdot\frac{s\log p}{n} \ge r_n^2\Big(\frac{M^2}{4\kappa^2} - C_1M - \frac{C_2C_1^2\kappa^2}{\varepsilon^2}\Big) > 0

for M sufficiently large.

Appendix J. Proof of Lemma 10

Proof Denote

\mathcal{L}\big(\tau, z, \beta; \nu, \phi, a^{(\ell)}\big) = -2\log\tau + \frac{1}{n}\|z\|_2^2 + \nu(\tau - \beta_r) + \frac{1}{n}\big\langle\phi, z - X_{1:r}\beta\big\rangle + \lambda\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ a^{(\ell)}, \beta\big\rangle.

Then the primal (8) can be written equivalently as

\min_{\tau,z,\beta}\;\max_{\nu,\phi,a^{(\ell)}}\Big\{\mathcal{L}\big(\tau, z, \beta; \nu, \phi, a^{(\ell)}\big) : \|a^{(\ell)}\|_2 \le 1,\; a^{(\ell)}_{g^c_{r,\ell}} = 0\Big\}.
The dual function can then be written as

g\big(\nu, \phi, a^{(\ell)}\big) = \inf_{\tau,z,\beta}\mathcal{L}\big(\tau, z, \beta; \nu, \phi, a^{(\ell)}\big) = \inf_\tau\{-2\log\tau + \nu\tau\} + \inf_z\Big\{\frac{1}{n}\|z\|_2^2 + \frac{1}{n}\langle\phi, z\rangle\Big\} + \inf_\beta\Big\{\Big\langle-\nu e_r - \frac{1}{n}X_{1:r}^T\phi + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)},\,\beta\Big\rangle\Big\}
= 2\log\nu - 2\log2 + 2 - \frac{1}{4n}\|\phi\|_2^2, \quad\text{provided } \nu > 0 \text{ and } -\nu e_r - \frac{1}{n}X_{1:r}^T\phi + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)} = 0,

where e_r ∈ R^r is such that (e_r)_r = 1 and (e_r)_j = 0 for all j ≠ r. Thus the dual problem (up to a constant) is

\max_{\nu,\phi,a^{(\ell)}}g\big(\nu,\phi,a^{(\ell)}\big) = \min_{\nu,\phi,a^{(\ell)}}\Big\{-2\log\nu + \frac{1}{4n}\|\phi\|_2^2\Big\} \quad\text{s.t.}\quad \nu > 0,\; \|a^{(\ell)}\|_2 \le 1,\; a^{(\ell)}_{g^c_{r,\ell}} = 0,\; \frac{1}{n}X_{1:r}^T\phi = \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ a^{(\ell)} - \nu e_r.

The primal-dual relation is

\hat\beta_r = \hat\tau = \frac{2}{\hat\nu}, \qquad \hat\phi = -2\hat{z} = -2X_{1:r}\hat\beta.

This implies that at the optimal points,

-\frac{2}{\hat\beta_r}e_r + 2S_{1:r,1:r}\hat\beta + \lambda\sum_{\ell=1}^{r-1}W^{(\ell)}\circ\hat{a}^{(\ell)} = 0,

which is (31). If we denote the objective function f(β) = −2 log β_r + ⟨S_{1:r,1:r}, ββ^T⟩ + λP(β), where P(β) = Σ_{ℓ=1}^{r−1}‖W^{(ℓ)}∘β‖₂, then from the equality f(β̂) = L(τ̂, ẑ, β̂; ν̂, φ̂, â^{(ℓ)}) together with the primal-dual relation, we have

\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ\hat{a}^{(\ell)}, \hat\beta\big\rangle = \sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ\hat\beta, \hat{a}^{(\ell)}\big\rangle = P(\hat\beta).

Suppose there exists some ℓ with β̂_{g_{r,ℓ}} ≠ 0 but â^{(ℓ)}_{g_{r,ℓ}} ≠ (W^{(ℓ)}∘β̂)_{g_{r,ℓ}}/‖(W^{(ℓ)}∘β̂)_{g_{r,ℓ}}‖₂; then ⟨W^{(ℓ)}∘β̂, â^{(ℓ)}⟩ < ‖W^{(ℓ)}∘β̂‖₂, while for every other ℓ', by the Cauchy-Schwarz inequality, ⟨W^{(ℓ')}∘β̂, â^{(ℓ')}⟩ ≤ ‖W^{(ℓ')}∘β̂‖₂. Therefore, summing over all ℓ = 1, …, r−1,

\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ\hat\beta, \hat{a}^{(\ell)}\big\rangle < \sum_{\ell=1}^{r-1}\|W^{(\ell)}\circ\hat\beta\|_2 = P(\hat\beta),

which leads to a contradiction. Thus â^{(ℓ)}_{g_{r,ℓ}} = (W^{(ℓ)}∘β̂)_{g_{r,ℓ}}/‖(W^{(ℓ)}∘β̂)_{g_{r,ℓ}}‖₂ for β̂_{g_{r,ℓ}} ≠ 0, and ‖â^{(ℓ)}_{g_{r,ℓ}}‖₂ ≤ 1 for β̂_{g_{r,ℓ}} = 0.

Appendix K. Proof of Lemma 11

Proof In this proof, we continue to use the notation of Appendix J. Observe that L(τ, z, β; ν, φ, a^{(ℓ)}) is jointly convex in (τ, z, β), and it is strictly convex in τ and z. Thus, the minimizers ẑ and τ̂ are unique. To see this in a more general setting, suppose without loss of generality that f(x, y) is convex in y and strictly convex in x. Then for x₁ ≠ x₂ and θ ∈ (0, 1) we have

f(\theta x_1 + (1-\theta)x_2, y) < \theta f(x_1, y) + (1-\theta)f(x_2, y).

Now suppose (x̂₁, ŷ) and (x̂₂, ŷ) with x̂₁ ≠ x̂₂ are both minima of f; then taking θ = 1/2 we have f((x̂₁+x̂₂)/2, ŷ) < f(x̂₁, ŷ) = f(x̂₂, ŷ), which leads to a contradiction.

By the primal-dual relation, we know that if β̂ and β̃ are two solutions to (8), then β̂_r = β̃_r and X_{1:r}β̂ = X_{1:r}β̃. So from the equality f(β̂) = f(β̃) we know that P(β̃) = P(β̂). Also, by

f(\hat\beta) = \mathcal{L}\big(\hat\tau, \hat{z}, \hat\beta; \hat\nu, \hat\phi, \hat{a}^{(\ell)}\big) \le \mathcal{L}\big(\hat\tau, \hat{z}, \tilde\beta; \hat\nu, \hat\phi, \hat{a}^{(\ell)}\big) \le \max_{\nu,\phi,a^{(\ell)}}\mathcal{L}\big(\hat\tau, \hat{z}, \tilde\beta; \nu, \phi, a^{(\ell)}\big) = f(\tilde\beta),

we have L(τ̂, ẑ, β̂; ν̂, φ̂, â^{(ℓ)}) = L(τ̂, ẑ, β̃; ν̂, φ̂, â^{(ℓ)}), so that

\sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ\hat{a}^{(\ell)}, \tilde\beta\big\rangle = \sum_{\ell=1}^{r-1}\big\langle W^{(\ell)}\circ\hat{a}^{(\ell)}, \hat\beta\big\rangle = P(\hat\beta) = P(\tilde\beta) = \sum_{\ell=1}^{r-1}\big\|(W^{(\ell)}\circ\tilde\beta)_{g_{r,\ell}}\big\|_2.

Now for any ℓ ≤ r−1, suppose ‖â^{(ℓ)}‖₂ < 1; then for the equality above to hold, we must have β̃_{g_{r,ℓ}} = 0. Therefore, by Lemma 10, β̂_{g_{r,ℓ}} = 0 implies β̃_{g_{r,ℓ}} = 0, so any other solution to (8) cannot be less sparse than β̂.

Appendix L. Proof of Lemma 12

Proof By Lemma 11, any other solution β̃ to (8) must have β̃_{g_{J(β̂)}} = 0. Recall that J(β̂) = r − 1 − K(β̂). The original problem (8) can thus be written equivalently as

\min_{\gamma\in R^{K(\hat\beta)+1}:\,\gamma_{K(\hat\beta)+1}>0}\;-2\log\gamma_{K(\hat\beta)+1} + \frac{1}{n}\|X_{\hat{S}}\gamma\|_2^2 + \lambda\sum_\ell\|\hat{W}^{(\ell)}\circ\gamma\|_2,

where Ŵ^{(ℓ)} = W^{(ℓ+Ĵ)}_Ŝ. Note that the penalty term is a convex function of γ. The Hessian matrix of the first term is a diagonal matrix of dimension |Ŝ| = K(β̂)+1 with non-negative diagonal entries. The Hessian matrix of the second term is 2S_{Ŝ,Ŝ}. Then, by Assumption A1, uniqueness follows from strict convexity.

Appendix M. Proof of Lemma 14

Proof Recall that

M_n = C_I^TC_I + \frac{4}{n^2\lambda^2}\tilde\beta_r^2\|O_IE_r\|_2^2.

We cite Lemma 9 (specifically in the form (60)) of Wainwright (2009) here for completeness.

Lemma 18 (Wainwright 2009) For k ≤ n, let X_I ∈ R^{n×k} have i.i.d. rows from a multivariate Gaussian distribution with mean 0 and covariance matrix Σ.
If Σ has minimum eigenvalue bounded away from zero, then |||(X_I^TX_I/n)^{−1}|||₂ is bounded by a constant multiple of the inverse of that eigenvalue with probability at least 1 − 2exp(−n/2).

By the lemma above, Assumption A4, and (41),

C_I^TC_I = \frac{1}{n}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I^T\Big(\frac{X_I^TX_I}{n}\Big)^{-1}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I \le \frac{3\kappa^2\pi^2K}{n}

with probability greater than 1 − 2exp(−n/2).

Next we deal with the second term in M_n. Recall from (37) that

\frac{4}{n^2\lambda^2}\tilde\beta_r^2\|O_IE_r\|_2^2 = \frac{4\|O_IE_r\|_2^2}{n^2\lambda^2}\cdot\Bigg[\frac{\frac{\lambda}{2}X_r^TC_I + \sqrt{\frac{\lambda^2}{4}(X_r^TC_I)^2 + \frac{4}{n}X_r^TO_IX_r}}{\frac{2}{n}X_r^TO_IX_r}\Bigg]^2 \le \frac{(X_r^TC_I)^2}{\|O_IE_r\|_2^2} + \frac{16}{n\lambda^2},

using (a+b)² ≤ 2a² + 2b² and X_r^TO_IX_r = ‖O_IE_r‖₂². The next lemma gives us a handle on the numerator of the first term.

Lemma 19 Using the general weights (7), we have

P\big[|X_r^TC_I| \ge 1\big] \le 2\exp\Big(-\frac{n\alpha^2}{6\theta_r^{(r)}\kappa^2\pi^2K}\Big) + 2\exp(-n/2).

Proof Conditioned on X_I, from the decomposition (39) and the definition of C_I,

X_r^TC_I = \Sigma_{rI}(\Sigma_{II})^{-1}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I + E_r^TX_I(X_I^TX_I)^{-1}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I.

By the irrepresentability assumption (A3) and (41),

\Big|\Sigma_{rI}(\Sigma_{II})^{-1}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I\Big| \le 1 - \alpha.

Note that Var(E_{ir}) = θ_r^{(r)} for i = 1, …, n. Let B^{(r)} = E_r^TX_I(X_I^TX_I)^{−1}(Σ_ℓ W^{(ℓ)}∘ã^{(ℓ)})_I. By Lemma 18, B^{(r)} has mean zero and variance at most

\mathrm{Var}(B^{(r)} \mid X_I) = \frac{\theta_r^{(r)}}{n}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I^T\Big(\frac{X_I^TX_I}{n}\Big)^{-1}\Big(\sum_\ell W^{(\ell)}\circ\tilde{a}^{(\ell)}\Big)_I \le \frac{3\theta_r^{(r)}\kappa^2\pi^2K}{n}

with probability greater than 1 − 2exp(−n/2). By Lemma 13, we have that

P\big[|B^{(r)}| \ge \alpha\big] \le 2\exp\Big(-\frac{n\alpha^2}{6\theta_r^{(r)}\kappa^2\pi^2K}\Big) + 2\exp(-n/2),

and the result follows since |X_r^TC_I| ≥ 1 requires |B^{(r)}| ≥ α.

Since O_I is a projection of rank n − K, ‖O_IE_r‖₂² ∼ θ_r^{(r)}χ²(n−K). To bound it, we cite a concentration inequality from Wainwright (2009) (specifically (54b)) as the following lemma:

Lemma 20 (Tail bounds for χ²-variates, Wainwright 2009) For a centralized χ²-variate X with d degrees of freedom and any ε ∈ (0, 1/2), we have

P\big[X \le d(1-\varepsilon)\big] \le \exp\Big(-\frac{d\varepsilon^2}{4}\Big).

From Lemma 20 it follows that

P\big[\|O_IE_r\|_2^2 \le \theta_r^{(r)}(n-K)(1-\varepsilon)\big] \le \exp\Big(-\frac{(n-K)\varepsilon^2}{4}\Big),

which together with Lemma 19 implies that

P\Big[\frac{(X_r^TC_I)^2}{\|O_IE_r\|_2^2} \ge \frac{1}{\theta_r^{(r)}(n-K)(1-\varepsilon)}\Big] \le 2\exp\Big(-\frac{n\alpha^2}{6\theta_r^{(r)}\kappa^2\pi^2K}\Big) + 2\exp(-n/2) + \exp\Big(-\frac{(n-K)\varepsilon^2}{4}\Big).

The result follows from a union bound.

Appendix N. Proof of Lemma 15

Proof The proof strategy is based on the proof of Lemma 2 in Bien et al. (2016). For the design matrix X ∈ R^{n×p} with independent rows, denote X_{i·} ∈ R^p. Then the X_{i·} are i.i.d. with mean 0 and true covariance matrix Σ = (L^TL)^{−1} for i = 1, …, n, and X̄ = (1/n)Σ_{i=1}^n X_{i·} has mean 0 and covariance (1/n)Σ. Let Y_{i·} = LX_{i·} ∈ R^p. Then the Y_{i·} are i.i.d. with mean 0 and covariance LΣL^T = L(L^TL)^{−1}L^T = I_p, and Ȳ = (1/n)Σ_{i=1}^n Y_{i·} = LX̄ has mean zero and covariance (1/n)I_p. Also the corresponding design matrix Y = XL^T has independent rows. Then

SL^T = \frac{1}{n}\sum_{i=1}^n(X_{i\cdot}-\bar{X})(X_{i\cdot}-\bar{X})^TL^T = \frac{1}{n}\sum_{i=1}^n(X_{i\cdot}-\bar{X})(Y_{i\cdot}-\bar{Y})^T,

so we have

(SL^T)_{ij} = \frac{1}{n}\sum_{k=1}^nX_{ki}Y_{kj} - \bar{X}_i\bar{Y}_j.

Letting W = SL^T − L^{−1}, we have that

W_{ij} = \frac{1}{n}\sum_{k=1}^nX_{ki}Y_{kj} - (L^{-1})_{ij} - \bar{X}_i\bar{Y}_j,

so that

P\big[\max_{ij}|W_{ij}| > t\big] \le P\Big[\max_{ij}\Big|\frac{1}{n}\sum_{k=1}^nX_{ki}Y_{kj} - (L^{-1})_{ij}\Big| > \frac{t}{2}\Big] + P\Big[\max_{ij}|\bar{X}_i\bar{Y}_j| > \frac{t}{2}\Big] \le p^2\max_{ij}P\Big[\Big|\frac{1}{n}\sum_{k=1}^nX_{ki}Y_{kj} - (L^{-1})_{ij}\Big| > \frac{t}{2}\Big] + p\max_iP\big[|\bar{X}_i| > \sqrt{t/2}\big] + p\max_jP\big[|\bar{Y}_j| > \sqrt{t/2}\big] := p^2\max_{ij}I_{ij} + p\max_iI^X_i + p\max_jI^Y_j.

Consider I^X_i first. Since the X_{ki} are independent sub-Gaussian with variance Σ_{ii} for k = 1, …, n, we have

E\exp\Big(\frac{t\bar{X}_i}{\sqrt{\Sigma_{ii}/n}}\Big) = \prod_{k=1}^nE\exp\Big(\frac{tX_{ki}}{\sqrt{n\Sigma_{ii}}}\Big) \le \prod_{k=1}^n\exp(C_1t^2/n) = \exp(C_1t^2)

by independence and the definition of sub-Gaussian, so X̄_i is sub-Gaussian with variance proxy Σ_{ii}/n. By Lemma 5.5 in Vershynin (2010), we have

P\big[|\bar{X}_i|/\sqrt{\Sigma_{ii}/n} > t\big] \le \exp(1 - t^2/K_1^2),

where K₁ is a constant that does not depend on i. Following the same argument, E exp(tȲ_i√n) ≤ exp(C₂t²), thus

P\big[|\bar{Y}_i|/\sqrt{1/n} > t\big] \le \exp(1 - t^2/K_2^2),

where K₂ is a constant that does not depend on i. And we have

I^X_i + I^Y_i = P\big[|\bar{X}_i| > \sqrt{t/2}\big] + P\big[|\bar{Y}_i| > \sqrt{t/2}\big] \le \exp\Big(1 - \frac{nt}{2K_1^2\Sigma_{ii}}\Big) + \exp\Big(1 - \frac{nt}{2K_2^2}\Big),

so that

\max_i(I^X_i + I^Y_i) \le 4\exp\Big(-\frac{C_1nt}{\max_i\Sigma_{ii}}\Big) + 4\exp(-C_2nt)

for some constants C₁, C₂. Now consider the term I_{ij}. We have shown that both X and Y have independent rows, so for any i, j, the Z^{(ij)}_k = X_{ki}Y_{kj} are independent for k = 1, …, n.
Let X ∼ N(0, Σ) and Y = LX ∼ N(0, I_p); then

E(X_{ki}Y_{kj}) = \mathrm{Cov}(X, LX)_{ij} = \big(\mathrm{Cov}(X, X)L^T\big)_{ij} = (\Sigma L^T)_{ij} = (L^{-1})_{ij}.

If there exist ν_{ij} and c_{ij} such that

\sum_{k=1}^nE\big(X_{ki}^2Y_{kj}^2\big) \le \nu_{ij} \quad\text{and}\quad \sum_{k=1}^nE\big[(X_{ki}Y_{kj})_+^q\big] \le \frac{q!}{2}\nu_{ij}c_{ij}^{q-2} \text{ for all integers } q \ge 3,

then by Theorem 2.10 (Corollary 2.11) in Boucheron et al. (2013), for all t > 0 we have

P\Big[\Big|\sum_{k=1}^nX_{ki}Y_{kj} - n(L^{-1})_{ij}\Big| > t\Big] \le 2\exp\Big(-\frac{t^2}{2(\nu_{ij} + c_{ij}t)}\Big).

The rest of the proof focuses on characterizing ν_{ij} and c_{ij}. First, Lemma 5.5 in Vershynin (2010) shows that, for some constant K₃ that does not depend on j,

\big(E|X_{ij}|^q\big)^{1/q} \le K_3\sqrt{q}\sqrt{\Sigma_{jj}} \text{ holds for all } q \ge 1,

thus E|X_{ij}|^q ≤ K₃^q q^{q/2}(Σ_{jj})^{q/2}. Following the same argument, there exists some constant K₄ that does not depend on j such that E|Y_{ij}|^q ≤ K₄^q q^{q/2} for all q ≥ 1. Then

\sum_{k=1}^nE\big(X_{ki}^2Y_{kj}^2\big) \le \sum_{k=1}^n\sqrt{EX_{ki}^4}\sqrt{EY_{kj}^4} \le n\cdot(4K_3^2\Sigma_{ii})\cdot(4K_4^2) = 16nK_3^2K_4^2\Sigma_{ii},

and

\sum_{k=1}^nE\big[(X_{ki}Y_{kj})_+^q\big] \le \sum_{k=1}^n\sqrt{EX_{ki}^{2q}}\sqrt{EY_{kj}^{2q}} \le nK_3^qK_4^q(2q)^q(\Sigma_{ii})^{q/2}.

Hence we may take

\nu_{ij} = K_5n\Sigma_{ii} \quad\text{and}\quad c_{ij} = K_5\sqrt{\Sigma_{ii}}

for some K₅ large enough that does not depend on i, j. Now we have

I_{ij} \le 2\exp\Big(-\frac{n^2t^2}{4\big(2\nu_{ij} + c_{ij}tn\big)}\Big) = 2\exp\Big(-\frac{nt^2}{4\big(2K_5\Sigma_{ii} + K_5\sqrt{\Sigma_{ii}}\,t\big)}\Big).

If t ≤ 2max_i√Σ_{ii}, then with C₃ = (16K₅ max_iΣ_{ii})^{−1} we have I_{ij} ≤ 2exp(−C₃nt²). To sum up, for any 0 < t ≤ 2max_i√Σ_{ii},

P\big[\max_{ij}|W_{ij}| > t\big] \le 2p^2\exp(-C_3nt^2) + 4p\exp\Big(-\frac{C_1nt}{\max_i\Sigma_{ii}}\Big) + 4p\exp(-C_2nt).

References

Bryon Aragam and Qing Zhou. Concave penalized estimation of sparse Gaussian Bayesian networks. Journal of Machine Learning Research, 16:2273–2328, 2015.

Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008.

Peter J. Bickel and Elizaveta Levina. Regularized estimation of large covariance matrices. The Annals of Statistics, 36(1):199–227, 2008.

Jacob Bien, Florentina Bunea, and Luo Xiao. Convex banding of the covariance matrix. Journal of the American Statistical Association, 111(514):834–845, 2016.

Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.

Tony Cai, Weidong Liu, and Xi Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594–607, 2011.

International HapMap 3 Consortium et al. Integrating common and rare genetic variation in diverse human populations. Nature, 467(7311):52–58, 2010.

Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second edition. Springer, New York, 2009.

Jianhua Z. Huang, Naiping Liu, Mohsen Pourahmadi, and Linxu Liu. Covariance matrix selection and estimation via penalised normal likelihood. Biometrika, 93(1):85–98, 2006.

Rodolphe Jenatton, Jean-Yves Audibert, and Francis Bach. Structured variable selection with sparsity-inducing norms. Journal of Machine Learning Research, 12:2777–2824, 2011.

K. Khare, S. Oh, S. Rahman, and B. Rajaratnam. A convex framework for high-dimensional sparse Cholesky based covariance estimation. arXiv e-prints, October 2016.

Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam.
A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(4):803–825, 2014.

Elizaveta Levina, Adam Rothman, and Ji Zhu. Sparse estimation of large covariance matrices via a nested lasso penalty. The Annals of Applied Statistics, 2(1):245–263, 2008.

Han Liu and Lie Wang. TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models. arXiv preprint arXiv:1209.2437, 2012.

Weidong Liu and Xi Luo. High-dimensional sparse precision matrix estimation via sparse column inverse operator. arXiv preprint arXiv:1203.3896, 2012.

Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3):1436–1462, 2006.

Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009.

Mohsen Pourahmadi. Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. Biometrika, 86(3):677–690, 1999.

Mohsen Pourahmadi. High-Dimensional Covariance Estimation: With High-Dimensional Data. John Wiley & Sons, 2013.

R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2016. URL https://www.R-project.org/.

Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.

Adam J. Rothman, Peter J. Bickel, Elizaveta Levina, and Ji Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.

Ali Shojaie and George Michailidis. Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs. Biometrika, 97(3):519–538, 2010.

Nicolas Städler, Peter Bühlmann, and Sara van de Geer. ℓ1-penalization for mixture regression models (with discussion). Test, 19:209–285, 2010.

Tingni Sun and Cun-Hui Zhang. Comments on: ℓ1-penalization for mixture regression models. Test, 19(2):270–275, 2010.

Tingni Sun and Cun-Hui Zhang. Scaled sparse linear regression. Biometrika, 99(4):879–898, 2012.

Tingni Sun and Cun-Hui Zhang. Sparse matrix inversion with scaled lasso. Journal of Machine Learning Research, 14(1):3385–3418, 2013.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.

Sara A. van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.

Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.

Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.

Wei Biao Wu and Mohsen Pourahmadi. Nonparametric estimation of large covariance matrices of longitudinal data. Biometrika, 90(4):831–844, 2003.

Xiaohan Yan and Jacob Bien. Hierarchical sparse modeling: A choice of two regularizers. arXiv preprint arXiv:1512.01631, 2015.

Ming Yuan. High dimensional inverse covariance matrix estimation via linear programming.
Journal of Machine Learning Research, 11:2261–2286, 2010.

Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.

Teng Zhang and Hui Zou. Sparse precision matrix estimation via lasso penalized D-trace loss. Biometrika, 101(1):103–120, 2014.

Peng Zhao and Bin Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.

Peng Zhao, Guilherme Rocha, and Bin Yu. The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics, 37(6A):3468–3497, 2009.

Dale L. Zimmerman and Vicente A. Núñez-Antón. Antedependence Models for Longitudinal Data. CRC Press, 2009.

Hui Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006.