# Learning Graphical Models With Hubs

Journal of Machine Learning Research 15 (2014) 3297-3331. Submitted 9/13; Revised 5/14; Published 10/14.

Kean Ming Tan (keanming@uw.edu), Department of Biostatistics, University of Washington, Seattle, WA 98195
Palma London (palondon@uw.edu), Department of Electrical Engineering, University of Washington, Seattle, WA 98195
Karthik Mohan (karna@uw.edu), Department of Electrical Engineering, University of Washington, Seattle, WA 98195
Su-In Lee (suinlee@cs.washington.edu), Department of Computer Science and Engineering, Genome Sciences, University of Washington, Seattle, WA 98195
Maryam Fazel (mfazel@uw.edu), Department of Electrical Engineering, University of Washington, Seattle, WA 98195
Daniela Witten (dwitten@uw.edu), Department of Biostatistics, University of Washington, Seattle, WA 98195

Editor: Xiaotong Shen

Abstract

We consider the problem of learning a high-dimensional graphical model in which there are a few hub nodes that are densely connected to many other nodes. Many authors have studied the use of an $\ell_1$ penalty in order to learn a sparse graph in the high-dimensional setting. However, the $\ell_1$ penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set.

Keywords: Gaussian graphical model, covariance graph, binary network, lasso, hub, alternating direction method of multipliers

©2014 Kean Ming Tan, Palma London, Karthik Mohan, Su-In Lee, Maryam Fazel, and Daniela Witten.

1. Introduction

Graphical models are used to model a wide variety of systems, such as gene regulatory networks and social interaction networks. A graph consists of a set of p nodes, each representing a variable, and a set of edges between pairs of nodes. The presence of an edge between two nodes indicates a relationship between the two variables. In this manuscript, we consider two types of graphs: conditional independence graphs and marginal independence graphs. In a conditional independence graph, an edge connects a pair of variables if and only if they are conditionally dependent, that is, dependent conditional upon the other variables. In a marginal independence graph, two nodes are joined by an edge if and only if they are marginally dependent, that is, dependent without conditioning on the other variables.

In recent years, many authors have studied the problem of learning a graphical model in the high-dimensional setting, in which the number of variables p is larger than the number of observations n. Let $\mathbf{X}$ be an $n \times p$ matrix with rows $x_1, \dots, x_n$. Throughout the rest of the text, we will focus on three specific types of graphical models:

1. A Gaussian graphical model, where $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$. In this setting, $(\Sigma^{-1})_{jj'} = 0$ for some $j \neq j'$ if and only if the jth and j'th variables are conditionally independent (Mardia et al., 1979); therefore, the sparsity pattern of $\Sigma^{-1}$ determines the conditional independence graph.
2. A Gaussian covariance graph model, where $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$. Then $\Sigma_{jj'} = 0$ for some $j \neq j'$ if and only if the jth and j'th variables are marginally independent. Therefore, the sparsity pattern of $\Sigma$ determines the marginal independence graph.

3. A binary Ising graphical model, where $x_1, \dots, x_n$ are i.i.d. with density function

$$p(x, \Theta) = \frac{1}{Z(\Theta)} \exp\left( \sum_{j=1}^p \theta_{jj} x_j + \sum_{1 \le j < j' \le p} \theta_{jj'} x_j x_{j'} \right),$$

where $Z(\Theta)$ is the partition function. Here the sparsity pattern of $\Theta$ determines the conditional independence graph.

When p > n, many authors have proposed applying an $\ell_1$ penalty to the parameter encoding each edge, in order to encourage sparsity. For instance, such an approach is taken by Yuan and Lin (2007a), Friedman et al. (2007), Rothman et al. (2008), and Yuan (2008) in the Gaussian graphical model; El Karoui (2008), Bickel and Levina (2008), Rothman et al. (2009), Bien and Tibshirani (2011), Cai and Liu (2011), and Xue et al. (2012) in the covariance graph model; and Lee et al. (2007), Höfling and Tibshirani (2009), and Ravikumar et al. (2010) in the binary model.

However, applying an $\ell_1$ penalty to each edge can be interpreted as placing an independent double-exponential prior on each edge. Consequently, such an approach implicitly assumes that each edge is equally likely and independent of all other edges; this corresponds to an Erdős–Rényi graph in which most nodes have approximately the same number of edges (Erdős and Rényi, 1959). This is unrealistic in many real-world networks, in which certain nodes (which, unfortunately, are not known a priori) have far more edges than other nodes. An example is the network of webpages in the World Wide Web, where a relatively small number of webpages are connected to many other webpages (Barabási and Albert, 1999). A number of authors have shown that real-world networks are scale-free, in the sense that the number of edges for each node follows a power-law distribution; examples include gene-regulatory networks, social networks, and networks of collaborations among scientists (among others, Barabási and Albert, 1999; Barabási, 2009; Liljeros et al., 2001; Jeong et al., 2001; Newman, 2000; Li et al., 2005). More recently, Hao et al. (2012) have shown that certain genes, referred to as super hubs, regulate hundreds of downstream genes in a gene regulatory network, resulting in far denser connections than are typically seen in a scale-free network.

In this paper, we refer to very densely-connected nodes, such as the super hubs considered in Hao et al. (2012), as hubs. When we refer to hubs, we have in mind nodes that are connected to a very substantial number of other nodes in the network; in particular, we are referring to nodes that are much more densely-connected than even the most highly-connected node in a scale-free network. An example of a network containing hub nodes is shown in Figure 1.

Here we propose a convex penalty function for estimating graphs containing hubs. Our formulation simultaneously identifies the hubs and estimates the entire graph. The penalty function yields a convex optimization problem when combined with a convex loss function. We consider the application of this hub penalty function in modeling Gaussian graphical models, covariance graph models, and binary Ising models. Our formulation does not require that we know a priori which nodes in the network are hubs.

In related work, several authors have proposed methods to estimate a scale-free Gaussian graphical model (Liu and Ihler, 2011; Defazio and Caetano, 2012).
However, those methods do not model hub nodes: the most highly-connected nodes that arise in a scale-free network are far less connected than the hubs that we consider in our formulation. Under a different framework, some authors have proposed a screening-based procedure to identify hub nodes in the context of Gaussian graphical models (Hero and Rajaratnam, 2012; Firouzi and Hero, 2013). Our proposal outperforms such approaches when hub nodes are present (see the discussion in Section 3.5.4).

In Figure 1, the performance of our proposed approach is shown in a toy example in the context of a Gaussian graphical model. We see that when the true network contains hub nodes (Figure 1(a)), our proposed approach (Figure 1(b)) is much better able to recover the network than is the graphical lasso (Figure 1(c)), a well-studied approach that applies an $\ell_1$ penalty to each edge in the graph (Friedman et al., 2007).

We present the hub penalty function in Section 2. We then apply it to the Gaussian graphical model, the covariance graph model, and the binary Ising model in Sections 3, 4, and 5, respectively. In Section 6, we apply our approach to a webpage data set and a gene expression data set. We close with a discussion in Section 7.

Figure 1: (a): Heatmap of the inverse covariance matrix in a toy example of a Gaussian graphical model with four hub nodes. White elements are zero and colored elements are non-zero in the inverse covariance matrix. Thus, colored elements correspond to edges in the graph. (b): Estimate from the hub graphical lasso, proposed in this paper. (c): Graphical lasso estimate.

2. The General Formulation

In this section, we present a general framework to accommodate networks with hub nodes.

2.1 The Hub Penalty Function

Let $\mathbf{X}$ be an $n \times p$ data matrix, $\Theta$ a $p \times p$ symmetric matrix containing the parameters of interest, and $\ell(\mathbf{X}, \Theta)$ a loss function (assumed to be convex in $\Theta$). In order to obtain a sparse and interpretable graph estimate, many authors have considered the problem

$$\underset{\Theta \in \mathcal{S}}{\text{minimize}} \; \left\{ \ell(\mathbf{X}, \Theta) + \lambda \|\Theta - \mathrm{diag}(\Theta)\|_1 \right\}, \qquad (1)$$

where $\lambda$ is a non-negative tuning parameter, $\mathcal{S}$ is some set depending on the loss function, and $\|\cdot\|_1$ denotes the sum of the absolute values of the matrix elements. For instance, in the case of a Gaussian graphical model, we could take $\ell(\mathbf{X}, \Theta) = -\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta)$, the negative log-likelihood of the data, where $\mathbf{S}$ is the empirical covariance matrix and $\mathcal{S}$ is the set of $p \times p$ positive definite matrices. The solution to (1) can then be interpreted as an estimate of the inverse covariance matrix.

The $\ell_1$ penalty in (1) encourages zeros in the solution, but it typically does not yield an estimate that contains hubs. In order to explicitly model hub nodes in a graph, we wish to replace the $\ell_1$ penalty in (1) with a convex penalty that encourages a solution that can be decomposed as $\mathbf{Z} + \mathbf{V} + \mathbf{V}^T$, where $\mathbf{Z}$ is a sparse symmetric matrix, and $\mathbf{V}$ is a matrix whose columns are either entirely zero or almost entirely non-zero (see Figure 2). The sparse elements of $\mathbf{Z}$ represent edges between non-hub nodes, and the non-zero columns of $\mathbf{V}$ correspond to hub nodes. We achieve this goal via the hub penalty function, which takes the form

$$\mathcal{P}(\Theta) = \min_{\mathbf{V}, \mathbf{Z} :\, \Theta = \mathbf{V} + \mathbf{V}^T + \mathbf{Z}} \left\{ \lambda_1 \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \lambda_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V} - \mathrm{diag}(\mathbf{V}))_j\|_q \right\}. \qquad (2)$$

Here $\lambda_1$, $\lambda_2$, and $\lambda_3$ are nonnegative tuning parameters. Sparsity in $\mathbf{Z}$ is encouraged via the $\ell_1$ penalty on its off-diagonal elements, and is controlled by the value of $\lambda_1$. The $\ell_1$ and $\ell_1/\ell_q$ norms on the columns of $\mathbf{V}$ induce group sparsity when q = 2 (Yuan and Lin, 2007b; Simon et al., 2013); $\lambda_3$ controls the selection of hub nodes, and $\lambda_2$ controls the sparsity of each hub node's connections to other nodes.

Figure 2: Decomposition of a symmetric matrix $\Theta$ into $\mathbf{Z} + \mathbf{V} + \mathbf{V}^T$, where $\mathbf{Z}$ is sparse, and most columns of $\mathbf{V}$ are entirely zero. Blue, white, green, and red elements are diagonal, zero, non-zero in $\mathbf{Z}$, and non-zero due to two hubs in $\mathbf{V}$, respectively.
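To make the decomposition concrete, the following minimal numpy sketch assembles a toy $\Theta = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T$ and evaluates the three terms of the penalty (2) with q = 2 for a given (V, Z) pair. The function name and toy values are our own illustration, not part of the formal development.

```python
import numpy as np

def hub_penalty(Z, V, lam1, lam2, lam3):
    # Evaluate the three terms of (2) with q = 2 for a given decomposition
    # Theta = Z + V + V^T; the diagonals of Z and V are not penalized.
    Z_off = Z - np.diag(np.diag(Z))
    V_off = V - np.diag(np.diag(V))
    return (lam1 * np.abs(Z_off).sum()
            + lam2 * np.abs(V_off).sum()
            + lam3 * np.linalg.norm(V_off, axis=0).sum())  # sum of column l2 norms

# Toy example: node 0 acts as a hub (a dense column of V); Z holds one non-hub edge.
p = 5
V = np.zeros((p, p)); V[1:, 0] = 0.4
Z = np.zeros((p, p)); Z[2, 3] = Z[3, 2] = 0.7
Theta = Z + V + V.T  # the decomposed parameter matrix
print(hub_penalty(Z, V, lam1=1.0, lam2=0.5, lam3=2.0))
```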
The convex penalty (2) can be combined with $\ell(\mathbf{X}, \Theta)$ to yield the convex optimization problem

$$\underset{\Theta \in \mathcal{S}, \mathbf{V}, \mathbf{Z}}{\text{minimize}} \left\{ \ell(\mathbf{X}, \Theta) + \lambda_1 \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \lambda_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V} - \mathrm{diag}(\mathbf{V}))_j\|_q \right\} \;\text{ subject to } \Theta = \mathbf{V} + \mathbf{V}^T + \mathbf{Z}, \qquad (3)$$

where the set $\mathcal{S}$ depends on the loss function $\ell(\mathbf{X}, \Theta)$. Note that when $\lambda_2 \to \infty$ or $\lambda_3 \to \infty$, (3) reduces to (1). In this paper, we take q = 2, which leads to estimation of a network containing dense hub nodes. Other values of q, such as $q = \infty$, are also possible (see, e.g., Mohan et al., 2014). We note that the hub penalty function is closely related to recent work on overlapping group lasso penalties in the context of learning multiple sparse precision matrices (Mohan et al., 2014).

2.2 Algorithm

In order to solve (3) with q = 2, we use an alternating direction method of multipliers (ADMM) algorithm (see, e.g., Eckstein and Bertsekas, 1992; Boyd et al., 2010; Eckstein, 2012). ADMM is an attractive algorithm for this problem, as it allows us to decouple some of the terms in (3) that are difficult to optimize jointly. In order to develop an ADMM algorithm for (3) with guaranteed convergence, we reformulate it as a consensus problem, as in Ma et al. (2013). The convergence of the algorithm to the optimal solution follows from classical results (see, e.g., the review papers Boyd et al., 2010; Eckstein, 2012).

In greater detail, we let $B = (\Theta, \mathbf{V}, \mathbf{Z})$, $\tilde{B} = (\tilde{\Theta}, \tilde{\mathbf{V}}, \tilde{\mathbf{Z}})$,

$$f(B) = \ell(\mathbf{X}, \Theta) + \lambda_1 \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \lambda_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V} - \mathrm{diag}(\mathbf{V}))_j\|_2,$$

and

$$g(\tilde{B}) = \begin{cases} 0 & \text{if } \tilde{\Theta} = \tilde{\mathbf{V}} + \tilde{\mathbf{V}}^T + \tilde{\mathbf{Z}} \\ \infty & \text{otherwise.} \end{cases}$$

Then, we can rewrite (3) as

$$\underset{B, \tilde{B}}{\text{minimize}} \left\{ f(B) + g(\tilde{B}) \right\} \text{ subject to } B = \tilde{B}. \qquad (4)$$

The scaled augmented Lagrangian for (4) takes the form

$$L(B, \tilde{B}, W) = \ell(\mathbf{X}, \Theta) + \lambda_1 \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \lambda_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V} - \mathrm{diag}(\mathbf{V}))_j\|_2 + g(\tilde{B}) + \frac{\rho}{2} \|B - \tilde{B} + W\|_F^2,$$

where $B$ and $\tilde{B}$ are the primal variables, and $W = (W_1, W_2, W_3)$ is the dual variable. Note that the scaled augmented Lagrangian can be derived from the usual Lagrangian by adding a quadratic term and completing the square (Boyd et al., 2010).

A general algorithm for solving (3) is provided in Algorithm 1; the derivation is in Appendix A. Note that only the update for $\Theta$ (Step 2(a)i) depends on the form of the convex loss function $\ell(\mathbf{X}, \Theta)$.

Algorithm 1: ADMM Algorithm for Solving (3).

1. Initialize the parameters:
   (a) primal variables $\Theta$, $\mathbf{V}$, $\mathbf{Z}$, $\tilde{\Theta}$, $\tilde{\mathbf{V}}$, and $\tilde{\mathbf{Z}}$ to the $p \times p$ identity matrix;
   (b) dual variables $W_1$, $W_2$, and $W_3$ to the $p \times p$ zero matrix;
   (c) constants $\rho > 0$ and $\tau > 0$.
2. Iterate until the stopping criterion $\|\Theta^t - \Theta^{t-1}\|_F^2 / \|\Theta^{t-1}\|_F^2 \le \tau$ is met, where $\Theta^t$ is the value of $\Theta$ obtained at the tth iteration:
   (a) Update $\Theta$, $\mathbf{V}$, $\mathbf{Z}$:
      i. $\Theta = \arg\min_{\Theta \in \mathcal{S}} \left\{ \ell(\mathbf{X}, \Theta) + \frac{\rho}{2} \|\Theta - \tilde{\Theta} + W_1\|_F^2 \right\}$.
      ii. $\mathbf{Z} = S(\tilde{\mathbf{Z}} - W_3, \lambda_1/\rho)$, $\mathrm{diag}(\mathbf{Z}) = \mathrm{diag}(\tilde{\mathbf{Z}} - W_3)$. Here S denotes the soft-thresholding operator, applied element-wise to a matrix: $S(A_{ij}, b) = \mathrm{sign}(A_{ij}) \max(|A_{ij}| - b, 0)$.
      iii. $\mathbf{C} = \tilde{\mathbf{V}} - W_2 - \mathrm{diag}(\tilde{\mathbf{V}} - W_2)$.
      iv. $\mathbf{V}_j = \max\left(1 - \frac{\lambda_3/\rho}{\|S(\mathbf{C}_j, \lambda_2/\rho)\|_2}, 0\right) S(\mathbf{C}_j, \lambda_2/\rho)$ for $j = 1, \dots, p$.
      v. $\mathrm{diag}(\mathbf{V}) = \mathrm{diag}(\tilde{\mathbf{V}} - W_2)$.
   (b) Update $\tilde{\Theta}$, $\tilde{\mathbf{V}}$, $\tilde{\mathbf{Z}}$:
      i. $\Gamma = \frac{\rho}{6} \left[ (\Theta + W_1) - (\mathbf{V} + W_2) - (\mathbf{V} + W_2)^T - (\mathbf{Z} + W_3) \right]$;
      ii. $\tilde{\Theta} = \Theta + W_1 - \frac{1}{\rho}\Gamma$;
      iii. $\tilde{\mathbf{V}} = \frac{1}{\rho}(\Gamma + \Gamma^T) + \mathbf{V} + W_2$;
      iv. $\tilde{\mathbf{Z}} = \frac{1}{\rho}\Gamma + \mathbf{Z} + W_3$.
   (c) Update $W_1$, $W_2$, $W_3$:
      i. $W_1 = W_1 + \Theta - \tilde{\Theta}$;
      ii. $W_2 = W_2 + \mathbf{V} - \tilde{\mathbf{V}}$;
      iii. $W_3 = W_3 + \mathbf{Z} - \tilde{\mathbf{Z}}$.
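The Z and V updates in Steps 2(a)ii-v have closed forms. The numpy sketch below, with our own function names, mirrors those formulas: element-wise soft-thresholding for Z, followed by column-wise group soft-thresholding for V.

```python
import numpy as np

def soft_threshold(A, b):
    # Element-wise soft-thresholding operator S from Step 2(a)ii.
    return np.sign(A) * np.maximum(np.abs(A) - b, 0.0)

def update_Z(Z_tilde, W3, lam1, rho):
    # Step 2(a)ii: soft-threshold the off-diagonals; the diagonal passes through.
    Z = soft_threshold(Z_tilde - W3, lam1 / rho)
    np.fill_diagonal(Z, np.diag(Z_tilde - W3))
    return Z

def update_V(V_tilde, W2, lam2, lam3, rho):
    # Steps 2(a)iii-v: l1 shrinkage, then group (l2) shrinkage of each column.
    C = V_tilde - W2
    C = C - np.diag(np.diag(C))            # off-diagonal part only
    S_C = soft_threshold(C, lam2 / rho)    # inner l1 shrinkage
    norms = np.linalg.norm(S_C, axis=0)    # column l2 norms
    scale = np.maximum(1.0 - (lam3 / rho) / np.where(norms > 0, norms, 1.0), 0.0)
    V = S_C * scale                        # columns with small norm are zeroed out
    np.fill_diagonal(V, np.diag(V_tilde - W2))
    return V
```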
In the following sections, we consider special cases of (3) that lead to estimation of Gaussian graphical models, covariance graph models, and binary networks with hub nodes.

3. The Hub Graphical Lasso

Assume that $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$. The well-known graphical lasso problem (see, e.g., Friedman et al., 2007) takes the form of (1) with $\ell(\mathbf{X}, \Theta) = -\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta)$, and $\mathbf{S}$ the empirical covariance matrix of $\mathbf{X}$:

$$\underset{\Theta \in \mathcal{S}}{\text{minimize}} \left\{ -\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta) + \lambda \sum_{j \neq j'} |\Theta_{jj'}| \right\}, \qquad (5)$$

where $\mathcal{S} = \{\Theta : \Theta \succ 0 \text{ and } \Theta = \Theta^T\}$. The solution to this optimization problem serves as an estimate for $\Sigma^{-1}$. We now use the hub penalty function to extend the graphical lasso in order to accommodate hub nodes.

3.1 Formulation and Algorithm

We propose the hub graphical lasso (HGL) optimization problem, which takes the form

$$\underset{\Theta \in \mathcal{S}}{\text{minimize}} \left\{ -\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta) + \mathcal{P}(\Theta) \right\}. \qquad (6)$$

Again, $\mathcal{S} = \{\Theta : \Theta \succ 0 \text{ and } \Theta = \Theta^T\}$. It encourages a solution that contains hub nodes, as well as edges that connect non-hubs (Figure 1). Problem (6) can be solved using Algorithm 1. The update for $\Theta$ in Algorithm 1 (Step 2(a)i) can be derived by minimizing

$$-\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta) + \frac{\rho}{2}\|\Theta - \tilde{\Theta} + W_1\|_F^2 \qquad (7)$$

with respect to $\Theta$ (note that the constraint $\Theta \in \mathcal{S}$ in (6) is treated as an implicit constraint, due to the domain of definition of the log det function). This can be shown to have the solution

$$\Theta = \frac{1}{2}\mathbf{U}\left(\mathbf{D} + \sqrt{\mathbf{D}^2 + \frac{4}{\rho}\mathbf{I}}\right)\mathbf{U}^T, \qquad (8)$$

where $\mathbf{U}\mathbf{D}\mathbf{U}^T$ denotes the eigen-decomposition of $\tilde{\Theta} - W_1 - \frac{1}{\rho}\mathbf{S}$. The complexity of the ADMM algorithm for HGL is $O(p^3)$ per iteration; this is the complexity of the eigen-decomposition for updating $\Theta$.

We now briefly compare the computational time of the ADMM algorithm for solving (6) to that of an interior point method (using the solver Sedumi called from cvx). On a 1.86 GHz Intel Core 2 Duo machine, the interior point method takes 3 minutes, while ADMM takes only 1 second, on a data set with p = 30. We present a more extensive run time study for the ADMM algorithm for HGL in Appendix E.
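A sketch of this $\Theta$ update, assuming the eigen-decomposition expression in (8); the function name is our own.

```python
import numpy as np

def update_Theta_hgl(S, Theta_tilde, W1, rho):
    # Step 2(a)i for HGL: minimize -logdet(Theta) + tr(S @ Theta)
    # + (rho/2) * ||Theta - Theta_tilde + W1||_F^2 via one eigen-decomposition.
    A = Theta_tilde - W1 - S / rho
    d, U = np.linalg.eigh((A + A.T) / 2.0)  # symmetrize for numerical safety
    d_new = (d + np.sqrt(d ** 2 + 4.0 / rho)) / 2.0  # positive by construction
    return (U * d_new) @ U.T
```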
3.2 Conditions for HGL Solution to be Block Diagonal

In order to reduce computations for solving the HGL problem, we now present a necessary condition and a sufficient condition for the HGL solution to be block diagonal, subject to some permutation of the rows and columns. The conditions depend only on the tuning parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$. These conditions build upon similar results in the context of Gaussian graphical models from the recent literature (see, e.g., Witten et al., 2011; Mazumder and Hastie, 2012; Yang et al., 2012b; Danaher et al., 2014; Mohan et al., 2014). Let $C_1, C_2, \dots, C_K$ denote a partition of the p features.

Theorem 1 A sufficient condition for the HGL solution to be block diagonal with blocks given by $C_1, C_2, \dots, C_K$ is that $\min\left\{\lambda_1, \frac{\lambda_2}{2}\right\} > |S_{jj'}|$ for all $j \in C_k$, $j' \in C_{k'}$, $k \neq k'$.

Theorem 2 A necessary condition for the HGL solution to be block diagonal with blocks given by $C_1, C_2, \dots, C_K$ is that $\min\left\{\lambda_1, \frac{\lambda_2 + \lambda_3}{2}\right\} > |S_{jj'}|$ for all $j \in C_k$, $j' \in C_{k'}$, $k \neq k'$.

Theorem 1 implies that one can screen the empirical covariance matrix $\mathbf{S}$ to check if the HGL solution is block diagonal (using standard algorithms for identifying the connected components of an undirected graph; see, e.g., Tarjan, 1972). Suppose that the HGL solution is block diagonal with K blocks containing $p_1, \dots, p_K$ features, where $\sum_{k=1}^K p_k = p$. Then, one can simply solve the HGL problem on the features within each block separately. Recall that the bottleneck of the HGL algorithm is the eigen-decomposition for updating $\Theta$. The block diagonal condition leads to massive computational speed-ups for implementing the HGL algorithm: instead of computing an eigen-decomposition for a $p \times p$ matrix in each iteration of the HGL algorithm, we compute the eigen-decompositions of K matrices of dimensions $p_1 \times p_1, \dots, p_K \times p_K$. The computational complexity per iteration is reduced from $O(p^3)$ to $\sum_{k=1}^K O(p_k^3)$.

We illustrate the reduction in computational time due to these results in an example with p = 500. Without exploiting Theorem 1, the ADMM algorithm for HGL (with a particular value of λ) takes 159 seconds; in contrast, it takes only 22 seconds when Theorem 1 is applied. The estimated precision matrix has 107 connected components, the largest of which contains 212 nodes.
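The screening rule suggested by Theorem 1 can be implemented in a few lines. The sketch below, with our own function name, uses scipy's connected-components routine, connecting two features whenever $|S_{jj'}|$ meets the threshold $\min\{\lambda_1, \lambda_2/2\}$.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def hgl_blocks(S, lam1, lam2):
    # Screen S using the sufficient condition of Theorem 1: features j and j'
    # may share a block only if |S_jj'| >= min(lam1, lam2 / 2).
    thr = min(lam1, lam2 / 2.0)
    adj = (np.abs(S) >= thr).astype(int)
    np.fill_diagonal(adj, 0)
    n_blocks, labels = connected_components(csr_matrix(adj), directed=False)
    return n_blocks, labels  # solve HGL separately within each labeled block
```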
3.3 Some Properties of HGL

We now present several properties of the HGL optimization problem (6), which can be used to provide guidance on the suitable range for the tuning parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$. In what follows, $\mathbf{Z}^*$ and $\mathbf{V}^*$ denote the optimal solutions for $\mathbf{Z}$ and $\mathbf{V}$ in (6). Let $\frac{1}{s} + \frac{1}{q} = 1$ (recall that q appears in Equation 2).

Lemma 3 A sufficient condition for $\mathbf{Z}^*$ to be a diagonal matrix is that $\lambda_1 > \frac{\lambda_2 + \lambda_3}{2}$.

Lemma 4 A sufficient condition for $\mathbf{V}^*$ to be a diagonal matrix is that $\lambda_1 < \frac{\lambda_2}{2} + \frac{\lambda_3}{2(p-1)^{1/s}}$.

Corollary 5 A necessary condition for both $\mathbf{V}^*$ and $\mathbf{Z}^*$ to be non-diagonal matrices is that $\frac{\lambda_2}{2} + \frac{\lambda_3}{2(p-1)^{1/s}} \le \lambda_1 \le \frac{\lambda_2 + \lambda_3}{2}$.

Furthermore, (6) reduces to the graphical lasso problem (5) under a simple condition.

Lemma 6 If q = 1, then (6) reduces to (5) with tuning parameter $\min\left\{\lambda_1, \frac{\lambda_2 + \lambda_3}{2}\right\}$.

Note also that when $\lambda_2 \to \infty$ or $\lambda_3 \to \infty$, (6) reduces to (5) with tuning parameter $\lambda_1$. However, throughout the rest of this paper, we assume that q = 2, and that $\lambda_2$ and $\lambda_3$ are finite.

The solution $\hat{\Theta}$ of (6) is unique, since (6) is a strictly convex problem. We now consider the question of whether the decomposition $\hat{\Theta} = \hat{\mathbf{V}} + \hat{\mathbf{V}}^T + \hat{\mathbf{Z}}$ is unique. The decomposition is unique in a certain regime of the tuning parameters. For instance, according to Lemma 3, when $\lambda_1 > \frac{\lambda_2 + \lambda_3}{2}$, $\hat{\mathbf{Z}}$ is a diagonal matrix, and hence $\hat{\mathbf{V}}$ is unique. Similarly, according to Lemma 4, when $\lambda_1 < \frac{\lambda_2}{2} + \frac{\lambda_3}{2(p-1)^{1/s}}$, $\hat{\mathbf{V}}$ is a diagonal matrix, and hence $\hat{\mathbf{Z}}$ is unique. Studying more general conditions on $\mathbf{S}$ and on $\lambda_1$, $\lambda_2$, and $\lambda_3$ such that the decomposition is guaranteed to be unique is a challenging problem and is outside of the scope of this paper.

3.4 Tuning Parameter Selection

In this section, we propose a Bayesian information criterion (BIC)-type quantity for tuning parameter selection in (6). Recall from Section 2 that the hub penalty function (2) decomposes the parameter of interest into the sum of three matrices, $\Theta = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T$, and places an $\ell_1$ penalty on $\mathbf{Z}$ and an $\ell_1/\ell_2$ penalty on $\mathbf{V}$.

For the graphical lasso problem in (5), many authors have proposed to select the tuning parameter λ such that $\hat{\Theta}$ minimizes the quantity $-n \log\det(\hat{\Theta}) + n\,\mathrm{trace}(\mathbf{S}\hat{\Theta}) + \log(n) \cdot |\hat{\Theta}|$, where $|\hat{\Theta}|$ is the cardinality of $\hat{\Theta}$, that is, the number of unique non-zeros in $\hat{\Theta}$ (see, e.g., Yuan and Lin, 2007a).[1]

Using a similar idea, we propose the following BIC-type quantity for selecting the set of tuning parameters $(\lambda_1, \lambda_2, \lambda_3)$ for (6):

$$\mathrm{BIC}(\hat{\Theta}, \hat{\mathbf{V}}, \hat{\mathbf{Z}}) = -n \log\det(\hat{\Theta}) + n\,\mathrm{trace}(\mathbf{S}\hat{\Theta}) + \log(n) \cdot |\hat{\mathbf{Z}}| + \log(n) \cdot \left(\nu + c \left[|\hat{\mathbf{V}}| - \nu\right]\right),$$

where ν is the number of estimated hub nodes, that is, $\nu = \sum_{j=1}^p 1_{\{\|\hat{\mathbf{V}}_j\|_0 > 0\}}$, c is a constant between zero and one, and $|\hat{\mathbf{Z}}|$ and $|\hat{\mathbf{V}}|$ are the cardinalities (the number of unique non-zeros) of $\hat{\mathbf{Z}}$ and $\hat{\mathbf{V}}$, respectively. We select the set of tuning parameters $(\lambda_1, \lambda_2, \lambda_3)$ for which the quantity $\mathrm{BIC}(\hat{\Theta}, \hat{\mathbf{V}}, \hat{\mathbf{Z}})$ is minimized. Note that when the constant c is small, $\mathrm{BIC}(\hat{\Theta}, \hat{\mathbf{V}}, \hat{\mathbf{Z}})$ will favor more hub nodes in $\hat{\mathbf{V}}$. In this manuscript, we take c = 0.2.

[1] The term $\log(n) \cdot |\hat{\Theta}|$ is motivated by the fact that the degrees of freedom for an estimate involving the $\ell_1$ penalty can be approximated by the cardinality of the estimated parameter (Zou et al., 2007).
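A sketch of this criterion in numpy, with our own naming; cardinalities are counted on the off-diagonal entries, with the upper triangle used for the symmetric $\hat{\mathbf{Z}}$.

```python
import numpy as np

def hgl_bic(S, Theta, V, Z, n, c=0.2, tol=1e-5):
    # BIC-type criterion from Section 3.4 (a sketch); smaller is better.
    sign, logdet = np.linalg.slogdet(Theta)   # Theta is positive definite
    V_off = V - np.diag(np.diag(V))
    Z_off = Z - np.diag(np.diag(Z))
    card_Z = int(np.count_nonzero(np.abs(np.triu(Z_off, 1)) > tol))
    card_V = int(np.count_nonzero(np.abs(V_off) > tol))
    nu = int(np.count_nonzero(np.linalg.norm(V_off, axis=0) > tol))  # hub count
    return (-n * logdet + n * np.trace(S @ Theta)
            + np.log(n) * card_Z + np.log(n) * (nu + c * (card_V - nu)))
```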
3.5 Simulation Study

In this section, we compare HGL to two sets of proposals: proposals that learn an Erdős–Rényi Gaussian graphical model, and proposals that learn a Gaussian graphical model in which some nodes are highly-connected.

3.5.1 Notation and Measures of Performance

We start by defining some notation. Let $\hat{\Theta}$ be the estimate of $\Theta = \Sigma^{-1}$ from a given proposal, and let $\hat{\Theta}_j$ be its jth column. Let $\mathcal{H}$ denote the set of indices of the hub nodes in $\Theta$ (that is, this is the set of true hub nodes in the graph), and let $|\mathcal{H}|$ denote the cardinality of the set. In addition, let $\hat{\mathcal{H}}_r$ be the set of estimated hub nodes: the set of nodes in $\hat{\Theta}$ that are among the $|\mathcal{H}|$ most highly-connected nodes, and that have at least r edges. The values chosen for $|\mathcal{H}|$ and r depend on the simulation set-up, and will be specified in each simulation study.

We now define several measures of performance that will be used to evaluate the various methods; a sketch of their computation follows this list.

- Number of correctly estimated edges: $\sum_{j < j'} 1_{\{|\hat{\Theta}_{jj'}| > 10^{-5} \text{ and } |\Theta_{jj'}| \neq 0\}}$.
- Proportion of correctly estimated hub edges: $\left(\sum_{j \in \mathcal{H},\, j' \neq j} 1_{\{|\hat{\Theta}_{jj'}| > 10^{-5} \text{ and } |\Theta_{jj'}| \neq 0\}}\right) \Big/ \left(\sum_{j \in \mathcal{H},\, j' \neq j} 1_{\{|\Theta_{jj'}| \neq 0\}}\right)$.
- Proportion of correctly estimated hub nodes: $|\hat{\mathcal{H}}_r \cap \mathcal{H}| / |\mathcal{H}|$.
- Sum of squared errors: $\sum_{j < j'} (\hat{\Theta}_{jj'} - \Theta_{jj'})^2$.
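The following numpy sketch computes all four measures for a given estimate. Function and variable names are our own; H holds the indices of the true hub nodes.

```python
import numpy as np

def metrics(Theta_hat, Theta, H, r, tol=1e-5):
    # The four measures of performance from Section 3.5.1 (a sketch).
    p = Theta.shape[0]
    est = (np.abs(Theta_hat) > tol) & ~np.eye(p, dtype=bool)
    true = (Theta != 0) & ~np.eye(p, dtype=bool)
    iu = np.triu_indices(p, 1)
    correct_edges = int(np.sum(est[iu] & true[iu]))
    hub_rows = list(H)
    prop_hub_edges = np.sum(est[hub_rows, :] & true[hub_rows, :]) / np.sum(true[hub_rows, :])
    degrees = est.sum(axis=1)
    top = np.argsort(degrees)[::-1][:len(H)]          # |H| most connected nodes...
    H_hat = {j for j in top if degrees[j] >= r}       # ...having at least r edges
    prop_hubs = len(H_hat & set(H)) / len(H)
    sse = float(np.sum((Theta_hat[iu] - Theta[iu]) ** 2))
    return correct_edges, prop_hub_edges, prop_hubs, sse
```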
Figure 3: Simulation for the Gaussian graphical model. Row I: Results for Set-up I. Row II: Results for Set-up II. Row III: Results for Set-up III. The results are for n = 1000 and p = 1500. In each panel, the x-axis displays the number of estimated edges, and the vertical gray line is the number of edges in the true network. The y-axes are as follows. Column (a): number of correctly estimated edges; Column (b): proportion of correctly estimated hub edges; Column (c): proportion of correctly estimated hub nodes; Column (d): sum of squared errors. The black solid circles are the results for HGL based on tuning parameters selected using the BIC-type criterion defined in Section 3.4. Colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with $\lambda_3 = 0.5$, $\lambda_3 = 1$, and $\lambda_3 = 2$; and neighborhood selection (Meinshausen and Bühlmann, 2006).

We compare HGL to the following proposals:

- The hub screening procedure of Hero and Rajaratnam (2012), in which elements of the partial correlation matrix are thresholded based on their absolute value, and a hub node is declared if the number of non-zero elements in the corresponding column of the thresholded partial correlation matrix is sufficiently large. Note that the purpose of Hero and Rajaratnam (2012) is to screen for hub nodes, rather than to estimate the individual edges in the network.
- The scale-free network estimation procedure of Liu and Ihler (2011). This is the solution to the non-convex optimization problem
$$\underset{\Theta \in \mathcal{S}}{\text{minimize}} \left\{ -\log\det\Theta + \mathrm{trace}(\mathbf{S}\Theta) + \alpha \sum_{j=1}^p \log(\|\theta_{\backslash j}\|_1 + \epsilon_j) + \sum_{j=1}^p \beta_j |\theta_{jj}| \right\},$$
where $\theta_{\backslash j} = \{\theta_{jj'} \mid j' \neq j\}$, and $\epsilon_j$, $\beta_j$, and α are tuning parameters. Here, $\mathcal{S} = \{\Theta : \Theta \succ 0 \text{ and } \Theta = \Theta^T\}$.
- The sparse partial correlation estimation procedure of Peng et al. (2009), implemented using the R package space. This is an extension of the neighborhood selection approach of Meinshausen and Bühlmann (2006) that combines p $\ell_1$-penalized regression problems in order to obtain a symmetric estimator. The authors claimed that the proposal performs well in estimating a scale-free network.

We generated data under Set-ups I and III (described in Section 3.5.2) with n = 250 and p = 500,[5] and with $|\mathcal{H}| = 10$ for Set-up I. The results, averaged over 100 data sets, are displayed in Figures 4 and 5.

To obtain Figures 4 and 5, we applied Liu and Ihler (2011) using a fine grid of α values, and using the choices for $\beta_j$ and $\epsilon_j$ specified by the authors: $\beta_j = 2\alpha/\epsilon_j$, where $\epsilon_j$ is a small constant specified in Liu and Ihler (2011). There are two tuning parameters in Hero and Rajaratnam (2012): (1) ρ, the value used to threshold the partial correlation matrix, and (2) d, the number of non-zero elements required for a column of the thresholded matrix to be declared a hub node. We used d = {10, 20} in Figures 4 and 5, and used a fine grid of values for ρ. Note that the value of d has no effect on the results in Figures 4(a)-(b) and 5(a)-(b), and that larger values of d tend to yield worse results in Figures 4(c) and 5(c). For Peng et al. (2009), we used a fine grid of tuning parameter values to obtain the curves shown in Figures 4 and 5. The sum of squared errors was not reported for Peng et al. (2009) and Hero and Rajaratnam (2012), since they do not directly yield an estimate of the precision matrix. As a baseline reference, the graphical lasso is included in the comparison.

We see from Figure 4 that HGL outperforms the competitors when the underlying network contains hub nodes. It is not surprising that Liu and Ihler (2011) yields better results than the graphical lasso, since the former approach is implemented via an iterative procedure: in each iteration, the graphical lasso is performed with an updated tuning parameter based on the estimate obtained in the previous iteration. Hero and Rajaratnam (2012) has the worst results in Figures 4(a)-(b); this is not surprising, since the purpose of Hero and Rajaratnam (2012) is to screen for hub nodes, rather than to estimate the individual edges in the network.

From Figure 5, we see that the performance of HGL is comparable to that of Liu and Ihler (2011) and Peng et al. (2009) under the assumption of a scale-free network; note that this is the precise setting for which Liu and Ihler (2011)'s proposal is intended, and Peng et al. (2009) reported that their proposal performs well in this setting. In contrast, HGL is not intended for the scale-free network setting (as mentioned in the Introduction, it is intended for a setting with hub nodes). Again, Liu and Ihler (2011) and Peng et al. (2009) outperform the graphical lasso, and Hero and Rajaratnam (2012) has the worst results in Figures 5(a)-(b). Finally, we see from Figures 4 and 5 that the BIC-type criterion for HGL proposed in Section 3.4 yields good results.

[5] In this subsection, a small value of p was used due to the computations required to run the R package space, as well as the computational demands of the Liu and Ihler (2011) algorithm.
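For intuition, the iterative reweighting behind the Liu and Ihler (2011) procedure can be sketched as follows. This is a conceptual sketch only: `solve_weighted_glasso` is an assumed user-supplied solver for a graphical lasso with element-wise penalties (not a real library call), and the weight formula shown is one plausible majorize-minimize update for the log penalty; the exact weights used by Liu and Ihler (2011) may differ.

```python
import numpy as np

def scale_free_reweighting(S, alpha, eps, solve_weighted_glasso, n_iter=5):
    # Conceptual sketch: each pass solves a graphical lasso whose element-wise
    # penalties are derived from the previous estimate, so that edges incident
    # to heavy (hub-like) columns are penalized less on the next pass.
    p = S.shape[0]
    Lambda = np.full((p, p), alpha)   # first pass: ordinary graphical lasso
    Theta = None
    for _ in range(n_iter):
        Theta = solve_weighted_glasso(S, Lambda)   # assumed helper
        off = np.abs(Theta) - np.diag(np.diag(np.abs(Theta)))
        col_l1 = off.sum(axis=0)                   # ||theta_{\j}||_1 per node
        w = 1.0 / (col_l1 + eps)
        Lambda = alpha * (w[None, :] + w[:, None]) / 2.0  # weight for edge (i, j)
    return Theta
```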
Figure 4: Simulation for the Gaussian graphical model. Set-up I was applied with n = 250 and p = 500. Details of the axis labels and the solid black circles are as in Figure 3. The colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with $\lambda_3 = 1$, $\lambda_3 = 2$, and $\lambda_3 = 3$; the hub screening procedure (Hero and Rajaratnam, 2012) with d = 10 and d = 20; the scale-free network approach (Liu and Ihler, 2011); and sparse partial correlation estimation (Peng et al., 2009).

Figure 5: Simulation for the Gaussian graphical model. Set-up III was applied with n = 250 and p = 500. Details of the axis labels and the solid black circles are as in Figure 3. The colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with $\lambda_3 = 1$, $\lambda_3 = 2$, and $\lambda_3 = 3$; the hub screening procedure (Hero and Rajaratnam, 2012) with d = 10 and d = 20; the scale-free network approach (Liu and Ihler, 2011); and sparse partial correlation estimation (Peng et al., 2009).

4. The Hub Covariance Graph

In this section, we consider estimation of a covariance matrix under the assumption that $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$; this is of interest because the sparsity pattern of Σ specifies the structure of the marginal independence graph (see, e.g., Drton and Richardson, 2003; Chaudhuri et al., 2007; Drton and Richardson, 2008). We extend the covariance estimator of Xue et al. (2012) to accommodate hub nodes.

4.1 Formulation and Algorithm

Xue et al. (2012) proposed to estimate Σ using

$$\hat{\Sigma} = \arg\min_{\Sigma \in \mathcal{S}} \left\{ \frac{1}{2}\|\Sigma - \mathbf{S}\|_F^2 + \lambda\|\Sigma\|_1 \right\}, \qquad (9)$$

where $\mathbf{S}$ is the empirical covariance matrix, $\mathcal{S} = \{\Sigma : \Sigma \succeq \epsilon\mathbf{I} \text{ and } \Sigma = \Sigma^T\}$, and ϵ is a small positive constant; we take $\epsilon = 10^{-4}$. We extend (9) to accommodate hubs by imposing the hub penalty function (2) on Σ. This results in the hub covariance graph (HCG) optimization problem

$$\underset{\Sigma \in \mathcal{S}}{\text{minimize}} \left\{ \frac{1}{2}\|\Sigma - \mathbf{S}\|_F^2 + \mathcal{P}(\Sigma) \right\}, \qquad (10)$$

which can be solved via Algorithm 1. To update Σ (playing the role of Θ) in Step 2(a)i, we note that

$$\arg\min_{\Sigma \in \mathcal{S}} \left\{ \frac{1}{2}\|\Sigma - \mathbf{S}\|_F^2 + \frac{\rho}{2}\|\Sigma - \tilde{\Sigma} + W_1\|_F^2 \right\} = \frac{1}{1+\rho}\left(\mathbf{S} + \rho\tilde{\Sigma} - \rho W_1\right)_+,$$

where $(\mathbf{A})_+$ is the projection of a matrix $\mathbf{A}$ onto the convex cone $\{\Sigma \succeq \epsilon\mathbf{I}\}$. That is, if $\sum_{j=1}^p d_j u_j u_j^T$ denotes the eigen-decomposition of the matrix $\mathbf{A}$, then $(\mathbf{A})_+$ is defined as $\sum_{j=1}^p \max(d_j, \epsilon) u_j u_j^T$. The complexity of the ADMM algorithm is $O(p^3)$ per iteration, due to the complexity of the eigen-decomposition for updating Σ.
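A sketch of this Σ update (our own function name), with the projection implemented by flooring the eigenvalues at ϵ.

```python
import numpy as np

def update_Sigma_hcg(S, Sigma_tilde, W1, rho, eps=1e-4):
    # Step 2(a)i for HCG: closed-form update followed by the (A)_+ projection
    # onto the cone {Sigma : Sigma >= eps * I} via eigenvalue flooring.
    A = S + rho * Sigma_tilde - rho * W1
    d, U = np.linalg.eigh((A + A.T) / 2.0)
    A_plus = (U * np.maximum(d, eps)) @ U.T   # the (A)_+ projection from the text
    return A_plus / (1.0 + rho)
```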
4.2 Simulation Study

We compare HCG to two competitors for obtaining a sparse estimate of Σ:

1. The non-convex $\ell_1$-penalized log-likelihood approach of Bien and Tibshirani (2011), implemented using the R package spcov. This approach solves
$$\underset{\Sigma \succ 0}{\text{minimize}} \left\{ \log\det\Sigma + \mathrm{trace}(\Sigma^{-1}\mathbf{S}) + \lambda\|\Sigma\|_1 \right\}.$$
2. The convex $\ell_1$-penalized approach of Xue et al. (2012), given in (9).

We first generated an adjacency matrix $\mathbf{A}$ as in Set-up I in Section 3.5.2, modified to have $|\mathcal{H}| = 20$ hub nodes. Then $\mathbf{E}$ was generated as described in Section 3.5.2, and we set Σ equal to $\mathbf{E} + (0.1 - \Lambda_{\min}(\mathbf{E}))\mathbf{I}$. Next, we generated $x_1, \dots, x_n \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$. Finally, we standardized the variables to have standard deviation one. In this simulation study, we set n = 500 and p = 1000.

Figure 6 displays the results, averaged over 100 simulated data sets. We calculated the proportion of correctly estimated hub nodes, as defined in Section 3.5.1, with r = 200. We used a fine grid of tuning parameters for Xue et al. (2012) in order to obtain the curves shown in each panel of Figure 6. HCG involves three tuning parameters, $\lambda_1$, $\lambda_2$, and $\lambda_3$. We fixed $\lambda_1 = 0.2$, considered three values of $\lambda_3$ (each shown in a different color), and varied $\lambda_2$ in order to obtain the curves shown in Figure 6. Figure 6 does not display the results for the proposal of Bien and Tibshirani (2011), due to computational constraints in the spcov R package. Instead, we compared our proposal to that of Bien and Tibshirani (2011) using n = 100 and p = 200; those results are presented in Figure 10 in Appendix D.

Figure 6: Covariance graph simulation with n = 500 and p = 1000. Details of the axis labels are as in Figure 3. The colored lines correspond to the proposal of Xue et al. (2012), and to HCG with $\lambda_3 = 1$, $\lambda_3 = 1.5$, and $\lambda_3 = 2$.

We see that HCG outperforms the proposals of Xue et al. (2012) (Figures 6 and 10) and Bien and Tibshirani (2011) (Figure 10). These results are not surprising, since those other methods do not explicitly model the hub nodes.
5. The Hub Binary Network

In this section, we focus on estimating a binary Ising Markov random field, which we refer to as a binary network. We refer the reader to Ravikumar et al. (2010) for an in-depth discussion of this type of graphical model and its applications. In this set-up, each entry of the $n \times p$ data matrix $\mathbf{X}$ takes on a value of zero or one. We assume that the observations $x_1, \dots, x_n$ are i.i.d. with density

$$p(x, \Theta) = \frac{1}{Z(\Theta)} \exp\left( \sum_{j=1}^p \theta_{jj} x_j + \sum_{1 \le j < j' \le p} \theta_{jj'} x_j x_{j'} \right),$$

where $Z(\Theta)$ is the partition function.

Appendix B. Proofs of Theorems 1 and 2

Proof of Theorem 1 (Sufficient Condition)

Proof Let $T$ denote the block-diagonal support corresponding to $C_1, \dots, C_K$, and suppose that the solution $(\Theta, \mathbf{V}, \mathbf{Z})$ to (16) has non-zero elements outside $T$. Since $-\log\det\Theta \ge -\log\det\Theta_T$ for the block-diagonal restriction $\Theta_T$ (see, e.g., Horn and Johnson, 1985), it suffices to show that

$$\langle \Theta, \mathbf{S} \rangle + \lambda_1 \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \lambda_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \lambda_3 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_{1,q} > \langle \Theta_T, \mathbf{S} \rangle + \lambda_1 \|\mathbf{Z}_T - \mathrm{diag}(\mathbf{Z}_T)\|_1 + \lambda_2 \|\mathbf{V}_T - \mathrm{diag}(\mathbf{V}_T)\|_1 + \lambda_3 \|\mathbf{V}_T - \mathrm{diag}(\mathbf{V}_T)\|_{1,q},$$

or equivalently, that

$$\langle \Theta_{T^c}, \mathbf{S} \rangle + \lambda_1 \|\mathbf{Z}_{T^c}\|_1 + \lambda_2 \|\mathbf{V}_{T^c}\|_1 + \lambda_3 \left( \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_{1,q} - \|\mathbf{V}_T - \mathrm{diag}(\mathbf{V}_T)\|_{1,q} \right) > 0.$$

Since $\|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_{1,q} \ge \|\mathbf{V}_T - \mathrm{diag}(\mathbf{V}_T)\|_{1,q}$, it suffices to show that

$$\langle \Theta_{T^c}, \mathbf{S} \rangle + \lambda_1 \|\mathbf{Z}_{T^c}\|_1 + \lambda_2 \|\mathbf{V}_{T^c}\|_1 > 0. \qquad (17)$$

Note that $\langle \Theta_{T^c}, \mathbf{S} \rangle = \langle \Theta_{T^c}, \mathbf{S}_{T^c} \rangle$. By the sufficient condition, $S_{\max} < \lambda_1$ and $2 S_{\max} < \lambda_2$. In addition, we have that

$$|\langle \Theta_{T^c}, \mathbf{S} \rangle| = |\langle \Theta_{T^c}, \mathbf{S}_{T^c} \rangle| = |\langle \mathbf{V}_{T^c} + \mathbf{V}^T_{T^c} + \mathbf{Z}_{T^c}, \mathbf{S}_{T^c} \rangle| = |\langle 2\mathbf{V}_{T^c} + \mathbf{Z}_{T^c}, \mathbf{S}_{T^c} \rangle| \le \left(2\|\mathbf{V}_{T^c}\|_1 + \|\mathbf{Z}_{T^c}\|_1\right) S_{\max} < \lambda_2 \|\mathbf{V}_{T^c}\|_1 + \lambda_1 \|\mathbf{Z}_{T^c}\|_1,$$

where the last inequality follows from the sufficient condition. We have shown (17), as desired. ∎

Proof of Theorem 2 (Necessary Condition)

We first present a simple lemma for proving Theorem 2. Throughout the proof of Theorem 2, $\|\cdot\|_\infty$ indicates the maximal absolute element of a matrix and $\|\cdot\|_{\infty,s}$ indicates the dual norm of $\|\cdot\|_{1,q}$.

Lemma 7 The dual representation of $\mathcal{P}(\Theta)$ in (15) is

$$\mathcal{P}(\Theta) = \max_{\mathbf{X}, \mathbf{Y}, \Lambda} \; \langle \Lambda, \Theta \rangle \;\text{ subject to }\; \Lambda + \Lambda^T = \hat{\lambda}_2 \mathbf{X} + \hat{\lambda}_3 \mathbf{Y}, \; \|\mathbf{X}\|_\infty \le 1, \; \|\Lambda\|_\infty \le 1, \; \|\mathbf{Y}\|_{\infty,s} \le 1, \; X_{ii} = Y_{ii} = \Lambda_{ii} = 0 \text{ for } i = 1, \dots, p. \qquad (18)$$

Proof We first state the dual representations for the norms in (15):

$$\|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 = \max_{\Lambda} \langle \Lambda, \mathbf{Z} \rangle \text{ subject to } \|\Lambda\|_\infty \le 1, \; \Lambda_{ii} = 0 \text{ for } i = 1, \dots, p,$$
$$\|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 = \max_{\mathbf{X}} \langle \mathbf{X}, \mathbf{V} \rangle \text{ subject to } \|\mathbf{X}\|_\infty \le 1, \; X_{ii} = 0 \text{ for } i = 1, \dots, p,$$
$$\|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_{1,q} = \max_{\mathbf{Y}} \langle \mathbf{Y}, \mathbf{V} \rangle \text{ subject to } \|\mathbf{Y}\|_{\infty,s} \le 1, \; Y_{ii} = 0 \text{ for } i = 1, \dots, p.$$

Then

$$\mathcal{P}(\Theta) = \min_{\mathbf{V},\mathbf{Z}} \left\{ \|\mathbf{Z} - \mathrm{diag}(\mathbf{Z})\|_1 + \hat{\lambda}_2 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_1 + \hat{\lambda}_3 \|\mathbf{V} - \mathrm{diag}(\mathbf{V})\|_{1,q} \right\} \text{ subject to } \Theta = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T$$
$$= \min_{\mathbf{V},\mathbf{Z}} \max_{\Lambda,\mathbf{X},\mathbf{Y}} \left\{ \langle \Lambda, \mathbf{Z} \rangle + \hat{\lambda}_2 \langle \mathbf{X}, \mathbf{V} \rangle + \hat{\lambda}_3 \langle \mathbf{Y}, \mathbf{V} \rangle \right\} \text{ subject to } \|\Lambda\|_\infty \le 1, \|\mathbf{X}\|_\infty \le 1, \|\mathbf{Y}\|_{\infty,s} \le 1, \; \Lambda_{ii} = X_{ii} = Y_{ii} = 0, \; \Theta = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T$$
$$= \max_{\Lambda,\mathbf{X},\mathbf{Y}} \min_{\mathbf{V},\mathbf{Z}} \left\{ \langle \Lambda, \mathbf{Z} \rangle + \hat{\lambda}_2 \langle \mathbf{X}, \mathbf{V} \rangle + \hat{\lambda}_3 \langle \mathbf{Y}, \mathbf{V} \rangle \right\} \text{ subject to the same constraints}$$
$$= \max_{\Lambda,\mathbf{X},\mathbf{Y}} \langle \Lambda, \Theta \rangle \text{ subject to } \Lambda + \Lambda^T = \hat{\lambda}_2\mathbf{X} + \hat{\lambda}_3\mathbf{Y}, \; \|\mathbf{X}\|_\infty \le 1, \|\Lambda\|_\infty \le 1, \|\mathbf{Y}\|_{\infty,s} \le 1, \; X_{ii} = Y_{ii} = \Lambda_{ii} = 0 \text{ for } i = 1, \dots, p.$$

The third equality holds because the constraint set for $(\Lambda, \mathbf{X}, \mathbf{Y})$ is a compact convex set, so by the minimax theorem we can swap the max and min. The last equality follows from the fact that

$$\min_{\mathbf{V},\mathbf{Z}:\, \Theta = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T} \left\{ \langle \Lambda, \mathbf{Z} \rangle + \hat{\lambda}_2 \langle \mathbf{X}, \mathbf{V} \rangle + \hat{\lambda}_3 \langle \mathbf{Y}, \mathbf{V} \rangle \right\} = \begin{cases} \langle \Lambda, \Theta \rangle & \text{if } \Lambda + \Lambda^T = \hat{\lambda}_2\mathbf{X} + \hat{\lambda}_3\mathbf{Y} \\ -\infty & \text{otherwise.} \end{cases}$$

We now present the proof of Theorem 2.

Proof The optimality condition for (16) is

$$0 = -\Theta^{-1} + \mathbf{S} + \lambda_1 \Lambda, \qquad (19)$$

where Λ is a subgradient of $\mathcal{P}(\Theta)$ in (15), and the left-hand side of the above equation is a $p \times p$ zero matrix. Now suppose that the $\Theta^*$ that solves (19) is supported on T, that is, $\Theta^*_{T^c} = 0$. Then for any $(i,j) \in T^c$, we have that

$$0 = S_{ij} + \lambda_1 \Lambda^*_{ij}, \qquad (20)$$

where $\Lambda^*$ is a subgradient of $\mathcal{P}(\Theta^*)$. Note that $\Lambda^*$ must be an optimal solution to the optimization problem (18). Therefore, it is also a feasible solution to (18), implying that $|\Lambda^*_{ij}| \le 1$ and $|\Lambda^*_{ij} + \Lambda^*_{ji}| \le \hat{\lambda}_2 + \hat{\lambda}_3$.

From (20), we have that $\Lambda^*_{ij} = -S_{ij}/\lambda_1$, and thus

$$\lambda_1 \ge \lambda_1 \max_{(i,j) \in T^c} |\Lambda^*_{ij}| = \lambda_1 \max_{(i,j) \in T^c} \frac{|S_{ij}|}{\lambda_1} = S_{\max}.$$

Also, recall that $\hat{\lambda}_2 = \lambda_2/\lambda_1$ and $\hat{\lambda}_3 = \lambda_3/\lambda_1$. We have that

$$\lambda_2 + \lambda_3 \ge \lambda_1 \max_{(i,j) \in T^c} |\Lambda^*_{ij} + \Lambda^*_{ji}| = \lambda_1 \max_{(i,j) \in T^c} \frac{2|S_{ij}|}{\lambda_1} = 2 S_{\max}.$$

Hence, we obtain the desired result. ∎

Appendix C. Some Properties of HGL

Proof of Lemma 3

Proof Let $(\Theta^*, \mathbf{Z}^*, \mathbf{V}^*)$ be the solution to (6), and suppose that $\mathbf{Z}^*$ is not a diagonal matrix. Note that $\mathbf{Z}^*$ is symmetric, since $\Theta^* \in \mathcal{S} = \{\Theta : \Theta \succ 0 \text{ and } \Theta = \Theta^T\}$.
Let $\hat{\mathbf{Z}} = \mathrm{diag}(\mathbf{Z}^*)$, a matrix that contains the diagonal elements of $\mathbf{Z}^*$. Also, construct $\hat{\mathbf{V}}$ as follows:

$$\hat{V}_{ij} = \begin{cases} V^*_{ij} + \frac{Z^*_{ij}}{2} & \text{if } i \neq j \\ V^*_{jj} & \text{otherwise.} \end{cases}$$

Then we have that $\Theta^* = \hat{\mathbf{Z}} + \hat{\mathbf{V}} + \hat{\mathbf{V}}^T$. Thus, $(\Theta^*, \hat{\mathbf{Z}}, \hat{\mathbf{V}})$ is a feasible solution to (6). We now show that $(\Theta^*, \hat{\mathbf{Z}}, \hat{\mathbf{V}})$ has a smaller objective than $(\Theta^*, \mathbf{Z}^*, \mathbf{V}^*)$ in (6), giving us a contradiction. Note that

$$\lambda_1 \|\hat{\mathbf{Z}} - \mathrm{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1 = \lambda_2 \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1 = \lambda_2 \sum_{i \neq j} \left|V^*_{ij} + \frac{Z^*_{ij}}{2}\right| \le \lambda_2 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 + \frac{\lambda_2}{2} \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1,$$

and

$$\lambda_3 \sum_{j=1}^p \|(\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}}))_j\|_q \le \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q + \frac{\lambda_3}{2} \sum_{j=1}^p \|(\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*))_j\|_q \le \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q + \frac{\lambda_3}{2} \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1,$$

where the last inequality follows from the fact that for any vector $x \in \mathbb{R}^p$ and $q \ge 1$, $\|x\|_q$ is a nonincreasing function of q (Gentle, 2007). Summing up the above inequalities, we get

$$\lambda_1 \|\hat{\mathbf{Z}} - \mathrm{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1 + \lambda_3 \sum_{j=1}^p \|(\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}}))_j\|_q \le \frac{\lambda_2 + \lambda_3}{2} \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q < \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q,$$

where the last inequality uses the assumption that $\lambda_1 > \frac{\lambda_2 + \lambda_3}{2}$. We arrive at a contradiction, and therefore the result holds. ∎

Proof of Lemma 4

Proof Let $(\Theta^*, \mathbf{Z}^*, \mathbf{V}^*)$ be the solution to (6), and suppose $\mathbf{V}^*$ is not a diagonal matrix. Let $\hat{\mathbf{V}} = \mathrm{diag}(\mathbf{V}^*)$, a diagonal matrix that contains the diagonal elements of $\mathbf{V}^*$. Also construct $\hat{\mathbf{Z}}$ as follows:

$$\hat{Z}_{ij} = \begin{cases} Z^*_{ij} + V^*_{ij} + V^*_{ji} & \text{if } i \neq j \\ Z^*_{ij} & \text{otherwise.} \end{cases}$$

Then we have that $\Theta^* = \hat{\mathbf{V}} + \hat{\mathbf{V}}^T + \hat{\mathbf{Z}}$. We now show that $(\Theta^*, \hat{\mathbf{Z}}, \hat{\mathbf{V}})$ has a smaller objective value than $(\Theta^*, \mathbf{Z}^*, \mathbf{V}^*)$ in (6), giving us a contradiction. We start by noting that

$$\lambda_1 \|\hat{\mathbf{Z}} - \mathrm{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1 = \lambda_1 \|\hat{\mathbf{Z}} - \mathrm{diag}(\hat{\mathbf{Z}})\|_1 \le \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + 2\lambda_1 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1.$$

By Hölder's inequality, we know that $x^T y \le \|x\|_q \|y\|_s$, where $\frac{1}{s} + \frac{1}{q} = 1$ and $x, y \in \mathbb{R}^{p-1}$. Setting $y = \mathrm{sign}(x)$, we have that $\|x\|_1 \le (p-1)^{1/s} \|x\|_q$. Consequently,

$$\frac{\lambda_3}{(p-1)^{1/s}} \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 \le \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q.$$

Combining these results, we have that

$$\lambda_1 \|\hat{\mathbf{Z}} - \mathrm{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1 + \lambda_3 \sum_{j=1}^p \|(\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}}))_j\|_q \le \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + 2\lambda_1 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 < \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + \left(\lambda_2 + \frac{\lambda_3}{(p-1)^{1/s}}\right) \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 \le \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*))_j\|_q,$$

where we use the assumption that $\lambda_1 < \frac{\lambda_2}{2} + \frac{\lambda_3}{2(p-1)^{1/s}}$. This leads to a contradiction. ∎

Proof of Lemma 6

In this proof, we consider the case when $\lambda_1 > \frac{\lambda_2 + \lambda_3}{2}$; a similar proof technique can be used for the case when $\lambda_1 < \frac{\lambda_2 + \lambda_3}{2}$.

Proof Let $f(\Theta, \mathbf{V}, \mathbf{Z})$ denote the objective of (6) with q = 1, and let $(\Theta^*, \mathbf{V}^*, \mathbf{Z}^*)$ be the optimal solution. By Lemma 3, the assumption that $\lambda_1 > \frac{\lambda_2 + \lambda_3}{2}$ implies that $\mathbf{Z}^*$ is a diagonal matrix. Now let $\hat{\mathbf{V}} = \frac{1}{2}\left(\mathbf{V}^* + (\mathbf{V}^*)^T\right)$. Then

$$f(\Theta^*, \hat{\mathbf{V}}, \mathbf{Z}^*) = -\log\det\Theta^* + \langle \Theta^*, \mathbf{S} \rangle + \lambda_1 \|\mathbf{Z}^* - \mathrm{diag}(\mathbf{Z}^*)\|_1 + (\lambda_2 + \lambda_3) \|\hat{\mathbf{V}} - \mathrm{diag}(\hat{\mathbf{V}})\|_1$$
$$= -\log\det\Theta^* + \langle \Theta^*, \mathbf{S} \rangle + \frac{\lambda_2 + \lambda_3}{2} \left\|\mathbf{V}^* + (\mathbf{V}^*)^T - \mathrm{diag}\left(\mathbf{V}^* + (\mathbf{V}^*)^T\right)\right\|_1$$
$$\le -\log\det\Theta^* + \langle \Theta^*, \mathbf{S} \rangle + (\lambda_2 + \lambda_3) \|\mathbf{V}^* - \mathrm{diag}(\mathbf{V}^*)\|_1 = f(\Theta^*, \mathbf{V}^*, \mathbf{Z}^*) \le f(\Theta^*, \hat{\mathbf{V}}, \mathbf{Z}^*),$$

where the last inequality follows from the assumption that $(\Theta^*, \mathbf{V}^*, \mathbf{Z}^*)$ solves (6). By strict convexity of f, this means that $\mathbf{V}^* = \hat{\mathbf{V}}$, i.e., $\mathbf{V}^*$ is symmetric. This implies that

$$f(\Theta^*, \mathbf{V}^*, \mathbf{Z}^*) = -\log\det\Theta^* + \langle \Theta^*, \mathbf{S} \rangle + \frac{\lambda_2 + \lambda_3}{2} \left\|\mathbf{V}^* + (\mathbf{V}^*)^T - \mathrm{diag}\left(\mathbf{V}^* + (\mathbf{V}^*)^T\right)\right\|_1 = -\log\det\Theta^* + \langle \Theta^*, \mathbf{S} \rangle + \frac{\lambda_2 + \lambda_3}{2} \|\Theta^* - \mathrm{diag}(\Theta^*)\|_1 = g(\Theta^*), \qquad (21)$$

where $g(\Theta)$ is the objective of the graphical lasso optimization problem (5), evaluated at Θ, with tuning parameter $\frac{\lambda_2 + \lambda_3}{2}$. Suppose that $\tilde{\Theta}$ minimizes $g(\Theta)$ and that $\tilde{\Theta} \neq \Theta^*$. Then, by (21) and strict convexity of g,

$$g(\Theta^*) = f(\Theta^*, \mathbf{V}^*, \mathbf{Z}^*) \le f(\tilde{\Theta}, \tilde{\Theta}/2, 0) = g(\tilde{\Theta}) < g(\Theta^*),$$

giving us a contradiction. Thus it must be that $\tilde{\Theta} = \Theta^*$. ∎
Appendix D. Simulation Study for Hub Covariance Graph

In this section, we present the results for the simulation study described in Section 4.2 with n = 100, p = 200, and $|\mathcal{H}| = 4$. We calculate the proportion of correctly estimated hub nodes with r = 40. The results are shown in Figure 10. As we can see from Figure 10, our proposal outperforms Bien and Tibshirani (2011). In particular, we can see from Figure 10(c) that Bien and Tibshirani (2011) fails to identify hub nodes.

Figure 10: Covariance graph simulation with n = 100 and p = 200. Details of the axis labels are as in Figure 3. The colored lines correspond to the proposal of Xue et al. (2012); HCG with $\lambda_3 = 1$, $\lambda_3 = 1.5$, and $\lambda_3 = 2$; and the proposal of Bien and Tibshirani (2011).

Appendix E. Run Time Study for the ADMM Algorithm for HGL

In this section, we present a more extensive run time study for the ADMM algorithm for HGL. We ran experiments with p = 100, 200, 300 and with n = p/2 on a 2.26 GHz Intel Core 2 Duo machine. Results averaged over 10 replications are displayed in Figures 11(a)-(b), where the panels depict the run time and the number of iterations required for the algorithm to converge, as a function of $\lambda_1$, with $\lambda_2 = 0.5$ and $\lambda_3 = 2$ fixed. The number of iterations required for the algorithm to converge is computed as the total number of iterations in Step 2 of Algorithm 1. We see from Figure 11(a) that as p increases from 100 to 300, the run times increase substantially, but never exceed several minutes. Note that these results are without using the block diagonal condition in Theorem 1.

Figure 11: (a): Run time (in seconds) of the ADMM algorithm for HGL, as a function of $\lambda_1$, for fixed values of $\lambda_2$ and $\lambda_3$, for p = 100, 200, 300. (b): The total number of iterations required for the ADMM algorithm for HGL to converge, as a function of $\lambda_1$. All results are averaged over 10 simulated data sets. These results are without using the block diagonal condition in Theorem 1.

Appendix F. Update for Θ in Step 2(a)i for the Binary Ising Model using the Barzilai-Borwein Method

We consider updating Θ in Step 2(a)i of Algorithm 1 for the binary Ising model. Let

$$h(\Theta) = -\sum_{j=1}^p \theta_{jj} (\mathbf{X}^T\mathbf{X})_{jj} - \sum_{j \neq j'} \theta_{jj'} (\mathbf{X}^T\mathbf{X})_{jj'} + \sum_{i=1}^n \sum_{j=1}^p \log\left(1 + \exp\left(\theta_{jj} + \sum_{j' \neq j} \theta_{jj'} x_{ij'}\right)\right) + \frac{\rho}{2}\left\|\Theta - \tilde{\Theta} + W_1\right\|_F^2.$$

Then, the optimization problem for Step 2(a)i of Algorithm 1 is

$$\underset{\Theta \in \mathcal{S}}{\text{minimize}} \; h(\Theta), \qquad (22)$$

where $\mathcal{S} = \{\Theta : \Theta = \Theta^T\}$. In solving (22), we will treat $\Theta \in \mathcal{S}$ as an implicit constraint.
The Barzilai-Borwein method is a gradient descent method with the step size chosen to mimic the secant condition of the BFGS method (see, e.g., Barzilai and Borwein, 1988; Nocedal and Wright, 2006). The convergence of the Barzilai-Borwein method for unconstrained minimization using a non-monotone line search was shown in Raydan (1997). Recent convergence results for a quadratic cost function can be found in Dai (2013).

To implement the Barzilai-Borwein method, we need to evaluate the gradient of h(Θ). Let $\nabla h(\Theta)$ be a $p \times p$ matrix whose $(j, j')$ entry is the gradient of h(Θ) with respect to $\theta_{jj'}$, computed under the constraint $\Theta \in \mathcal{S}$, that is, $\theta_{jj'} = \theta_{j'j}$. Then

$$(\nabla h(\Theta))_{jj} = -(\mathbf{X}^T\mathbf{X})_{jj} + \sum_{i=1}^n \left[ \frac{\exp\left(\theta_{jj} + \sum_{j' \neq j} \theta_{jj'} x_{ij'}\right)}{1 + \exp\left(\theta_{jj} + \sum_{j' \neq j} \theta_{jj'} x_{ij'}\right)} \right] + \rho\left(\theta_{jj} - \tilde{\theta}_{jj} + (W_1)_{jj}\right),$$

and, for $j \neq j'$,

$$(\nabla h(\Theta))_{jj'} = -2(\mathbf{X}^T\mathbf{X})_{jj'} + 2\rho\left(\theta_{jj'} - \tilde{\theta}_{jj'} + (W_1)_{jj'}\right) + \sum_{i=1}^n \left[ \frac{x_{ij'} \exp\left(\theta_{jj} + \sum_{j'' \neq j} \theta_{jj''} x_{ij''}\right)}{1 + \exp\left(\theta_{jj} + \sum_{j'' \neq j} \theta_{jj''} x_{ij''}\right)} + \frac{x_{ij} \exp\left(\theta_{j'j'} + \sum_{j'' \neq j'} \theta_{j'j''} x_{ij''}\right)}{1 + \exp\left(\theta_{j'j'} + \sum_{j'' \neq j'} \theta_{j'j''} x_{ij''}\right)} \right].$$

A simple implementation of the Barzilai-Borwein algorithm for solving (22) is detailed in Algorithm 2. We note that the Barzilai-Borwein algorithm can be improved (see, e.g., Barzilai and Borwein, 1988; Wright et al., 2009); we leave such improvements for future work.

Algorithm 2: Barzilai-Borwein Algorithm for Solving (22).

1. Initialize the parameters:
   (a) $\Theta^1 = \mathbf{I}$ and $\Theta^0 = 2\mathbf{I}$;
   (b) constant $\tau > 0$.
2. Iterate until the stopping criterion $\|\Theta^t - \Theta^{t-1}\|_F^2 / \|\Theta^{t-1}\|_F^2 \le \tau$ is met, where $\Theta^t$ is the value of Θ obtained at the tth iteration:
   (a) $\alpha_t = \mathrm{trace}\left[(\Theta^t - \Theta^{t-1})^T (\Theta^t - \Theta^{t-1})\right] \big/ \mathrm{trace}\left[(\Theta^t - \Theta^{t-1})^T \left(\nabla h(\Theta^t) - \nabla h(\Theta^{t-1})\right)\right]$.
   (b) $\Theta^{t+1} = \Theta^t - \alpha_t \nabla h(\Theta^t)$.
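A compact numpy rendering of Algorithm 2, with our own naming; `grad_h` is assumed to return the symmetric gradient $\nabla h(\Theta)$ defined above.

```python
import numpy as np

def barzilai_borwein(grad_h, p, tau=1e-6, max_iter=1000):
    # Sketch of Algorithm 2: Barzilai-Borwein gradient steps for problem (22).
    Theta_prev, Theta = 2.0 * np.eye(p), np.eye(p)   # Step 1: Theta^0 = 2I, Theta^1 = I
    g_prev = grad_h(Theta_prev)
    for _ in range(max_iter):
        g = grad_h(Theta)
        dT, dg = Theta - Theta_prev, g - g_prev
        alpha = np.sum(dT * dT) / np.sum(dT * dg)    # Step 2(a): BB step size
        Theta_prev, g_prev = Theta, g
        Theta = Theta - alpha * g                    # Step 2(b): gradient step
        if (np.linalg.norm(Theta - Theta_prev, 'fro') ** 2
                <= tau * np.linalg.norm(Theta_prev, 'fro') ** 2):
            break
    return Theta
```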
References

G.I. Allen and Z. Liu. A log-linear graphical model for inferring genetic networks from high-throughput sequencing data. IEEE International Conference on Bioinformatics and Biomedicine, 2012.

A.L. Barabási. Scale-free networks: a decade and beyond. Science, 325:412–413, 2009.

A.L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286:509–512, 1999.

J. Barzilai and J.M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8:141–148, 1988.

P.J. Bickel and E. Levina. Regularized estimation of large covariance matrices. Annals of Statistics, 36(1):199–227, 2008.

J. Bien and R. Tibshirani. Sparse estimation of a covariance matrix. Biometrika, 98(4):807–820, 2011.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2010.

T. Cai and W. Liu. Adaptive thresholding for sparse covariance matrix estimation. Journal of the American Statistical Association, 106(494):672–684, 2011.

A. Cardoso-Cachopo, 2009. http://web.ist.utl.pt/acardoso/datasets/.

S. Chaudhuri, M. Drton, and T. Richardson. Estimation of a covariance matrix with zeros. Biometrika, 94:199–216, 2007.

Y. Dai. A new analysis on the Barzilai-Borwein gradient method. Journal of the Operations Research Society of China, 1(2):187–198, 2013.

P. Danaher, P. Wang, and D.M. Witten. The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B, 76(2):373–397, 2014.

A. Defazio and T.S. Caetano. A convex formulation for learning scale-free networks via submodular relaxation. Advances in Neural Information Processing Systems, 2012.

M. Drton and T.S. Richardson. A new algorithm for maximum likelihood estimation in Gaussian graphical models for marginal independence. Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, pages 184–191, 2003.

M. Drton and T.S. Richardson. Graphical methods for efficient likelihood inference in Gaussian covariance models. Journal of Machine Learning Research, 9:893–914, 2008.

J. Eckstein. Augmented Lagrangian and alternating direction methods for convex optimization: a tutorial and some illustrative computational results. RUTCOR Research Reports, 32, 2012.

J. Eckstein and D.P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(3, Ser. A):293–318, 1992.

N. El Karoui. Operator norm consistent estimation of large-dimensional sparse covariance matrices. The Annals of Statistics, 36(6):2717–2756, 2008.

P. Erdős and A. Rényi. On random graphs I. Publ. Math. Debrecen, 6:290–297, 1959.

H. Firouzi and A.O. Hero. Local hub screening in sparse correlation graphs. Proceedings of SPIE, volume 8858, Wavelets and Sparsity XV, 88581H, 2013.

R. Foygel and M. Drton. Extended Bayesian information criteria for Gaussian graphical models. Advances in Neural Information Processing Systems, 2010.

J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9:432–441, 2007.

J.E. Gentle. Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer, New York, 2007.

J. Guo, E. Levina, G. Michailidis, and J. Zhu. Joint structure estimation for categorical Markov networks. Submitted, available at http://www.stat.lsa.umich.edu/~elevina, 2010.

J. Guo, E. Levina, G. Michailidis, and J. Zhu. Asymptotic properties of the joint neighborhood selection method for estimating categorical Markov networks. arXiv: math.PR/0000000, 2011.

D. Hao, C. Ren, and C. Li. Revisiting the variation of clustering coefficient of biological networks suggests new modular structure. BMC Systems Biology, 6(34):1–10, 2012.

A. Hero and B. Rajaratnam. Hub discovery in partial correlation graphs. IEEE Transactions on Information Theory, 58:6064–6078, 2012.

H. Höfling and R. Tibshirani. Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. Journal of Machine Learning Research, 10:883–906, 2009.

R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, New York, NY, 1985.

H. Jeong, S.P. Mason, A.L. Barabási, and Z.N. Oltvai. Lethality and centrality in protein networks. Nature, 411:41–42, 2001.

S.-I. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using ℓ1-regularization. Advances in Neural Information Processing Systems, 2007.

L. Li, D. Alderson, J.C. Doyle, and W. Willinger. Towards a theory of scale-free graphs: definition, properties, and implications. Internet Mathematics, 2(4):431–523, 2005.

F. Liljeros, C.R. Edling, L.A.N. Amaral, H.E. Stanley, and Y. Åberg. The web of human sexual contacts. Nature, 411:907–908, 2001.

Q. Liu and A.T. Ihler. Learning scale free networks by reweighted ℓ1 regularization. Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 15:40–48, 2011.

S. Ma, L. Xue, and H. Zou. Alternating direction methods for latent variable Gaussian graphical model selection. Neural Computation, 2013.

Maglott et al. Entrez Gene: gene-centered information at NCBI. Nucleic Acids Research, 33(D):54–58, 2004.

K.V. Mardia, J. Kent, and J.M. Bibby. Multivariate Analysis. Academic Press, 1979.
R. Mazumder and T. Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. Journal of Machine Learning Research, 13:781–794, 2012.

N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436–1462, 2006.

N. Meinshausen and P. Bühlmann. Stability selection (with discussion). Journal of the Royal Statistical Society, Series B, 72:417–473, 2010.

K. Mohan, P. London, M. Fazel, D.M. Witten, and S.-I. Lee. Node-based learning of multiple Gaussian graphical models. Journal of Machine Learning Research, 15:445–488, 2014.

M.E.J. Newman. The structure of scientific collaboration networks. Proceedings of the National Academy of Sciences of the United States of America, 98:404–409, 2000.

J. Nocedal and S.J. Wright. Numerical Optimization. Springer, 2006.

J. Peng, P. Wang, N. Zhou, and J. Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009.

Rappaport et al. MalaCards: an integrated compendium for diseases and their annotation. Database (Oxford), 2013.

P. Ravikumar, M.J. Wainwright, and J.D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.

P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.

M. Raydan. The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM Journal on Optimization, 7:26–33, 1997.

A. Rothman, P.J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.

A. Rothman, E. Levina, and J. Zhu. Generalized thresholding of large covariance matrices. Journal of the American Statistical Association, 104:177–186, 2009.

N. Simon, J.H. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2):231–245, 2013.

R. Tarjan. Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2):146–160, 1972.

Verhaak et al. Integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR, and NF1. Cancer Cell, 17(1):98–110, 2010.

D.M. Witten, J.H. Friedman, and N. Simon. New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4):892–900, 2011.

S.J. Wright, R.D. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.

L. Xue, S. Ma, and H. Zou. Positive definite ℓ1 penalized estimation of large covariance matrices. Journal of the American Statistical Association, 107(500):1480–1491, 2012.

E. Yang, G.I. Allen, Z. Liu, and P.K. Ravikumar. Graphical models via generalized linear models. Advances in Neural Information Processing Systems, 2012a.

S. Yang, Z. Pan, X. Shen, P. Wonka, and J. Ye. Fused multiple graphical lasso. arXiv:1209.2139 [cs.LG], 2012b.

M. Yuan. Efficient computation of ℓ1 regularized estimates in Gaussian graphical models. Journal of Computational and Graphical Statistics, 17(4):809–826, 2008.

M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007a.
M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2007b.

H. Zou, T. Hastie, and R. Tibshirani. On the degrees of freedom of the lasso. The Annals of Statistics, 35(5):2173–2192, 2007.