Journal of Machine Learning Research 18 (2018) 1-42  Submitted 6/16; Revised 5/17; Published 4/18

A Tight Bound of Hard Thresholding

Jie Shen  JS2007@RUTGERS.EDU
Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA

Ping Li  PINGLI98@GMAIL.COM
Baidu Research, Bellevue, WA 98004, USA

Editor: Sujay Sanghavi

Abstract

This paper is concerned with the hard thresholding operator, which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound that quantitatively characterizes the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains:

Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition, especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure.

Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the ℓ1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.

Keywords: sparsity, hard thresholding, compressed sensing, stochastic optimization

1. Introduction

Over the last two decades, pursuing sparse representations has emerged as a fundamental technique throughout bioinformatics (Olshausen and Field, 1997), statistics (Tibshirani, 1996; Efron et al., 2004), signal processing (Chen et al., 1998; Donoho et al., 2006; Donoho, 2006; Candès and Wakin, 2008) and mathematical science (Chandrasekaran et al., 2012), to name just a few. In order to obtain a sparse solution, a plethora of practical algorithms have been presented, among which two prominent examples are greedy pursuit and convex relaxation (Tropp and Wright, 2010). For instance, as one of the earliest greedy algorithms, orthogonal matching pursuit (OMP) (Pati et al., 1993) repeatedly picks a coordinate as the potential support of a solution. While OMP may fail for some deterministic sensing matrices, Tropp (2004) and Tropp and Gilbert (2007) showed that it recovers the true signal with high probability when using random matrices such as Gaussian ones. Inspired by the success of OMP, the two concurrent works of compressive sampling matching pursuit (CoSaMP) (Needell and Tropp, 2009) and subspace pursuit (SP) (Dai and Milenkovic, 2009) made improvements by selecting multiple coordinates followed by a pruning step in each iteration, and the recovery condition was framed under the restricted isometry property (RIP) (Candès and Tao, 2005).
Interestingly, the more careful selection strategy of Co Sa MP and SP leads to an optimal sample complexity. The iterative hard thresholding (IHT) algorithm (Daubechies et al., 2004; Blumensath and Davies, 2008, 2009) gradually refines the iterates by gradient descent along with truncation. Foucart (2011) then developed a concise algorithm termed hard thresholding pursuit (HTP), which combined the idea of Co Sa MP and IHT, and showed that HTP is superior to both in terms of the RIP condition. Jain et al. (2011) proposed an interesting variant of the HTP algorithm and obtained a sharper RIP result. Recently, Bahmani et al. (2013) and Yuan et al. (2018) respectively extended Co Sa MP and HTP to general objective functions, for which a global convergence was established. Since the sparsity constraint counts the number of non-zero components which renders the problem non-convex, the ℓ1-norm was suggested as a convex relaxation dating back to basis pursuit (Chen et al., 1998; Donoho and Tsaig, 2008) and Lasso (Tibshirani, 1996). The difference is that Lasso looks for an ℓ1-norm constrained solution that minimizes the residual while the principle of basis pursuit is to find a signal with minimal ℓ1-norm that fits the observation data. Cand es and Tao (2005) carried out a detailed analysis on the recovery performance of basis pursuit. Another popular estimator in the high-dimensional statistics is the Dantzig selector (Cand es and Tao, 2007) which, instead of constraining the residual of the linear model, penalizes the maximum magnitude of the gradient. From a computational perspective, both basis pursuit and Dantzig selector can be solved by linear programming, while Lasso is formulated as a quadratic problem. Interestingly, under the RIP condition or the uniform uncertainty assumption (Cand es et al., 2006), a series of work showed that exact recovery by convex programs is possible as soon as the observation noise vanishes (Cand es and Tao, 2005; Cand es, 2008; Wainwright, 2009; Cai et al., 2010; Foucart, 2012). In this paper, we are interested in the hard thresholding (HT) operator underlying a large body of the developed algorithms in compressed sensing (e.g., IHT, Co Sa MP, SP), machine learning (Yuan and Zhang, 2013), and statistics (Ma, 2013). Our motivation is two-fold. From a high level, compared to the convex programs, these HT-based algorithms are always orders of magnitude computationally more efficient, hence more practical for large-scale problems (Tropp and Wright, 2010). Nevertheless, they usually require a more stringent condition to guarantee the success. This naturally raises an interesting question of whether we can derive milder conditions for HT-based algorithms to achieve the best of the two worlds. For practitioners, to address the huge volume of data, a popular strategy in machine learning is to appeal to stochastic algorithms that sequentially update the solution. However, as many researchers observed (Langford et al., 2009; Duchi and Singer, 2009; Xiao, 2010), it is hard for the ℓ1-based stochastic algorithms to preserve the sparse structure of the solution as the batch solvers do. This immediately poses the question of whether we are able to apply the principal idea of hard thresholding to stochastic algorithms while still ensuring a fast convergence. To elaborate the problem more precisely, let us first turn to some basic properties of hard thresholding along with simple yet illustrative cases. 
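As a concrete reference for the discussion that follows, the snippet below gives a minimal NumPy sketch of the hard thresholding operator; the function name and the tie-breaking via a stable sort are our own illustrative choices, not code from the paper.

```python
import numpy as np

def hard_threshold(b, k):
    """Return H_k(b): keep the k largest-magnitude entries of b, zero the rest.

    Ties are broken lexicographically (the smaller index wins), which a stable
    sort on -|b| provides.  Illustrative helper only.
    """
    b = np.asarray(b, dtype=float)
    if k >= b.size:
        return b.copy()
    keep = np.argsort(-np.abs(b), kind="stable")[:k]
    out = np.zeros_like(b)
    out[keep] = b[keep]
    return out

b = np.array([0.5, -2.0, 0.1, 1.5])
print(hard_threshold(b, 2))   # [ 0.  -2.   0.   1.5] -- the best 2-sparse approximation of b
```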
For a general vector b ∈ Rd, the hard thresholded signal Hk(b) is formed by setting all but the largest (in magnitude) k elements of b to zero, with ties broken lexicographically. Hence, the hard thresholded signal Hk(b) is always k-sparse, i.e., the number of its non-zero components does not exceed k. Moreover, Hk(b) is a best k-sparse approximation to b in terms of any ℓp norm (p ≥ 1). That is, for any k-sparse vector x,

‖Hk(b) − b‖p ≤ ‖x − b‖p.

In view of the above inequality, a broadly used bound in the literature for the deviation of the thresholded signal is as follows:

‖Hk(b) − x‖₂ ≤ 2 ‖b − x‖₂.  (1.1)

To gain intuition on the utility of (1.1) and to spell out the importance of offering a tight bound for it, let us consider the compressed sensing problem as an example, where we aim to recover the true sparse signal x from its linear measurements. Here, b is a good but dense approximation to x obtained by, e.g., full gradient descent. Then (1.1) guarantees that when a structured (i.e., sparse) approximation is obtained by hard thresholding, the distance of the iterate to the true signal x is upper bounded by twice the distance before thresholding. For comparison, it is worth mentioning that ℓ1-based convex algorithms usually utilize the soft thresholding operator, which enjoys the non-expansiveness property (Defazio et al., 2014), i.e., the iterate becomes closer to the optimum after projection. This salient feature might partially account for the wide range of applications of ℓ1-regularized formulations. Hence, to derive comparable performance guarantees, tightening the bound (1.1) is crucial in that it controls how much deviation the hard thresholding operator induces. This turns out to be more demanding for stochastic gradient methods, where the proxy b itself is affected by the randomness of the sample realization. In other words, since b does not minimize the objective function (it only optimizes the objective in expectation), the deviation (1.1) makes it more challenging to analyze the convergence behavior. As an example, Nguyen et al. (2014) proposed a stochastic solver for general sparsity-constrained programs but suffered a non-vanishing optimization error due to randomness. This indicates that to mitigate the randomness barrier, we have to seek a better bound that controls both the precision of the thresholded solution and the variance.

1.1 Summary of Contributions

In this work, we make three contributions:

1. We examine the tightness of (1.1), which has been used for a decade in the literature, and show that the equality therein can never be attained. We then improve this bound and quantitatively characterize that the deviation is inversely proportional to the value of k. Our bound is tight, in the sense that the equality we build can be attained for specific signals, hence cannot be improved if no additional information is available. Our bound is universal in the sense that it holds for all choices of K-sparse signals x and for general signals b.

2. Owing to the tight estimate, we demonstrate how the RIP (or RIP-like) condition assumed by a wide range of hard thresholding based algorithms can be relaxed. In the context of compressed sensing, it means that in essence, many more kinds of sensing matrices or fewer measurements can be utilized for data acquisition. For machine learning, it suggests that existing algorithms are capable of handling more difficult statistical models.

3.
Finally, we present a computationally efficient algorithm that applies hard thresholding in the large-scale setting, and we prove its linear convergence to a global optimum up to the statistical precision of the problem. We also prove that with sufficient samples, our algorithm identifies the true parameter for prevalent statistical models. Returning to (1.1), our analysis shows that such an algorithm can succeed only when the deviation is controlled below a multiple of 1.15. This immediately implies that the conventional bound (1.1) is not applicable in this challenging scenario.

1.2 Notation

Before delivering the algorithm and the main theoretical results, let us instate several pieces of notation that are used throughout the paper. We use bold lowercase letters, e.g., v, to denote a vector (either column or row), and its ith element is denoted by vi. The ℓ2-norm of a vector v is denoted by ‖v‖₂. The support set of v, i.e., the indices of its non-zeros, is denoted by supp(v), whose cardinality is written as |supp(v)| or ‖v‖₀. We write bold capital letters such as M for matrices, and the (i, j)-th entry is denoted by mij. The capital upright letter C and its subscript variants (e.g., C0, C1) are reserved for absolute constants whose values may change from appearance to appearance. For an integer d > 0, suppose that Ω is a subset of {1, 2, . . . , d}. Then for a general vector v ∈ Rd, we define PΩ(·) as the orthogonal projection onto the support set Ω, which retains the elements contained in Ω and sets the others to zero. That is,

(PΩ(v))_i = v_i if i ∈ Ω, and (PΩ(v))_i = 0 otherwise.

In particular, let Γ be the support set indexing the k largest absolute components of v. In this way, the hard thresholding operator is given by Hk(v) = PΓ(v). We will also use the orthogonal projection of a vector v onto an ℓ2-ball with radius ω, that is,

Πω(v) = v / max{1, ‖v‖₂/ω}.

1.3 Roadmap

We present the key tight bound for hard thresholding in Section 2, along with a justification of why the conventional bound (1.1) is not tight. We then discuss the implications of the developed tight bound for compressed sensing and machine learning in Section 3, which shows that the RIP or RIP-like condition can be improved for a number of popular algorithms. Thanks to our new estimate, Section 4 develops a novel stochastic algorithm which applies hard thresholding to large-scale problems and establishes its global linear convergence. A comprehensive empirical study on the tasks of sparse recovery and binary classification is carried out in Section 5. Finally, we conclude the paper in Section 6, and all the proofs are deferred to the appendix.

2. The Key Bound

We argue that the conventional bound (1.1) is not tight, in the sense that the equality therein can hardly be attained. To see this, recall how the bound is derived for a k-sparse signal x and a general one b:

‖Hk(b) − x‖₂ = ‖Hk(b) − b + b − x‖₂ ≤ξ ‖Hk(b) − b‖₂ + ‖b − x‖₂ ≤ 2 ‖b − x‖₂,

where the last inequality holds because Hk(b) is a best k-sparse approximation to b. The major issue occurs in the step marked ξ. Though it is the well-known triangle inequality, and the equality could be attained if there were no restriction on the signals x and b, we remind the reader that the signal x does have a specific structure: it is k-sparse. Note that in order to fulfill the equality in ξ, we must have Hk(b) − b = γ(b − x) for some γ ≥ 0, that is,

Hk(b) = (γ + 1)b − γx.  (2.1)

One may verify that the above equality holds if and only if

x = b = Hk(b).  (2.2)

To see this, let Ω be the support set of Hk(b) and Ω̄ be its complement.
Let b1 = PΩ(b) = Hk(b) and b2 = PΩ̄(b). Likewise, we define x1 and x2 as the components of x supported on Ω and Ω̄, respectively. Hence, (2.1) indicates x1 = b1 and x2 = (1 + γ⁻¹)b2, where we assume γ > 0 since γ = 0 immediately implies Hk(b) = b, in which case the equality in (1.1) does not hold. If ‖b1‖₀ < k, then all the remaining elements of b must vanish (because b1 already contains the k largest absolute elements of b), so b2 = 0 and hence x2 = 0. Otherwise, the fact that ‖x‖₀ ≤ k and x1 = b1 implies x2 = 0, and hence b2 = 0. Therefore, we obtain (2.2). When (2.2) happens, however, we in reality have ‖Hk(b) − x‖₂ = ‖b − x‖₂ = 0. In other words, the factor of 2 in (1.1) could essentially be replaced with an arbitrary constant! In this sense, we conclude that the bound (1.1) is not tight. Our new estimate for hard thresholding is as follows:

Theorem 1 (Tight Bound for Hard Thresholding) Let b ∈ Rd be an arbitrary vector and x ∈ Rd be any K-sparse signal. For any k ≥ K, we have the following bound:

‖Hk(b) − x‖₂ ≤ √ν ‖b − x‖₂,  ν = 1 + (ρ + √((4 + ρ)ρ))/2,  ρ = min{K, d − k}/(k − K + min{K, d − k}).

In particular, our bound is tight in the sense that there exist specific vectors b and x for which the equality holds.

Remark 2 (Maximum of ν) In contrast to the constant bound (1.1), our result asserts that the excess deviation resulting from hard thresholding is inversely proportional to k (when K ≤ d − k) in a universal manner. When k tends to d, ρ is given by (d − k)/(d − K), which is still decreasing with respect to k. Thus, the maximum value of ρ equals one. Even in this case, we find that ν_max = (3 + √5)/2, so that the deviation factor is at most √ν_max ≈ 1.618, still well below the constant 2 in (1.1).

Remark 3 Though for some batch algorithms such as IHT and CoSaMP the constant bound (1.1) suffices to establish convergence owing to their specific conditions, we show in Section 4 that it cannot ensure global convergence for stochastic algorithms.

Remark 4 When x is not exactly K-sparse, we can still bound the error by ‖Hk(b) − x‖₂ ≤ ‖Hk(b) − HK(x)‖₂ + ‖HK(x) − x‖₂. Thus, without loss of generality, we assume that the signal x is K-sparse.

Proof (Sketch) Our bound follows from fully exploring the sparsity pattern of the signals and from fundamental arguments in optimization. Denote w := Hk(b). Let Ω be the support set of w and let Ω̄ be its complement. We immediately have PΩ(b) = w. Let Ω′ be the support set of x. Define

b1 = P_{Ω\Ω′}(b),  b2 = P_{Ω∩Ω′}(b),  b3 = P_{Ω̄\Ω′}(b),  b4 = P_{Ω̄∩Ω′}(b).

Likewise, we define xi and wi for 1 ≤ i ≤ 4. Due to the construction, we have w1 = b1, w2 = b2, w3 = w4 = x1 = x3 = 0. Our goal is to estimate the maximum value of ‖w − x‖₂² / ‖b − x‖₂². It is easy to show that when the maximum is attained, ‖b3‖₂ must be zero. Denote

γ := ‖w − x‖₂² / ‖b − x‖₂² = (‖b1‖₂² + ‖b2 − x2‖₂² + ‖x4‖₂²) / (‖b1‖₂² + ‖b2 − x2‖₂² + ‖b4 − x4‖₂²).  (2.3)

Note that the variables here only involve x and b. Rearranging the equation, we obtain

(γ − 1)‖b2 − x2‖₂² + γ‖b4 − x4‖₂² − ‖x4‖₂² + (γ − 1)‖b1‖₂² = 0.  (2.4)

It is evident that for specific choices of b and x, we have γ = 1. Since we are interested in the maximum of γ, we assume γ > 1 below. Fixing b, we can view the left-hand side of the above equation as a function of x. One may verify that this function has a positive definite Hessian matrix, and thus it attains its minimum at the stationary point given by

x2* = b2,  x4* = (γ/(γ − 1)) b4.  (2.5)

On the other hand, (2.4) implies that the minimum function value should not be greater than zero. Plugging the stationary point back in gives

‖b1‖₂² γ² − (2‖b1‖₂² + ‖b4‖₂²) γ + ‖b1‖₂² ≤ 0.
Solving the above inequality with respect to γ, we obtain

γ ≤ 1 + (‖b4‖₂² / (2‖b1‖₂²)) · (1 + √((4‖b1‖₂² + ‖b4‖₂²) / ‖b4‖₂²)).  (2.6)

To derive an upper bound that is uniform over the choice of b, we recall that b1 contains the largest absolute elements of b while b4 has smaller values. In particular, the average energy of the entries in b1 is larger than that in b4, which gives ‖b4‖₂²/‖b4‖₀ ≤ ‖b1‖₂²/‖b1‖₀. Note that ‖b1‖₀ = k − ‖b2‖₀ = k − (K − ‖b4‖₀). Hence, combining this with the fact that 0 ≤ ‖b4‖₀ ≤ min{K, d − k} and optimizing over ‖b4‖₀ in the above inequality gives

‖b4‖₂² ≤ (min{K, d − k} / (k − K + min{K, d − k})) ‖b1‖₂².  (2.7)

Finally, since the right-hand side of (2.6) is increasing in the ratio ‖b4‖₂²/‖b1‖₂², we arrive at a uniform upper bound

γ ≤ 1 + (ρ + √((4 + ρ)ρ))/2,  ρ = min{K, d − k}/(k − K + min{K, d − k}).

See Appendix B for the full proof.

Remark 5 (Tightness) We construct proper vectors b and x to establish the tightness of our bound by a backward induction. Note that γ equals ν if and only if ‖b4‖₂² = ρ‖b1‖₂². Hence, we pick

‖b4‖₂² = ρ‖b1‖₂²,  x2 = b2,  x4 = (ν/(ν − 1)) b4,  (2.8)

where x2 and x4 are actually chosen as the stationary point in (2.5). We note that the quantity ν only depends on d, k and K, not on the components of b or x. Plugging the above back into (2.3) justifies γ = ν. It remains to show that our choices in (2.8) do not violate the definition of the bi's, i.e., we need to ensure that the elements in b1 or b2 are equal to or greater in magnitude than those in b3 or b4. Note that there is no such constraint for the K-sparse vector x. Let us consider the case K < d − k and ‖b4‖₀ = K, so that ‖b1‖₀ = k and ρ = K/k. Thus, the first equality of (2.8) holds as soon as all the entries of b have the same magnitude. The fact ‖b4‖₀ = K also implies that Ω′ is a subset of Ω̄ due to the definition of b4 and the sparsity of x; hence we have x2 = 0 = b2. Finally, picking x4 as in (2.8) completes the reasoning since it does not violate the sparsity constraint on x.

As we pointed out and just verified, the bound given by Theorem 1 is tight. However, if there is additional information about the signals, a better bound can be established. For instance, let us further assume that the signal b is r-sparse. If r ≤ k, then b4 is a zero vector and (2.6) reads as γ ≤ 1. Otherwise, we have ‖b4‖₀ ≤ min{K, r − k} and (2.7) is improved to

‖b4‖₂² ≤ (min{K, r − k} / (k − K + min{K, r − k})) ‖b1‖₂².

Hence, we can show that the parameter ρ is given by

ρ = min{K, r − k} / (k − K + min{K, r − k}).

Note that the fact r ≤ d implies that the above is a tighter bound than the one in Theorem 1. We would also like to mention that in Lemma 1 of Jain et al. (2014), a closely related bound was established:

‖Hk(b) − b‖₂ ≤ √((d − k)/(d − K)) ‖b − x‖₂.  (2.9)

One may use this nice result to show that

‖Hk(b) − x‖₂ ≤ ‖Hk(b) − b‖₂ + ‖b − x‖₂ ≤ (1 + √((d − k)/(d − K))) ‖b − x‖₂,  (2.10)

which also improves on (1.1) provided k > K. However, one shortcoming of (2.10) is that the factor depends on the dimension d. For comparison, we recall that in the regime K ≤ d − k, our bound is free of the dimension. This turns out to be a salient feature when integrating hard thresholding into stochastic methods, and we will comment on it more in Section 4.

3. Implications to Compressed Sensing

In this section, we investigate the implications of Theorem 1 for compressed sensing and signal processing. Since most of the HT-based algorithms utilize the deviation bound (1.1) to derive their convergence conditions, they can be improved by our new bound. We exemplify the power of our theorem on two popular algorithms: IHT (Blumensath and Davies, 2009) and CoSaMP (Needell and Tropp, 2009). We note that our analysis also applies to their extensions such as Bahmani et al. (2013).
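Before examining IHT and CoSaMP in detail, a quick numerical sanity check of Theorem 1 can be carried out with the script below, which evaluates the deviation ratio on a worst-case pair (b, x) built in the spirit of Remark 5 (here b is supported on Ω ∪ Ω′ with equal-magnitude entries, a variant of that construction); all variable names are ours.

```python
import numpy as np

def hard_threshold(b, k):
    out = np.zeros_like(b)
    keep = np.argsort(-np.abs(b), kind="stable")[:k]
    out[keep] = b[keep]
    return out

d, k, K = 100, 10, 5                     # K < d - k, hence rho = K / k
rho = K / k
nu = 1.0 + (rho + np.sqrt((4.0 + rho) * rho)) / 2.0

# b: equal-magnitude entries on the first k + K coordinates, zero elsewhere,
#    so H_k(b) keeps the first k of them (lexicographic tie-breaking);
# x: K-sparse, supported on the remaining K coordinates, scaled as in (2.8).
b = np.zeros(d); b[:k + K] = 1.0
x = np.zeros(d); x[k:k + K] = nu / (nu - 1.0)

ratio = np.linalg.norm(hard_threshold(b, k) - x) / np.linalg.norm(b - x)
print(ratio, np.sqrt(nu))                # both print 1.4142..., i.e., sqrt(nu) < 2
```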
To be clear, the purpose of this section is not dedicated to improving the best RIP condition for which recovery is possible by any methods (either convex or non-convex). Rather, we focus on two broadly used greedy algorithms and illustrate how our bound improves on previous results. We proceed with a brief review of the problem setting in compressed sensing. Compressed sensing algorithms aim to recover the true K-sparse signal x Rd from a set of its (perhaps noisy) measurements y = Ax + ε, (3.1) where ε Rd is some observation noise and A is a known n d sensing matrix with n d, hence the name compressive sampling. In general, the model is not identifiable since it is an underdetermined system. Yet, the prior knowledge that x is sparse radically changes the premise. That is, if the geometry of the sparse signal is preserved under the action of the sampling matrix A for a restricted set of directions, then it is possible to invert the sampling process. Such a novel idea was quantified as the kth restricted isometry property of A by Cand es and Tao (2005), which requires that there exists a constant δ 0, such that for all k-sparse signals x (1 δ) x 2 2 Ax 2 2 (1 + δ) x 2 2 . (3.2) The kth restricted isometry constant (RIC) δk is then defined as the smallest one that satisfies the above inequalities. Note that δ2k < 1 is the minimum requirement for distinguishing all k-sparse signals from the measurements. This is because for two arbitrary k-sparse vectors x1 and x2 and their respective measurements y1 and y2, the RIP condition reads as (1 δ2k) x1 x2 2 2 y1 y2 2 2 (1 + δ2k) x1 x2 2 2 , for which δ2k < 1 guarantees that x1 = x2 implies y1 = y2. To date, there are three quintessential examples known to exhibit a profound restricted isometry behavior as long as the number of measurements is large enough: Gaussian matrices (optimal RIP, i.e., very small δk), partial Fourier matrices (fast computation) and Bernoulli ensembles (low memory footprint). Notably, it was shown in recent work that random matrices with a heavy-tailed distribution also satisfy the RIP with overwhelming probability (Adamczak et al., 2011; Li et al., 2014). Equipped with the standard RIP condition, many efficient algorithms have been developed. A partial list includes ℓ1-norm based convex programs, IHT, Co Sa MP, SP and regularized OMP (Needell and Vershynin, 2010), along with much interesting work devoted to improving or sharpening the RIP condition (Wang and Shim, 2012; Mo and Shen, 2012; Cai and Zhang, 2013; Mo, 2015). To see why relaxing RIP is of central interest, note that the standard result (Baraniuk et al., 2008) asserts that the RIP condition δk δ holds with high probability over the draw of A provided n C0δ 2k log(d/k). (3.3) Hence, a slight relaxation of the condition δk δ may dramatically decrease the number of measurements. That being said, since the constant C0 above is unknown, in general one cannot tell A TIGHT BOUND OF HARD THRESHOLDING the precise sample size for greedy algorithms. Estimating the constant is actually the theme of phase transition (Donoho and Tanner, 2010; Donoho et al., 2013). While precise phase transition for ℓ1-based convex programs has been well understood (Wainwright, 2009), an analogous result for greedy algorithms remains an open problem. Notably, in Blanchard and Tanner (2015), phase transition for IHT/Co Sa MP was derived using the constant bound (1.1). We believe that our tight bound shall sharpen these results and we leave it as our future work. 
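Although computing a restricted isometry constant exactly is intractable (a point revisited in Section 3.1), the spread of ‖Ax‖₂² over random k-sparse unit vectors gives a cheap lower estimate of δk in (3.2) and illustrates how the number of measurements n enters. The sketch below, with our own function name and parameters, is meant purely as such an illustration.

```python
import numpy as np

def empirical_rip_spread(n, d, k, trials=2000, seed=0):
    """Lower estimate of the RIP constant delta_k of an n x d Gaussian matrix.

    Samples random k-sparse unit vectors x and records ||Ax||_2^2; the largest
    deviation from 1 lower-bounds delta_k in (3.2).  Exact computation would
    require enumerating all supports.  Illustrative only.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, d))
    worst = 0.0
    for _ in range(trials):
        x = np.zeros(d)
        support = rng.choice(d, size=k, replace=False)
        x[support] = rng.normal(size=k)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

for n in (100, 200, 400):                # more measurements -> smaller spread
    print(n, empirical_rip_spread(n, d=512, k=10))
```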
In the present paper, we focus on the ubiquitous RIP condition. In the language of RIP, we establish improved results. 3.1 Iterative Hard Thresholding The IHT algorithm recovers the underlying K-sparse signal x by iteratively performing a full gradient descent on the least-squares loss followed by a hard thresholding step. That is, IHT starts with an arbitrary point x0 and at the t-th iteration, it updates the new solution as follows: xt = Hk xt 1 + A (y Axt 1) . (3.4) Note that Blumensath and Davies (2009) used the parameter k = K. However, in practice one may only know to an upper bound on the true sparsity K. Thus, we consider the projection sparsity k as a parameter that depends on K. To establish the global convergence with a geometric rate of 0.5, Blumensath and Davies (2009) applied the bound (1.1) and assumed the RIP condition δ2k+K 0.18. (3.5) As we have shown, (1.1) is actually not tight and hence, their results, especially the RIP condition can be improved by Theorem 1. Theorem 6 Consider the model (3.1) and the IHT algorithm (3.4). Pick k K and let {xt}t 1 be the iterates produced by IHT. Then, under the RIP condition δ2k+K 1/ 8ν, for all t 1 xt x 2 0.5t x0 x 2 + C ε 2 , where ν is given by Theorem 1. Let us first study the vanilla case k = K. Blumensath and Davies (2009) required δ3K 0.18 whereas our analysis shows δ3K 0.22 suffices. Note that even a little relaxation on RIP is challenging and may require several pages of mathematical induction (Cand es, 2008; Cai et al., 2010; Foucart, 2012). In contrast, our improvement comes from a direct application of Theorem 1 which only modifies several lines of the original proof in Blumensath and Davies (2009). See Appendix C for details. In view of (3.3), we find that the necessary number of measurements for IHT is dramatically reduced with a factor of 0.67 by our new theorem in that the minimum requirement of n is inversely proportional to the square of δ2k+K. Another important consequence of the theorem is a characterization on the RIP condition and the sparsity parameter, which, to the best of our knowledge, has not been studied in the literature. In Blumensath and Davies (2009), when gradually tuning k larger than K, it always requires δ2k+K 0.18. Note that due to the monotonicity of RIC, i.e., δr δr if r r , the condition turns out to be more and more stringent. Compared to their result, since ν is inversely proportional to k, Theorem 6 is powerful especially when k becomes larger. For example, suppose k = 20K. In this SHEN AND LI case, Theorem 6 justifies that IHT admits the linear convergence as soon as δ41K 0.32 whereas Blumensath and Davies (2009) requires δ41K 0.18. Such a property is appealing in practice, in that among various real-world applications, the true sparsity is indeed unknown and we would like to estimate a conservative upper bound on it. On the other hand, for a given sensing matrix, there does exist a fundamental limit for the maximum choice of k. To be more precise, the condition in Theorem 6 together with the probabilistic argument (3.3) require 8ν δ2k+K, C1ν(2k + K) log (d/(2k + K)) n. Although it could be very interesting to derive a quantitative characterization for the maximum value of k, we argue that it is perhaps intractable owing to two aspects: First, it is known that one has to enumerate all the combinations of the 2k + K columns of A to compute the restricted isometry constant δ2k+K (Bah and Tanner, 2010, 2014). This suggests that it is NP-hard to estimate the largest admissible value of k. 
Also, there is no analytic solution of the stationary point for the left-hand side of the second inequality. 3.2 Compressive Sampling Matching Pursuit The Co Sa MP algorithm proposed by Needell and Tropp (2009) is one of the most efficient algorithms for sparse recovery. Let F(x) = y Ax 2 2. Co Sa MP starts from an arbitrary initial point x0 and proceeds as follows: Ωt = supp F(xt 1), k supp xt 1 , bt = arg min x F(x), s.t. supp (x) Ωt, xt = Hk bt . Compared to IHT which performs hard thresholding after gradient update, Co Sa MP prunes the gradient at the beginning of each iteration, followed by solving a least-squares program restricted on a small support set. In particular, in the last step, Co Sa MP applies hard thresholding to form a k-sparse iterate for future updates. The analysis of Co Sa MP consists of bounding the estimation error in each step. Owing to Theorem 1, we advance the theoretical result of Co Sa MP by improving the error bound for its last step, and hence the RIP condition. Theorem 7 Consider the model (3.1) and the Co Sa MP algorithm. Pick k K and let {xt}t 1 be the iterates produced by Co Sa MP. Then, under the RIP condition 32ν + 49 9 1/2 it holds that for all t 1 xt x 2 0.5t x0 x 2 + C ε 2 , where ν is given by Theorem 1. A TIGHT BOUND OF HARD THRESHOLDING Roughly speaking, the bound is still inversely proportional to ν. Hence, it is monotonically increasing with respect to k, indicating our theorem is more effective for a large quantity of k. In fact, for the Co Sa MP algorithm, our bound above is superior to the best known result even when k = K. To see this, we have the RIP condition δ4K 0.31. In comparison, Needell and Tropp (2009) derived a bound δ4K 0.1 and Foucart and Rauhut (2013, Theorem 6.27) improved it to δ4K < 0.29 for a geometric rate of 0.5. We notice that for binary sparse vectors, Jain et al. (2014) presented a different proof technique and obtained the RIP condition δ4K 0.35 for Co Sa MP. 4. Hard Thresholding in Large-Scale Optimization Now we move on to the machine learning setting where our focus is pursuing an optimal sparse solution that minimizes a given objective function based on a set of training samples Zn 1 := {Zi}n i=1. Different from compressed sensing, we usually have sufficient samples which means n can be very large. Therefore, the computational complexity is of primary interest. Formally, we are interested in optimizing the following program: min x Rd F(x; Zn 1 ) = 1 i=1 f(x; Zi), s.t. x 0 K, x 2 ω. (4.1) The global optimum of the above problem is denoted by xopt. We note that the objective function is presumed to be decomposable with respect to the samples. This is quite a mild condition and most of the popular machine learning models fulfill it. Typical examples include (but not limited to) the sparse linear regression and sparse logistic regression: Sparse Linear Regression: For all 1 i n, we have Zi = (ai, yi) Rd R and the loss function F(x; Zn 1 ) = 1 2n Ax y 2 2 is the least-squares and can be explained by f(x; Zi) = 1 2 ai x yi 2 2. Sparse Logistic Regression: For all 1 i n, we have Zi = (ai, yi) Rd {+1, 1} and the negative log-likelihood is penalized, i.e., F(x; Zn 1 ) = 1 n Pn i=1 log (1 + exp ( yiai x)) for which f(x; Zi) = log (1 + exp ( yiai x)). To ease notation, we will often write F(x; Zn 1 ) as F(x) and f(x; Zi) as fi(x) for i = 1, 2, , n. It is worth mentioning that the objective function F(x) is allowed to be non-convex. 
Hence, in order to ensure the existence of a global optimum, a natural option is to impose an ℓp-norm (p 1) constraint (Loh and Wainwright, 2012, 2015). Here we choose the ℓ2-norm constraint owing to its fast projection. Previous work, e.g., Agarwal et al. (2012) prefers the computationally less efficient ℓ1-norm to promote sparsity and to guarantee the existence of optimum. In our problem, yet, we already have imposed the hard sparsity constraint so the ℓ2-norm constraint is a better fit. The major contribution of this section is a computationally efficient algorithm termed hard thresholded stochastic variance reduced gradient method (HT-SVRG) to optimize (4.1), tackling one of the most important problems in large-scale machine learning: producing sparse solutions by stochastic methods. We emphasize that the formulation (4.1) is in stark contrast to the ℓ1-regularized programs considered by previous stochastic solvers such as Prox-SVRG (Xiao and Zhang, 2014) and SAGA (Defazio et al., 2014). We target here a stochastic algorithm for the non-convex problem that is less exploited in the literature. From a theoretical perspective, (4.1) is more difficult to analyze but it always produces sparse solutions, whereas performance guarantees for convex programs SHEN AND LI are fruitful but one cannot characterize the sparsity of the obtained solution (usually the solution is not sparse). When we appeal to stochastic algorithms to solve the convex programs, the ℓ1-norm formulation becomes much less effective in terms of sparsification, naturally owing to the randomness. See Langford et al. (2009); Xiao (2010); Duchi and Singer (2009) for more detailed discussion on the issue. We also remark that existing work such as Yuan et al. (2018); Bahmani et al. (2013); Jain et al. (2014) investigated the sparsity-constrained problem (4.1) in a batch scenario, which is not practical for large-scale learning problems. The perhaps most related work to our new algorithm is Nguyen et al. (2014). Nonetheless, the optimization error therein does not vanish for noisy statistical models. Our main result shows that for prevalent statistical models, our algorithm is able to recover the true parameter with a linear rate. Readers should distinguish the optimal solution xopt and the true parameter. For instance, consider the model (3.1). Minimizing (4.1) does not amount to recovering x if there is observation noise. In fact, the convergence to xopt is only guaranteed to an accuracy reflected by the statistical precision of the problem, i.e., x xopt 2, which is the best one can hope for any statistical model (Agarwal et al., 2012). We find that the global convergence is attributed to both the tight bound and the variance reduction technique to be introduced below, and examining the necessity of them is an interesting future work. Algorithm 1 Hard Thresholded Stochastic Variance Reduced Gradient Method (HT-SVRG) Require: Training samples {Zi}n i=1, maximum stage count S, sparsity parameter k, update frequency m, learning rate η, radius ω, initial solution ex0. Ensure: Optimal solution ex S. 1: for s = 1 to S do 2: Set ex = exs 1, eµ = 1 n Pn i=1 fi(ex), x0 = ex. 3: for t = 1 to m do 4: Uniformly pick it {1, 2, , n} and update the solution bt = xt 1 η fit(xt 1) fit(ex) + eµ , rt = Hk bt , xt = Πω(rt). 6: Uniformly choose js {0, 1, , m 1} and set exs = xjs. 
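The following NumPy sketch mirrors the pseudo-code of Algorithm 1; the loss interface (a per-sample gradient callback) and all names are our own choices, and the sparse least-squares usage at the end is only a toy instance.

```python
import numpy as np

def hard_threshold(b, k):
    out = np.zeros_like(b)
    keep = np.argsort(-np.abs(b), kind="stable")[:k]
    out[keep] = b[keep]
    return out

def project_ball(v, omega):
    """Orthogonal projection onto the l2-ball of radius omega."""
    nrm = np.linalg.norm(v)
    return v if nrm <= omega else v * (omega / nrm)

def ht_svrg(grad_i, n, d, k, omega, eta, m, stages, seed=0):
    """Sketch of HT-SVRG (Algorithm 1).  grad_i(x, i) returns the gradient of f_i at x."""
    rng = np.random.default_rng(seed)
    x_tilde = np.zeros(d)                                  # \tilde{x}^0
    for _ in range(stages):
        mu = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)   # full gradient
        iterates = [x_tilde.copy()]                        # x_0 = \tilde{x}
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_tilde, i) + mu     # variance-reduced gradient
            x = project_ball(hard_threshold(x - eta * g, k), omega)
            iterates.append(x.copy())
        x_tilde = iterates[rng.integers(m)]                # uniform over x_0, ..., x_{m-1}
    return x_tilde

# toy usage: noiseless sparse least squares, f_i(x) = 0.5 * (a_i @ x - y_i)^2
rng = np.random.default_rng(1)
n, d, K = 200, 100, 5
A = rng.normal(size=(n, d)) / np.sqrt(n)
x_star = np.zeros(d); x_star[:K] = 1.0
y = A @ x_star
grad = lambda x, i: (A[i] @ x - y[i]) * A[i]
x_hat = ht_svrg(grad, n, d, k=3 * K, omega=5.0, eta=0.5, m=3 * n, stages=30)
print("estimation error:", np.linalg.norm(x_hat - x_star))
```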
4.1 Algorithm Our algorithm (Algorithm 1) applies the framework of Johnson and Zhang (2013), where the primary idea is to leverage past gradients for the current update for the sake of variance reduction a technique that has a long history in statistics (Owen and Zhou, 2000). To guarantee that each iterate is k-sparse, it then invokes the hard thresholding operation. Note that the orthogonal projection for rt will not change the support set, and hence xt is still k-sparse. Also note that our sparsity constraint in (4.1) reads as x 0 K. What we will show below is that when the parameter k is properly chosen (which depends on K), we obtain a globally convergent sequence of iterates. A TIGHT BOUND OF HARD THRESHOLDING The most challenging part on establishing the global convergence comes from the hard thresholding operation Hk rt . Note that it is bt that reduces the objective value in expectation. If bt is not k-sparse (usually it is dense), xt is not equal to bt so it does not decrease the objective function. In addition, compared with the convex proximal operator (Defazio et al., 2014) which enjoys the non-expansiveness of the distance to the optimum, the hard thresholding step can enlarge the distance up to a multiple of 2 if using the bound (1.1). What makes it a more serious issue is that these inaccurate iterates xt will be used for future updates, and hence the error might be progressively propagated at an exponential rate. Our key idea is to first bound the curvature of the function from below and above to establish RIP-like condition, which, combined with Theorem 1, downscales the deviation resulting from hard thresholding. Note that ν is always greater than one (see Theorem 1), hence the curvature bound is necessary. Due to variance reduction, we show that the optimization error vanishes when restricted on a small set of directions as soon as we have sufficient samples. Moreover, with hard thresholding we are able to control the error per iteration and to obtain near-optimal sample complexity. 4.2 Deterministic Analysis We will first establish a general theorem that characterizes the progress of HT-SVRG for approximating an arbitrary K-sparse signal bx. Then we will discuss how to properly choose the hyperparameters of the algorithm. Finally we move on to specify bx to develop convergence results for a global optimum of (4.1) and for a true parameter (e.g., x of the compressed sensing problem). 4.2.1 ASSUMPTION Our analysis depends on two properties of the curvature of the objective function that have been standard in the literature. Readers may refer to Bickel et al. (2009); Negahban et al. (2009); Jain et al. (2014) for a detailed description. Definition 8 (Restricted Strong Convexity) A differentiable function g : Rd R is said to satisfy the property of restricted strong convexity (RSC) with parameter αr > 0, if for all vectors x, x Rd with x x 0 r, it holds that g(x ) g(x) g(x), x x αr Definition 9 (Restricted Smoothness) A differentiable function g : Rd R is said to satisfy the property of restricted smoothness (RSS) with parameter Lr > 0, if for all vectors x, x Rd with x x 0 r, it holds that g(x ) g(x) 2 Lr x x 2 . With these definitions, we assume the following: (A1) F(x) satisfies the RSC condition with parameter αk+K. (A2) For all 1 i n, fi(x) satisfies the RSS condition with parameter L3k+K. Here, we recall that K was first introduced in (4.1) and the parameter k was used in our algorithm. 
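To make (A1) and (A2) concrete, consider the least-squares example from (4.1); the display below is our own worked instance, not a statement from the paper.

```latex
% Least-squares instance: f_i(x) = \tfrac{1}{2}(a_i^\top x - y_i)^2, so the
% Hessians are constant in x:
\nabla^2 f_i(x) = a_i a_i^\top, \qquad
\nabla^2 F(x)  = \frac{1}{n} A^\top A .
% (A1) then asks for a restricted eigenvalue lower bound,
\alpha_{k+K} \;=\; \min_{\|u\|_0 \le k+K,\ \|u\|_2 = 1} \ \frac{1}{n}\|A u\|_2^2 \;>\; 0 ,
% while the RSS constant of each f_i in (A2) is governed by the energy of a_i
% on its largest coordinates, so bounding L_{3k+K} amounts to controlling
\max_{1 \le i \le n} \ \big\| H_{3k+K}(a_i) \big\|_2^2 ,
% which is the quantity discussed around Proposition 18.
```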
Compared to the convex algorithms such as SAG (Roux et al., 2012), SVRG (Johnson and Zhang, SHEN AND LI 2013) and SAGA (Defazio et al., 2014) that assume strong convexity and smoothness everywhere, we only assume these in a restricted sense. This is more practical especially in the high dimensional regime where the Hessian matrix could be degenerate (Agarwal et al., 2012). We also stress that the RSS condition is imposed on each fi(x), whereas prior work requires it for F(x) which is milder than ours (Negahban et al., 2009). 4.2.2 UPPER BOUND OF PROGRESS For brevity, let us denote L := L3k+K, α := αk+K, c := L/α, where we call the quantity c as the condition number of the problem. It is also crucial to measure the ℓ2-norm of the gradient restricted on sparse directions, and we write 3k+KF(x) 2 := max Ω PΩ( F(x)) 2 : |Ω| 3k + K . Note that for convex programs, the above evaluated at a global optimum is zero. As will be clear, 3k+KF(x) 2 reflects how close the iterates returned by HT-SVRG can be to the point x. For prevalent statistical models, it vanishes when there are sufficient samples. Related to this quantity, our analysis also involves Q(x) := 16νη2Lωm + 2ω 3k+KF(x) 2 + 4νη2m 3k+KF(x) 2 2 , where we recall that ν is the expansiveness factor given by Theorem 1, η and m are used in the algorithm and ω is a universal constant that upper bounds the ℓ2-norm of the signal we hope to estimate. Virtually, with an appropriate parameter setting, Q(x) scales as 3k+KF(x) 2 which will be clarified. For a particular stage s, we denote Is := {i1, i2, , im}, i.e., the samples randomly chosen for updating the solution. Theorem 10 Consider Algorithm 1 and a K-sparse signal bx of interest. Assume (A1) and (A2). Pick the step size 0 < η < 1/(4L). If ν < 4L/(4L α), then it holds that E F(exs) F(bx) βs F(ex0) F(bx) + τ(bx), where the expectation is taken over {I1, j1, I2, j2, , Is, js} and 0 < β < 1 provided that m is large enough. In particular, for 1/(1 ηα) < ν < 4L/(4L α), we have β = β1 := 1 (2νηα 2νη2αL ν + 1) m + 2νη2αL 2νηα 2νη2αL ν + 1, τ(bx) = τ1(bx) := αQ(bx) 2(2νηα 2νη2αL ν + 1)(1 β1)m. For ν 1/(1 ηα), we have β = β2 := 1 νηα(1 2ηL)m + 2ηL 1 2ηL, τ(bx) = τ2(bx) := Q(bx) 2νηα(1 2ηL)(1 β2)m. A TIGHT BOUND OF HARD THRESHOLDING The proof can be found in Appendix D.1. Remark 11 For the theorem to hold, ν < p 4L/(4L α) p 4/3 1.15 due to L α. Hence, the conventional bound (1.1) is not applicable. In contrast, Theorem 1 asserts that this condition can be fulfilled by tuning k slightly larger than K. Remark 12 With the conditions on η and ν, the coefficient β is always less than one provided that m is sufficiently large. Remark 13 The theorem does not assert convergence to an arbitrary sparse vector bx. This is because F(exs) F(bx) might be less than zero. However, specifying bx does give convergence results, as to be elaborated later. 4.2.3 HYPER-PARAMETER SETTING Before moving on to the convergence guarantee, let us discuss the minimum requirement on the hyper-parameters k, m and η, and determine how to choose them to simplify Theorem 10. For the sake of success of HT-SVRG, we require ν < 4c/(4c 1), which implies ρ < 1/(16c2 4c). Recall that ρ is given in Theorem 1. In general, we are interested in the regime K k d. Hence, we have ρ = K/k and the minimum requirement for the sparsity parameter is k > (16c2 4c)K. (4.2) To our knowledge, the idea of relaxed sparsity was first introduced in Zhang (2011) for OMP and in Jain et al. (2014) for projected gradient descent. 
However, the relaxed sparsity here emerges in a different way in that HT-SVRG is a stochastic algorithm, and their proof technique cannot be used. We also contrast our tight bound to the inequality (2.10) that is obtained by combining the triangle inequality and Lemma 1 of Jain et al. (2014). Following our proof pipeline, (2.10) gives 4c(4c 1) 1 1 2 d + p 4c(4c 1) 1 1 2 K which grows with the dimension d, whereas using Theorem 1 the sparsity parameter k depends only on the desired sparsity K. In this regard, we conclude that for the stochastic case, our bound is vital. Another component of the algorithm is the update frequency m. Intuitively, HT-SVRG performs m number of stochastic gradient update followed by a full gradient evaluation, in order to mitigate the variance. In this light, m should not be too small. Otherwise, the algorithm reduces to the full gradient method which is not computationally efficient. On the other spectrum, a large m leads to a slow convergence that is reflected in the convergence coefficient β. To quantitatively analyze how m should be selected, let us consider the case ν 1/(1 ηα) for example. The case 1/(1 ηα) < ν < 4L/(4L α) follows in a similar way. In order to ensure β2 < 1, we must have m > 1/ (νηα(1 4ηL)). In particular, picking L , η (0, 1/4), (4.3) we find that the update frequency m has to satisfy m > c νη (1 η ), (4.4) SHEN AND LI which is of the same order as in the convex case (Johnson and Zhang, 2013) when η = Θ(1). Note that the way we choose the learning rate η = η /L is also a common practice in convex optimization (Nesterov, 2004). With (4.2), (4.3) and (4.4) in mind, we provide detailed choices of the hyper-parameters. Due to 0 < η < 1/(4L), β1 is monotonically increasing with respect to ν. By Theorem 1, we know that ν is decreasing with respect to k. Thus, a larger quantity of k results in a smaller value of β1, and hence a faster rate. Interestingly, for β2 we discover that the smaller the k is, the faster the algorithm concentrates. Hence, we have the following consequence: Proposition 14 Fix η and m. Then the optimal choice of ν in Theorem 10 is ν = 1/(1 ηα) in the sense that the convergence coefficient β attains the minimum. In light of the proposition, in the sections to follow, we will only consider the setting ν = 1/(1 ηα). But we emphasize that our analysis and results essentially apply to any ν 4L/(4L α). Now let η = 1 8L, m = 4(8c 1), k = 8c(8c 1)K. (4.5) 3, τ(bx) = 5ω α 3k+KF(bx) 2 + 1 αL 3k+KF(bx) 2 2 . (4.6) 4.2.4 GLOBAL LINEAR CONVERGENCE We are in the position to state the global linear convergence to an optimum of the sparsity-constrained optimization program (4.1). Corollary 15 Assume (A1) and (A2). Consider the HT-SVRG algorithm with hyper-parameters given in (4.5). Then the sequence {exs}s 1 converges linearly to a global optimum xopt of (4.1) E F(exs) F(xopt) 2 s F(ex0) F(xopt) α 3k+KF(xopt) 2 + 1 αL 3k+KF(xopt) 2 2 . Proof This is a direct consequence of Theorem 10. Whenever 3k+KF(xopt) = 0, the corollary reads as E F(exs) F(xopt) 2 s F(ex0) F(xopt) . It implies that if one is solving a convex problem without the sparsity constraint but the optimal solution happens to be sparse, it is safe to perform hard thresholding without loss of optimality. We exemplify such behavior with another algorithm SAGA (Defazio et al., 2014) in Appendix E. 
In the noiseless compressed sensing setting where y = Ax , the corollary guarantees that HT-SVRG exactly recovers the underlying true signal x when F(x) is chosen as the least-squares loss in that xopt = x and F(x ) = A (Ax y) = 0. On the other side, the RSC property implies that 2 max{F(exs) F(bx), 0} α + 2 k+KF(bx) 2 A TIGHT BOUND OF HARD THRESHOLDING The proof is straightforward and can be found in Lemma 14 of Shen and Li (2017a). Now we specify bx as the true parameter of some statistical model, for instance, x in (3.1). It is hence possible to establish recovery guarantee of x , which is known as the problem of parameter estimation. Corollary 16 Assume (A1) and (A2). Let L be the RSS parameter of F(x) at the sparsity level 3k+K. Consider the HT-SVRG algorithm with hyper-parameters given in (4.5). Then the sequence {exs}s 1 recovers a K-sparse signal x with a geometric rate 2 ex0 x 2 + α2 3k+KF(x ) 2 3k+KF(x ) 2 . The proof can be found in Appendix D.2. Remark 17 The RSS parameter L of F(x) always ranges in [α, L], which is simply by definition. 4.2.5 COMPUTATIONAL COMPLEXITY We compare the computational complexity of HT-SVRG to that of projected gradient descent (PGD) studied in Jain et al. (2014), which is a batch counterpart to HT-SVRG. First, we remark that the analysis of PGD is based on the smoothness parameter L of F(x) at sparsity level 2k + K. We write c = L /α. To achieve a given accuracy ϵ > 0, PGD requires O (c log(1/ϵ)) iterations. Hence the total computational complexity is O (nc d log(1/ϵ)). For HT-SVRG, in view of Corollary 15, the convergence coefficient is a constant. Hence, HT-SVRG needs O (log(1/ϵ)) iterations where we note that the error term 3k+KF(x ) 2 can be made as small as ϵ with sufficient samples (to be clarified in the sequel). In each stage, HT-SVRG computes a full gradient eµ followed by m times stochastic updates. Therefore, the total complexity of HT-SVRG is given by O ((n + c)d log(1/ϵ)) by noting the fact m = O (c). In the scenario c < n(c 1), HT-SVRG significantly improves on PGD in terms of time cost. 4.3 Statistical Results The last ingredient of our theorem is the term τ(bx) which measures how close the iterates could be to a given sparse signal bx. With appropriate hyper-parameter settings, the quantity relies exclusively on 3k+KF(bx) 2, as suggested by (4.6). Thereby, this section is dedicated to characterizing 3k+KF(bx) 2. We will also give examples for which HT-SVRG is computationally more efficient than PGD. For the purpose of a concrete result, we study two problems: sparse linear regression and sparse logistic regression. These are two of the most popular statistical models in the literature and have found a variety of applications in machine learning and statistics (Raskutti et al., 2011). Notably, it is known that similar statistical results can be built for low-rank matrix regression, sparse precision matrix estimation, as suggested in Negahban et al. (2009); Agarwal et al. (2012). 4.3.1 SPARSE LINEAR REGRESSION For sparse linear regression, the observation model is given by y = Ax + ε, x 0 K, x 2 ω, (4.7) SHEN AND LI where A Rn d is the design matrix, y Rn is the response, ε Rn is some noise, and x is the K-sparse true parameter we hope to estimate from the knowledge of A and y. Note that when we have the additional constraint n d, the model above is exactly that of compressed sensing (3.1). 
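For concreteness, data from model (4.7) under assumption (A3) can be simulated as follows; the helper and its defaults (identity covariance, unit entries on the support) are our own illustrative choices.

```python
import numpy as np

def simulate_sparse_regression(n, d, K, sigma, Sigma=None, seed=0):
    """Draw (A, y, x_star) from the sparse linear regression model (4.7).

    Rows of A are i.i.d. N(0, Sigma) (identity by default), x_star is K-sparse
    with unit entries on a random support, and the noise is i.i.d. N(0, sigma^2).
    Illustrative only.
    """
    rng = np.random.default_rng(seed)
    if Sigma is None:
        A = rng.normal(size=(n, d))
    else:
        A = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    x_star = np.zeros(d)
    x_star[rng.choice(d, size=K, replace=False)] = 1.0
    y = A @ x_star + sigma * rng.normal(size=n)
    return A, y, x_star

A, y, x_star = simulate_sparse_regression(n=500, d=1000, K=10, sigma=0.1)
```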
In order to (approximately) estimate the parameter, a natural approach is to optimize the following non-convex program: min x F(x) := 1 i=1 yi ai x 2 2 , s.t. x 0 K, x 2 ω. (4.8) For our analysis, we assume the following on the design matrix and the noise: (A3) a1, a2, . . . , an are independent and identically distributed (i.i.d.) Gaussian random vectors N(0, Σ). All the diagonal elements of Σ satisfy Σjj 1. The noise ε is independent of A and its entries are i.i.d. Gaussian random variables N(0, σ2). Proposition 18 Consider the sparse linear regression model (4.7) and the program (4.8). Assume (A3). Then for a sparsity level r, with probability at least 1 exp( C0n), αr = λmin(Σ) C1 r log d n , L r = λmax(Σ) + C2 r log d with probability at least 1 C3r/d Lr = C4r log d; and with probability at least 1 C5/d r F(x ) 2 C6σ n , r F(xopt) 2 L r xopt x 2 + C6σ Above, λmin(Σ) and λmax(Σ) are the minimum and maximum singular values of Σ respectively. We recall that αr and Lr are involved in our assumptions (A1) and (A2), and L r is the RSS parameter of F(x). The estimation for αr, L r and r F(x ) 2 follows from standard results in the literature (Raskutti et al., 2011), while that for Lr follows from Proposition E.1 in Bellec et al. (2016) by noting the fact that bounding Lr amounts to estimating maxi Hr(ai) 2 2. In order to estimate r F(xopt) 2, notice that r F(xopt) 2 r F(xopt) r F(x ) 2 + r F(x ) 2 F(xopt) F(x ) 2 + r F(x ) 2 L r xopt x 2 + r F(x ) 2 , where we use the definition of RSS in the last inequality. Now we let r = 3k + K = const c2K and get α = λmin(Σ) C1 c2K log d n , L = C4c2K log d. Suppose that λmin(Σ) = 2C4(K log d)2 and n = q C1 C4 K log d with q 1. Then our assumptions (A1) and (A2) are met with high probability with α = C4(K log d)2, L = C4(K log d)3, and c = K log d. A TIGHT BOUND OF HARD THRESHOLDING For Corollary 15, as far as F(ex0) F(xopt) , n = C7 (ωσ)2 ϵ 2K log d, E F(exs) F(xopt) ϵ + λmax(Σ) λmin(Σ) xopt x 2 + λmax(Σ) λmin(Σ) xopt x 2 for some accuracy parameter ϵ > 0. This suggests that it is possible for HT-SVRG to approximate a global optimum of (4.1) up to xopt x 2, namely the statistical precision of the problem. Returning to Corollary 16, to guarantee that E exs x 2 ϵ, it suffices to pick s C8 log(ω c /ϵ), n = C8(ωσ)2ϵ 4K log d. Finally, we compare the computational cost to PGD. It is not hard to see that under the same situation λmin(Σ) = 2C4(K log d)2 and n = C1 C4 K log d, L = C4(K log d)3, c = K log d, provided that λmax(Σ) = C4(K log d)3 C2C4 C1 (K log d)2. Thus c < n(c 1), i.e., HT-SVRG is more efficient than PGD. It is also possible to consider other regimes of the covariance matrix and the sample size, though we do not pursue it here. 4.3.2 SPARSE LOGISTIC REGRESSION For sparse logistic regression, the observation model is given by Pr(yi | ai; x ) = 1 1 + exp( yiai x ), x 0 K, x 2 ω, 1 i n, (4.9) where yi is either 0 or 1. It then learns the parameter by minimizing the negative log-likelihood: min x F(x) := 1 i=1 log (1 + exp( yiai x)) , s.t. x 0 K, x 2 ω. (4.10) There is a large body of work showing that the statistical property is rather analogous to that of linear regression. See, for example, Negahban et al. (2009). In fact, the statistical results apply to generalized linear models as well. 4.4 A Concurrent Work After we posted the first version Shen and Li (2016) on ar Xiv, Li et al. (2016) made their work public where a similar algorithm to HT-SVRG was presented. 
Their theoretical analysis applies to convex objective functions while we allow the function F(x) to be non-convex. We also fully characterize the convergence behavior of the algorithm by showing the trade-off between the sparsity parameter k and the convergence coefficient β (Proposition 14). SHEN AND LI 5. Experiments In this section, we present a comprehensive empirical study for the proposed HT-SVRG algorithm on two tasks: sparse recovery (compressed sensing) and image classification. The experiments on sparse recovery is dedicated to verifying the theoretical results we presented, and we visualize the classification models learned by HT-SVRG to demonstrate the practical efficacy. 5.1 Sparse Recovery To understand the practical behavior of our algorithm as well as to justify the theoretical analysis, we perform experiments on synthetic data. The experimental settings are as follows: Data Generation. The data dimension d is fixed as 256 and we generate an n d Gaussian random sensing matrix A whose entries are i.i.d. with zero mean and variance 1/n. Then 1000 K-sparse signals x are independently generated, where the support of each signal is uniformly chosen. That is, we run our algorithm and the baselines for 1000 trials. The measurements y for each signal x is obtained by y = Ax which is noise free. In this way, we are able to study the convergence rate by plotting the logarithm of the objective value since the optimal objective value is known to be zero. Baselines. We mainly compare with two closely related algorithms: IHT and PGD. Both of them compute the full gradient of the least-squares loss followed by hard thresholding. Yet, PGD is more general, in the sense that it allows the sparsity parameter k to be larger than the true sparsity K (k = K for IHT) and also considers a flexible step size η (η = 1 for IHT). Hence, PGD can be viewed as a batch counterpart to our method HT-SVRG. Evaluation Metric. We say a signal x is successfully recovered by a solution x if x 2 < 10 3. In this way, we can compute the percentage of success over the 1000 trials for each algorithm. Hyper-Parameters. If not specified, we use m = 3n, k = 9K, and S = 10000 for HTSVRG. We also use the heuristic step size η = 2/svds(AA ) for HT-SVRG and PGD, where svds(AA ) returns the largest singular value of the matrix AA . Since for each stage, HT-SVRG computes the full gradient for (2m/n + 1) times, we run the IHT and PGD for (2m/n + 1)S iterations for fair comparison, i.e., all of the algorithms have the same number of full gradient evaluations. 5.1.1 PHASE TRANSITION Our first simulation aims at offering a big picture on the recovery performance. To this end, we vary the number of measurements n from 1 to 256, roughly with a step size 8. We also study the performance with respect to the true sparsity parameter K, which ranges from 1 to 26, roughly with step size 2. The results are illustrated in Figure 1, where a brighter block means a higher percentage of success and the brightest ones indicate exact sparse recovery. It is apparent that PGD and HTSVRG require fewer measurements for an accurate recovery than IHT, possibly due to the flexibility in choosing the sparsity parameter and the step size. We also observe that as a stochastic algorithm, A TIGHT BOUND OF HARD THRESHOLDING HT-SVRG performs comparably to PGD. This suggests that HT-SVRG is an appealing solution to large-scale sparse learning problems in that HT-SVRG is computationally more efficient. 
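The recovery protocol described above (1000 random trials, success declared when the ℓ2 error falls below 10⁻³) can be expressed as a small driver like the one below; the solver callback and the function name are ours.

```python
import numpy as np

def recovery_success_rate(solver, n, d=256, K=4, trials=1000, tol=1e-3, seed=0):
    """Percentage of exactly recovered K-sparse signals, following Section 5.1.

    solver(A, y, K) must return an estimate of the signal; a trial counts as a
    success when the l2 error is below tol.  Illustrative driver only.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, d))
        x = np.zeros(d)
        x[rng.choice(d, size=K, replace=False)] = rng.normal(size=K)
        y = A @ x                                   # noise-free measurements
        hits += np.linalg.norm(solver(A, y, K) - x) < tol
    return 100.0 * hits / trials
```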
#Measurements (n) Sparsity (K) 1 24 72 136 200 256 #Measurements (n) Sparsity (K) 1 24 72 136 200 256 #Measurements (n) Sparsity (K) 1 24 72 136 200 256 Figure 1: Percentage of successful recovery under various sparsity and sample size. The values range from 0 to 100, where a brighter color means a higher percentage of success (the brightest blocks correspond to the value of 100). PGD admits a higher percentage of recovery compared to IHT because it flexibly chooses the step size and sparsity parameter. As a stochastic variant, HT-SVRG performs comparably to the batch counterpart PGD. In Figure 2, we exemplify some of the results obtained from HT-SVRG by plotting two kinds of curves: the success of percentage against the sample size n and that against the signal sparsity K. In this way, one can examine the detailed values and can determine the minimum sample size for a particular sparsity. For instance, the left panel tells that to ensure that 80% percents of the 16-sparse signals are recovered, we have to collect 175 measurements. We can also learn from the right panel that using 232 measurements, any signal whose sparsity is 22 or less can be reliably recovered. 1 50 100 150 200 256 0 #Measurements (n) Percentage of success (%) 1 4 8 12 16 20 26 0 Sparsity (K) Percentage of success (%) Figure 2: Percentage of success of HT-SVRG against the number of measurements (left) and the sparsity (right). Based on the results in Figure 1 and Figure 2, we have an approximate estimation on the minimum requirement of the sample size which ensures accurate (or exact) recovery. Now we are to investigate how many measurements are needed to guarantee a success percentage of 95% and 99%. To this end, for each signal sparsity K, we look for the number of measurements n0 from Figure 1 where 90 percents of success are achieved. Then we carefully enlarge n0 with step size 1 and run the algorithms. The empirical results are recorded in Figure 3, where the circle markers represent the empirical results with different colors indicating different algorithms, e.g., red circle for empirical observation of HT-SVRG. Then we fit these empirical results by linear regression, which are SHEN AND LI plotted as solid or dashed lines. For example, the green line is a fitted model for IHT. We find that n is almost linear with K. Especially, the curve of HT-SVRG is nearly on top of that of PGD, which again verifies HT-SVRG is an attractive alternative to the batch method. 1 5 10 15 20 25 30 10 Sparsity (K) #Measurements (n) HT SVRG regression HT SVRG empirical PGD regression PGD empirical IHT regression IHT empirical 95% Success n = 1.7K*log256 + 33 1 5 10 15 20 25 30 10 Sparsity (K) #Measurements (n) HT SVRG regression HT SVRG empirical PGD regression PGD empirical IHT regression IHT empirical 99% Success n = 1.7K*log256 + 40 Figure 3: Minimum number of measurements to achieve 95% and 99% percentage of success. Red equation indicates the linear regression of HT-SVRG. The markers and curves for HT-SVRG are almost on top of PGD, which again justifies that HT-SVRG is an appealing stochastic alternative to the batch method PGD. 5.1.2 INFLUENCE OF HYPER-PARAMETERS Next, we turn to investigate the influence of the hyper-parameters, i.e., the sparsity parameter k, update frequency m and step size η on the convergence behavior of HT-SVRG. We set the true sparsity K = 4 and collect 100 measurements for each groundtruth signal, i.e., n = 100. 
Note that the standard setting we employ is k = 9K = 36, m = 3n = 300 and η = 2/svds(AA^⊤) ≈ 0.3. Each time we vary one of these parameters while fixing the other two, and the results are plotted in Figure 4. We point out that although the convergence result (Theorem 10) is deterministic, the vanishing optimization error (Proposition 18) is guaranteed under a probabilistic argument. Hence, it is possible that for a specific configuration of parameters, 97% of the signals are exactly recovered but HT-SVRG fails on the remainder, as we have observed in, e.g., Figure 2. Clearly, we are not supposed to average all the results to examine the convergence rate. For our purpose, we set a threshold of 95%; that is, we average over the successful trials if more than 95% of the signals are exactly recovered. Otherwise, we say that the set of parameters cannot ensure convergence, and we average over the failure trials, which gives an illustration of divergence. The left panel of Figure 4 verifies the condition that k has to be larger than K, while the second panel shows that the update frequency m can be reasonably small at the price of a slower convergence rate. Finally, the empirical study demonstrates that our heuristic choice η = 0.3 works well, and when η > 3, the objective value exceeds 10^120 within 3 stages (which cannot be depicted in the figure). For very small step sizes, we plot the convergence curve by gradually enlarging the update frequency m in Figure 5. The empirical results agree with Theorem 10 in that for any 0 < η < 1/(4L), HT-SVRG converges as soon as m is large enough.

Figure 4: Convergence of HT-SVRG with different parameters (objective value on a log scale against the stage count). We have 100 measurements for the 256-dimensional signal where only 4 elements are non-zero. The standard setting is k = 36, m = 300 and η = 0.3. Left: If the sparsity parameter k is not large enough, HT-SVRG will not recover the signal. Middle: A small m leads to frequent full gradient evaluations and hence slow convergence. Right: We observe divergence when η ≥ 3.

Figure 5: Convergence behavior under small step sizes (objective value on a log scale against the stage count, including m = 600 and m = 2000). We observe that as long as we pick a sufficiently large value for m, HT-SVRG always converges. This is not surprising, since our theorem guarantees that for any η < 1/(4L), HT-SVRG will converge if m is large enough. Also note that the geometric convergence rate is observed after a certain number of iterations, e.g., for η = 3 × 10⁻⁵, the log(error) decreases linearly after 20 thousand iterations.

5.2 Classification

In addition to the application of sparse recovery, we illustrate that HT-SVRG can deal with binary classification by minimizing the sparse logistic regression problem (4.10). Here, we study the performance on the realistic image dataset MNIST¹, consisting of 60 thousand training samples and 10 thousand samples for testing. Each image is of size 28-by-28 and contains one digit, hence there are 10 classes in total. Some of the images are shown in Figure 6. The update frequency m is fixed as m = 3n. We compute the heuristic step size η as in the previous section, i.e., η = 2/svds(AA^⊤) ≈ 10⁻³.
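To make the procedure concrete, the following is a minimal sketch of one HT-SVRG stage for a single binary task, assuming the usual logistic loss for problem (4.10); the function names and the omission of the ℓ2-ball projection used in the analysis are our simplifications, not the paper's reference implementation.

    import numpy as np

    def hard_threshold(b, k):
        """Keep the k largest-magnitude entries of b, zero out the rest."""
        out = np.zeros_like(b)
        idx = np.argsort(np.abs(b))[-k:]
        out[idx] = b[idx]
        return out

    def logistic_grad(w, a, y):
        """Gradient of log(1 + exp(-y * <a, w>)) for a single sample, with y in {-1, +1}."""
        return (-y * a) / (1.0 + np.exp(y * (a @ w)))

    def ht_svrg_stage(w_tilde, X, y, k, eta, m, rng):
        """One stage of HT-SVRG: take a full-gradient snapshot, then run m
        variance-reduced stochastic steps, each followed by hard thresholding."""
        n = X.shape[0]
        mu = sum(logistic_grad(w_tilde, X[i], y[i]) for i in range(n)) / n  # snapshot gradient
        w = w_tilde.copy()
        inner_iterates = []
        for _ in range(m):
            i = rng.integers(n)
            v = logistic_grad(w, X[i], y[i]) - logistic_grad(w_tilde, X[i], y[i]) + mu
            w = hard_threshold(w - eta * v, k)
            inner_iterates.append(w)
        # The analysis takes the next snapshot as a uniformly chosen inner iterate.
        return inner_iterates[rng.integers(m)]

With m = 3n and the heuristic η above, running S such stages corresponds to roughly (2m/n + 1)S full-gradient equivalents of work, matching the accounting used in Section 5.1.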
Since for the real-world dataset the true sparsity is actually unknown, we tune the sparsity parameter k and study the performance of the algorithm. First, we visualize five pair-wise models learned by HT-SVRG in Figure 7, where each row is associated with a binary classification task indicated by the two digits at the beginning of the row, and the subsequent red-blue figures illustrate the learned models under different sparsity parameters. For example, the third colorful figure depicted in the second row corresponds to recognizing whether a digit is 1 or 7, with the sparsity k = 30. In particular, for each pair, we label the smaller digit as positive and the larger one as negative, and the blue and red pixels are the weights with positive and negative values, respectively. Apparently, the models we learn are discriminative.

1. http://yann.lecun.com/exdb/mnist/

Figure 6: Sample images in the MNIST database.

Figure 7: Visualization of the models. We visualize 5 models learned by HT-SVRG under different choices of sparsity shown at the top of each column (k = 10, 20, 30, 40, 50, 60, 80, 100, 120, 150). Note that the feature dimension is 784. From the top row to the bottom row, we illustrate the models of 0 vs 9, 1 vs 7, 2 vs 3, 4 vs 5 and 6 vs 8, where for each pair we label the smaller digit as positive and the larger one as negative. The red color represents negative weights while the blue pixels correspond to positive weights.

We also quantitatively show the convergence and prediction accuracy curves in Figure 8. Note that here the y-axis is the objective value F(x̃^s) rather than log(F(x̃^s) − F(x_opt)), due to the fact that computing the exact optimum of (4.10) is NP-hard. Generally speaking, HT-SVRG converges quite fast and usually attains the minimum objective value within 20 stages. It is not surprising to see that choosing a large value for the sparsity leads to a better (lower) objective value. However, in practice a small value of the sparsity, e.g., k = 70, facilitates efficient computation while still sufficing to ensure fast convergence and accurate prediction.

Figure 8: Quantitative results on convergence and accuracy. The first 5 figures demonstrate the convergence behavior of HT-SVRG for each binary classification task, where curves with different colors represent the objective value against the number of stages under different sparsity k (e.g., k = 20, 40, 60). Generally speaking, HT-SVRG converges within 20 stages, which is a very fast rate. The last figure reflects the classification accuracy against the sparsity for all 5 classification tasks (0 vs 9, 1 vs 7, 2 vs 3, 4 vs 5, 6 vs 8), where we find that a moderate choice, e.g., k = 70, already guarantees accurate prediction (we recall that the dimension is 784).

6. Conclusion and Open Problems

In this paper, we have provided a tight bound on the deviation resulting from the hard thresholding operator, which underlies a vast volume of algorithms developed for sparsity-constrained problems. Our derived bound is universal over all choices of parameters, and we have proved that it cannot be improved without further information on the signals.
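For ease of reference, the bound in question (established in Theorem 1; its derivation is given in Appendix B.1 below) states that for any vector b and any K-sparse signal x,

‖H_k(b) − x‖₂² ≤ ν ‖b − x‖₂²,  ν = 1 + (ρ + √((4 + ρ)ρ))/2,  ρ = min{K, d − k}/(k − K + min{K, d − k}),

where H_k(b) keeps the k largest absolute entries of b. In particular, for k ≥ K we have ρ ≤ 1 and hence ν ≤ (3 + √5)/2 ≈ 2.618, which is always smaller than the constant 4 given by the conventional bound.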
We have discussed the implications of our result for the compressed sensing and machine learning communities, and have demonstrated that the theoretical results of a number of popular algorithms in the literature can be advanced. In addition, we have devised a novel algorithm which tackles the problem of sparse learning in the large-scale setting. We have elaborated that our algorithm is guaranteed to produce a globally optimal solution for prevalent statistical models only when it is equipped with the tight bound, hence justifying that the conventional bound is not applicable in this challenging scenario.

There are several interesting open problems. The first question to ask is whether one can establish a sharp RIP condition or a sharp phase transition for hard thresholding based algorithms such as IHT and CoSaMP with the tight bound. Moreover, compared to the hard thresholded SGD method (Nguyen et al., 2014), HT-SVRG admits a vanishing optimization error. This poses the question of whether we are able to provably show the necessity of variance reduction for such a sparsity-constrained problem.

Acknowledgments

We would like to thank Jing Wang for insightful discussions since the early stage of this work. We also thank Martin Slawski for helpful discussions on the statistical precision of the problem, and thank Jian Wang for bringing the paper of Nguyen et al. (2014) to our attention. We appreciate Huan Xu's high-level comments on the work. Finally, we thank the anonymous reviewers for a careful check of our proofs and for their encouraging comments. The work was partially funded by NSF-Bigdata-1419210 and NSF-III-1360971.

Appendix A. Technical Lemmas

We present some useful lemmas that will be invoked by the subsequent analysis. The following is a characterization of the co-coercivity of the objective function F(x). A similar result was obtained in Nguyen et al. (2014), but we present a refined analysis which is essential for our purpose.

Lemma 19 For a given support set Ω, assume that the continuous function F(x) is L_{|Ω|}-RSS and is α_K-RSC for some sparsity level K. Then, for all vectors x and x′ with |supp(x − x′) ∪ Ω| ≤ K,

‖∇_Ω F(x′) − ∇_Ω F(x)‖₂² ≤ 2L_{|Ω|} [F(x′) − F(x) − ⟨∇F(x), x′ − x⟩].

Proof We define an auxiliary function G(w) := F(w) − ⟨∇F(x), w⟩. For all vectors w and w′, we have

‖∇G(w) − ∇G(w′)‖₂ = ‖∇F(w) − ∇F(w′)‖₂ ≤ L_{|supp(w − w′)|} ‖w − w′‖₂,

which is equivalent to

G(w) − G(w′) − ⟨∇G(w′), w − w′⟩ ≤ (L_r/2) ‖w − w′‖₂²,   (A.1)

where r := |supp(w − w′)|. On the other hand, due to the RSC property of F(x), we obtain

G(w) − G(x) = F(w) − F(x) − ⟨∇F(x), w − x⟩ ≥ (α_{|supp(w − x)|}/2) ‖w − x‖₂² ≥ 0,

provided that |supp(w − x)| ≤ K. For the given support set Ω, we pick w = x′ − (1/L_{|Ω|}) ∇_Ω G(x′). Clearly, for such a choice of w, we have supp(w − x) ⊆ supp(x − x′) ∪ Ω. Hence, by assuming that |supp(x − x′) ∪ Ω| is not larger than K, we get

G(x) ≤ G(x′ − (1/L_{|Ω|}) ∇_Ω G(x′)) ≤ G(x′) + ⟨∇G(x′), −(1/L_{|Ω|}) ∇_Ω G(x′)⟩ + (1/(2L_{|Ω|})) ‖∇_Ω G(x′)‖₂² = G(x′) − (1/(2L_{|Ω|})) ‖∇_Ω G(x′)‖₂²,

where the second inequality follows from (A.1). Now expanding ∇_Ω G(x′) and rearranging the terms gives the desired result.

Lemma 20 Consider the HT-SVRG algorithm for a fixed stage s. Let x̂ be the target sparse vector. Let Ω be a support set such that supp(x^{t−1}) ∪ supp(x̃) ∪ supp(x̂) ⊆ Ω. Put r = |Ω|. Assume (A2). For all 1 ≤ t ≤ m, denote v^t = ∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̃) + μ̃. Then we have the following:

E_{i_t|x^{t−1}}[‖P_Ω(v^t)‖₂²] ≤ 4L_r [F(x^{t−1}) − F(x̂)] + 4L_r [F(x̃) − F(x̂)] − 4L_r ⟨∇F(x̂), x^{t−1} + x̃ − 2x̂⟩ + 4 ‖P_Ω(∇F(x̂))‖₂².
Proof We have

‖P_Ω(v^t)‖₂² = ‖P_Ω(∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̃) + μ̃)‖₂²
≤ 2‖P_Ω(∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̂))‖₂² + 2‖P_Ω(∇f_{i_t}(x̃) − ∇f_{i_t}(x̂) − μ̃)‖₂²
= 2‖P_Ω(∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̂))‖₂² + 2‖P_Ω(∇f_{i_t}(x̃) − ∇f_{i_t}(x̂))‖₂² + 2‖P_Ω(μ̃)‖₂² − 4⟨P_Ω(∇f_{i_t}(x̃) − ∇f_{i_t}(x̂)), P_Ω(μ̃)⟩
=(ξ1) 2‖P_Ω(∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̂))‖₂² + 2‖P_Ω(∇f_{i_t}(x̃) − ∇f_{i_t}(x̂))‖₂² + 2‖P_Ω(μ̃)‖₂² − 4⟨∇f_{i_t}(x̃) − ∇f_{i_t}(x̂), P_Ω(μ̃)⟩
≤(ξ2) 4L_r [f_{i_t}(x^{t−1}) − f_{i_t}(x̂) − ⟨∇f_{i_t}(x̂), x^{t−1} − x̂⟩] + 4L_r [f_{i_t}(x̃) − f_{i_t}(x̂) − ⟨∇f_{i_t}(x̂), x̃ − x̂⟩] + 2‖P_Ω(μ̃)‖₂² − 4⟨∇f_{i_t}(x̃) − ∇f_{i_t}(x̂), P_Ω(μ̃)⟩,

where ξ1 is by algebra and ξ2 applies Lemma 19 and the fact that |Ω| = r. Taking the conditional expectation, we obtain the following:

E_{i_t|x^{t−1}}[‖P_Ω(v^t)‖₂²]
≤ 4L_r [F(x^{t−1}) − F(x̂)] + 4L_r [F(x̃) − F(x̂)] − 4L_r ⟨∇F(x̂), x^{t−1} + x̃ − 2x̂⟩ + 2⟨2P_Ω(∇F(x̂)) − P_Ω(μ̃), P_Ω(μ̃)⟩
= 4L_r [F(x^{t−1}) − F(x̂)] + 4L_r [F(x̃) − F(x̂)] − 4L_r ⟨∇F(x̂), x^{t−1} + x̃ − 2x̂⟩ + ‖2P_Ω(∇F(x̂))‖₂² − ‖2P_Ω(∇F(x̂)) − P_Ω(μ̃)‖₂² − ‖P_Ω(μ̃)‖₂²
≤ 4L_r [F(x^{t−1}) − F(x̂)] + 4L_r [F(x̃) − F(x̂)] − 4L_r ⟨∇F(x̂), x^{t−1} + x̃ − 2x̂⟩ + 4‖P_Ω(∇F(x̂))‖₂².

The proof is complete.

Corollary 21 Assume the same conditions as in Lemma 20. If ∇F(x̂) = 0, we have

E_{i_t|x^{t−1}}[‖P_Ω(v^t)‖₂²] ≤ 4L_r [F(x^{t−1}) + F(x̃) − 2F(x̂)].

Appendix B. Proofs for Section 2

B.1 Proof of Theorem 1

Proof The result is trivially true when b is the zero vector. In the following, we assume that b is not the zero vector. Denote w := H_k(b). Let Ω be the support set of w and let Ω̄ be its complement. We immediately have P_Ω(b) = w. Let Ω′ be the support set of x. For the sake of simplicity, let us split the vector b as follows:

b₁ = P_{Ω\Ω′}(b),  b₂ = P_{Ω∩Ω′}(b),  b₃ = P_{Ω̄\Ω′}(b),  b₄ = P_{Ω̄∩Ω′}(b).

Likewise, we denote

w₁ = P_{Ω\Ω′}(w),  w₂ = P_{Ω∩Ω′}(w),  w₃ = P_{Ω̄\Ω′}(w) = 0,  w₄ = P_{Ω̄∩Ω′}(w) = 0,
x₁ = P_{Ω\Ω′}(x) = 0,  x₂ = P_{Ω∩Ω′}(x),  x₃ = P_{Ω̄\Ω′}(x) = 0,  x₄ = P_{Ω̄∩Ω′}(x).

Due to the hard thresholding, we have w₁ = b₁ and w₂ = b₂. In this way, by simple algebra we have

‖w − x‖₂² = ‖b₁‖₂² + ‖b₂ − x₂‖₂² + ‖x₄‖₂²,
‖b − x‖₂² = ‖b₁‖₂² + ‖b₂ − x₂‖₂² + ‖b₃‖₂² + ‖b₄ − x₄‖₂².

Our goal is to estimate the maximum of ‖w − x‖₂² / ‖b − x‖₂². It is easy to show that when attaining the maximum value, ‖b₃‖₂ must be zero, since otherwise one may decrease this term to make the objective larger. Hence, maximizing ‖w − x‖₂² / ‖b − x‖₂² amounts to estimating the upper bound of the following over all choices of x and b:

γ := (‖b₁‖₂² + ‖b₂ − x₂‖₂² + ‖x₄‖₂²) / (‖b₁‖₂² + ‖b₂ − x₂‖₂² + ‖b₄ − x₄‖₂²).   (B.1)

Firstly, we consider the case ‖b₁‖₂ = 0, which means Ω = Ω′, implying γ = 1. In the following, we consider ‖b₁‖₂ ≠ 0. In particular, we consider γ > 1 since we are interested in the maximum value of γ. Rearranging (B.1) we obtain

(γ − 1)‖b₂ − x₂‖₂² + γ‖b₄ − x₄‖₂² − ‖x₄‖₂² + (γ − 1)‖b₁‖₂² = 0.   (B.2)

Let us fix b and define the function

G(x₂, x₄) = (γ − 1)‖b₂ − x₂‖₂² + γ‖b₄ − x₄‖₂² − ‖x₄‖₂² + (γ − 1)‖b₁‖₂².

Thus, (B.2) indicates that G(x₂, x₄) can attain the objective value of zero. Note that G(x₂, x₄) is a quadratic function, and its gradient and Hessian matrix can be computed as follows:

∇_{x₂} G(x₂, x₄) = 2(γ − 1)(x₂ − b₂),  ∇_{x₄} G(x₂, x₄) = 2γ(x₄ − b₄) − 2x₄,  ∇²G(x₂, x₄) = 2(γ − 1)I,

where I is the identity matrix. Since the Hessian matrix is positive definite, G(x₂, x₄) attains its global minimum at the stationary point, which is given by

x₂* = b₂,  x₄* = (γ/(γ − 1)) b₄,

resulting in the minimum objective value

G(x₂*, x₄*) = −(γ/(γ − 1))‖b₄‖₂² + (γ − 1)‖b₁‖₂².

In order to guarantee that the feasible set of (B.2) is non-empty, we require that G(x₂*, x₄*) ≤ 0, implying

‖b₁‖₂² γ² − (2‖b₁‖₂² + ‖b₄‖₂²) γ + ‖b₁‖₂² ≤ 0.

Solving the above inequality with respect to γ, we obtain

γ ≤ 1 + (‖b₄‖₂² + √((4‖b₁‖₂² + ‖b₄‖₂²)‖b₄‖₂²)) / (2‖b₁‖₂²).   (B.3)
To derive an upper bound that is uniform over the choice of b, we recall that b₁ contains the largest absolute elements of b while b₄ has smaller values. In particular, the average value of b₄ is no greater than that of b₁ in magnitude, i.e.,

‖b₄‖₂² / ‖b₄‖₀ ≤ ‖b₁‖₂² / ‖b₁‖₀.

Note that ‖b₁‖₀ = k − ‖b₂‖₀ = k − (K − ‖b₄‖₀). Hence, combining with the fact that 0 ≤ ‖b₄‖₀ ≤ min{K, d − k} and optimizing over ‖b₄‖₀ gives

‖b₄‖₂² ≤ (min{K, d − k} / (k − K + min{K, d − k})) ‖b₁‖₂².

Plugging back into (B.3), we finally obtain

γ ≤ 1 + (ρ + √((4 + ρ)ρ))/2,  ρ = min{K, d − k}/(k − K + min{K, d − k}).

The proof is complete.

Appendix C. Proofs for Section 3

C.1 Proof of Theorem 6

We follow the proof pipeline of Blumensath and Davies (2009) and only remark on the differences between our proof and theirs, i.e., where Theorem 1 applies. To avoid possible confusion due to notation, we follow the symbols in Blumensath and Davies (2009); one may refer to that article for a complete proof. The first difference occurs in Eq. (22) of Blumensath and Davies (2009), where they reached

(Old) ‖x^s − x^{[n+1]}‖₂ ≤ 2 ‖x^s_{B^{n+1}} − a^{[n+1]}_{B^{n+1}}‖₂,

while Theorem 1 gives

(New) ‖x^s − x^{[n+1]}‖₂ ≤ √ν ‖x^s_{B^{n+1}} − a^{[n+1]}_{B^{n+1}}‖₂.

Combining this new inequality and Eq. (23) therein, we obtain

‖x^s − x^{[n+1]}‖₂ ≤ √ν ‖(I − Φ^⊤_{B^{n+1}} Φ_{B^{n+1}}) r^{[n]}_{B^{n+1}}‖₂ + √ν ‖(Φ^⊤_{B^{n+1}} Φ_{B^n \ B^{n+1}}) r^{[n]}_{B^n \ B^{n+1}}‖₂.

By noting the fact that |B^n ∪ B^{n+1}| ≤ 2s + s*, where s* denotes the sparsity of the global optimum, and following their reasoning of Eq. (24) and (25), we have a new bound for Eq. (26):

(New) ‖r^{[n+1]}‖₂ ≤ √(2ν) δ_{2s+s*} ‖r^{[n]}‖₂ + √((1 + δ_{s+s*})ν) ‖e‖₂.

Now our result follows by setting the coefficient of ‖r^{[n]}‖₂ to 0.5. Note that specifying ν = 4 gives the result of Blumensath and Davies (2009).

C.2 Proof of Theorem 7

We follow the proof technique of Theorem 6.27 in Foucart and Rauhut (2013), which gives the best known RIP condition for the CoSaMP algorithm to date. Since most of the reasoning is similar, we only point out the differences between our proof and theirs, i.e., where Theorem 1 applies. To avoid confusion in notation, we follow the symbols used in Foucart and Rauhut (2013); the reader may refer to that book for a complete proof. The first difference is in Eq. (6.49) of Foucart and Rauhut (2013). Note that to derive this inequality, Foucart and Rauhut invoked the conventional bound (1.1), which gives

(Old) ‖x_S − x^{n+1}‖₂² ≤ ‖(x_S − u^{n+1})_{Ū^{n+1}}‖₂² + 4 ‖(x_S − u^{n+1})_{U^{n+1}}‖₂²,

while utilizing Theorem 1 gives

(New) ‖x_S − x^{n+1}‖₂² ≤ ‖(x_S − u^{n+1})_{Ū^{n+1}}‖₂² + ν ‖(x_S − u^{n+1})_{U^{n+1}}‖₂².

Combining this new inequality with Eq. (6.50) and Eq. (6.51) therein yields a recursion of the form

‖x^{n+1} − x_S‖₂ ≤ ρ ‖x^n − x_S‖₂ + τ₁ ‖(A^∗e)_{(S ∪ S^n) \ T^{n+1}}‖₂ + τ₂ ‖(A^∗e)_{U^{n+1}}‖₂,

where ρ and τ₁ involve the factor √((1 + (ν − 1)δ²_{3s+s*}) / (1 − δ²_{3s+s*})), τ₂ = 2/(1 − δ_{3s+s*}), and s* denotes the sparsity of the optimum. Our new bound follows by setting the coefficient ρ of ‖x^n − x_S‖₂ to 0.5 and solving the resultant equation. Note that setting ν = 4 gives the old bound of Foucart and Rauhut.

Appendix D. Proofs for Section 4

D.1 Proof of Theorem 10

Proof Fix a stage s. Let us denote

v^t = ∇f_{i_t}(x^{t−1}) − ∇f_{i_t}(x̃) + μ̃,  b^t = x^{t−1} − ηv^t.

By specifying Ω = supp(x^{t−1}) ∪ supp(x^t) ∪ supp(x̃) ∪ supp(x̂), it follows that r^t = H_k(b^t) = H_k(P_Ω(b^t)). Thus, the Euclidean distance between x^t and x̂ can be bounded as follows:

‖x^t − x̂‖₂² ≤ ‖r^t − x̂‖₂² = ‖H_k(P_Ω(b^t)) − x̂‖₂² ≤ ν ‖P_Ω(b^t) − x̂‖₂²,   (D.1)

where the first inequality holds because x^t = Π_ω(r^t) and ‖x̂‖₂ ≤ ω. We also have

‖P_Ω(b^t) − x̂‖₂² = ‖x^{t−1} − x̂ − ηP_Ω(v^t)‖₂² = ‖x^{t−1} − x̂‖₂² + η²‖P_Ω(v^t)‖₂² − 2η⟨x^{t−1} − x̂, v^t⟩,

where the second equality uses the fact that ⟨x^{t−1} − x̂, P_Ω(v^t)⟩ = ⟨x^{t−1} − x̂, v^t⟩. The first term will be preserved for the mathematical induction.
The third term is easy to manipulate thanks to the unbiasedness of v^t. For the second term, we use Lemma 20 to bound it. Putting them together, conditioning on x^{t−1} and taking the expectation over i_t in (D.1), we have

E_{i_t|x^{t−1}}[‖x^t − x̂‖₂²]
≤(ξ1) ν‖x^{t−1} − x̂‖₂² + 4νη²L [F(x^{t−1}) − F(x̂) + F(x̃) − F(x̂)] − 2νη⟨x^{t−1} − x̂, ∇F(x^{t−1})⟩ − 4νη²L⟨∇F(x̂), x^{t−1} + x̃ − 2x̂⟩ + 4νη²‖P_Ω(∇F(x̂))‖₂²
≤(ξ2) ν(1 − ηα)‖x^{t−1} − x̂‖₂² − 2νη(1 − 2ηL)[F(x^{t−1}) − F(x̂)] + 4νη²L[F(x̃) − F(x̂)] + 4νη²L‖P_Ω(∇F(x̂))‖₂ ‖x^{t−1} + x̃ − 2x̂‖₂ + 4νη²‖P_Ω(∇F(x̂))‖₂²
≤ ν(1 − ηα)‖x^{t−1} − x̂‖₂² − 2νη(1 − 2ηL)[F(x^{t−1}) − F(x̂)] + 4νη²L[F(x̃) − F(x̂)] + 4νη²Q′(4Lω + Q′),

where ξ1 applies Lemma 20, ξ2 applies Assumption (A1), and we write Q′ := ‖∇_{3k+K}F(x̂)‖₂ for brevity. Now summing the inequalities over t = 1, 2, …, m, conditioning on x̃ and taking the expectation with respect to I_s = {i₁, i₂, …, i_m}, we have

E_{I_s|x̃}[‖x^m − x̂‖₂²]
≤ [ν(1 − ηα) − 1] Σ_{t=1}^{m} E_{I_s|x̃}[‖x^{t−1} − x̂‖₂²] + ‖x^0 − x̂‖₂² + 4νη²Q′(4Lω + Q′)m − 2νη(1 − 2ηL) Σ_{t=1}^{m} E_{I_s|x̃}[F(x^{t−1}) − F(x̂)] + 4νη²Lm [F(x̃) − F(x̂)]
= [ν(1 − ηα) − 1] m E_{I_s,j_s|x̃}[‖x̃^s − x̂‖₂²] + ‖x̃ − x̂‖₂² + 4νη²Q′(4Lω + Q′)m − 2νη(1 − 2ηL) m E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] + 4νη²Lm [F(x̃) − F(x̂)]
≤ [ν(1 − ηα) − 1] m E_{I_s,j_s|x̃}[‖x̃^s − x̂‖₂²] + (2/α + 4νη²Lm)[F(x̃) − F(x̂)] − 2νη(1 − 2ηL) m E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] + 4νη²Q′(4Lω + Q′)m + 2Q′ω/α,   (D.2)

where we recall that j_s is the randomly chosen index used to determine x̃^s (see Algorithm 1). The last inequality holds due to the RSC condition and ‖x^t‖₂ ≤ ω. For brevity, we write

Q := 4νη²Q′(4Lω + Q′)m + 2Q′ω/α,  Q′ = ‖∇_{3k+K}F(x̂)‖₂.

Based on (D.2), we discuss two cases to examine the convergence of the algorithm.

Case 1. ν(1 − ηα) ≤ 1. This immediately results in

E_{I_s|x̃}[‖x^m − x̂‖₂²] ≤ (2/α + 4νη²Lm)[F(x̃) − F(x̂)] − 2νη(1 − 2ηL) m E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] + Q,

which implies

νη(1 − 2ηL) m E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] ≤ (1/α + 2νη²Lm)[F(x̃) − F(x̂)] + Q/2.

Pick η such that

1 − 2ηL > 0.   (D.3)

Then

E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] ≤ [1/(νηα(1 − 2ηL)m) + 2ηL/(1 − 2ηL)] [F(x̃) − F(x̂)] + Q/(2νηα(1 − 2ηL)m).

To guarantee the convergence, we must impose

2ηL/(1 − 2ηL) < 1.   (D.4)

Putting (D.3), (D.4) and ν(1 − ηα) ≤ 1 together gives

η < 1/(4L),  ν ≤ 1/(1 − ηα).   (D.5)

The convergence coefficient here is

β = 1/(νηα(1 − 2ηL)m) + 2ηL/(1 − 2ηL).   (D.6)

Thus, we have

E[F(x̃^s) − F(x̂)] ≤ β^s [F(x̃^0) − F(x̂)] + Q/(2νηα(1 − 2ηL)(1 − β)m),

where the expectation is taken over {I₁, j₁, I₂, j₂, …, I_s, j_s}.

Case 2. ν(1 − ηα) > 1. In this case, (D.2) implies

E_{I_s|x̃}[‖x^m − x̂‖₂²] ≤ (2/α + 4νη²Lm)[F(x̃) − F(x̂)] + Q + ((2/α)[ν(1 − ηα) − 1]m − 2νη(1 − 2ηL)m) E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)].

Rearranging the terms gives

(2νηα − 2νη²αL − ν + 1) m E_{I_s,j_s|x̃}[F(x̃^s) − F(x̂)] ≤ (1 + 2νη²αLm)[F(x̃) − F(x̂)] + αQ/2.

To ensure the convergence, the minimum requirements are

2νηα − 2νη²αL − ν + 1 > 0,  2νηα − 2νη²αL − ν + 1 > 2νη²αL.

That is,

4ναLη² − 2ναη + ν − 1 < 0.

We need to guarantee that the feasible set of the above inequality is non-empty for the positive variable η. Thus, we require

4ν²α² − 16ναL(ν − 1) > 0,

which is equivalent to ν < 4L/(4L − α). Combining it with ν(1 − ηα) > 1 gives

1/(1 − ηα) < ν < 4L/(4L − α).

To ensure the above feasible set is non-empty, we impose 1/(1 − ηα) < 4L/(4L − α), that is,

η < 1/(4L),  1/(1 − ηα) < ν < 4L/(4L − α).   (D.7)

The convergence coefficient for this case is

β = 1/((2νηα − 2νη²αL − ν + 1)m) + 2νη²αL/(2νηα − 2νη²αL − ν + 1),   (D.8)

and

E[F(x̃^s) − F(x̂)] ≤ β^s [F(x̃^0) − F(x̂)] + αQ/(2(2νηα − 2νη²αL − ν + 1)(1 − β)m).

By combining (D.5) and (D.7), the minimum requirement for η and ν is

η < 1/(4L),  ν < 4L/(4L − α).

The proof is complete.

D.2 Proof of Corollary 16

Proof By noting the concavity of the square root function, we have

E[√(max{F(x̃^s) − F(x̂), 0})] ≤ √(E[max{F(x̃^s) − F(x̂), 0}]) ≤ √((2/3)^s max{F(x̃^0) − F(x̂), 0} + τ(x̂)).
Suppose that F(x) satisfies RSS with parameter L′ ∈ [α, L]. It follows that

F(x̃^0) − F(x̂) ≤ ⟨∇F(x̂), x̃^0 − x̂⟩ + (L′/2)‖x̃^0 − x̂‖₂² ≤ (1/(2L′))‖∇_{k+K}F(x̂)‖₂² + L′‖x̃^0 − x̂‖₂².

Recall that τ(x̂) is controlled, up to absolute constants, by (1/α)‖∇_{3k+K}F(x̂)‖₂ and (1/(αL))‖∇_{3k+K}F(x̂)‖₂². Hence, using √(a + b + c + d) ≤ √a + √b + √c + √d, we can bound E[√(max{F(x̃^s) − F(x̂), 0})] by a multiple of (2/3)^{s/2}‖x̃^0 − x̂‖₂ plus terms that depend only on α and ‖∇_{3k+K}F(x̂)‖₂. Finally, the RSC property immediately yields a bound of the same form on E‖x̃^s − x̂‖₂ (see, e.g., Lemma 20 in Shen and Li (2017b)). The proof is complete.

Appendix E. HT-SAGA

We demonstrate that the hard thresholding step can also be integrated into SAGA (Defazio et al., 2014), as shown in Algorithm 2. Note that the only difference between Algorithm 2 and the algorithm proposed in Defazio et al. (2014) is that we perform hard thresholding rather than the proximal operator. Hence, our algorithm guarantees a k-sparse solution.

Algorithm 2 SAGA with Hard Thresholding (HT-SAGA)
Require: The current iterate x^t, the stored gradient ∇f_i(φ_i^t) of each f_i at the end of iteration t, the step size η.
Ensure: The new iterate.
1: Pick j ∈ {1, 2, …, n} uniformly at random.
2: Take φ_j^{t+1} = x^t and store ∇f_j(φ_j^{t+1}) in the table. All other entries in the table remain unchanged.
3: Update the new iterate x^{t+1} as follows:
   b^{t+1} = x^t − η [∇f_j(φ_j^{t+1}) − ∇f_j(φ_j^t) + (1/n) Σ_{i=1}^{n} ∇f_i(φ_i^t)],
   x^{t+1} = H_k(b^{t+1}).

Theorem 22 Assume the same conditions as in Defazio et al. (2014). Further assume that the optimum of (4.1) without the sparsity constraint happens to be k-sparse. Then, the sequence of solutions produced by Algorithm 2 converges to the optimum at a geometric rate for some properly chosen sparsity parameter k.

Proof Define the Lyapunov function Z as follows:

Z^t := Z(x^t, {φ_i^t}) = (1/n) Σ_{i=1}^{n} f_i(φ_i^t) − F(x̂) − (1/n) Σ_{i=1}^{n} ⟨∇f_i(x̂), φ_i^t − x̂⟩ + c‖x^t − x̂‖₂².

We examine Z^{t+1}. We have

E[(1/n) Σ_i f_i(φ_i^{t+1})] = (1/n) F(x^t) + (1 − 1/n)(1/n) Σ_i f_i(φ_i^t),
E[(1/n) Σ_i ⟨∇f_i(x̂), φ_i^{t+1} − x̂⟩] = (1/n) ⟨∇F(x̂), x^t − x̂⟩ + (1 − 1/n)(1/n) Σ_i ⟨∇f_i(x̂), φ_i^t − x̂⟩.

For the last term, Theorem 1 gives

c‖x^{t+1} − x̂‖₂² ≤ cν‖b^{t+1} − x̂‖₂² = cν‖b^{t+1} − x̂ + η∇F(x̂)‖₂²,

where the equality uses ∇F(x̂) = 0. For this term, following the argument of Defazio et al. (2014), we have

cν E‖b^{t+1} − x̂ + η∇F(x̂)‖₂²
≤ cν(1 − ηα)‖x^t − x̂‖₂² + cν((1 + µ)η² − η/L) E‖∇f_j(x^t) − ∇f_j(x̂)‖₂² − 2cνη((L − α)/L − ηαµ)[F(x^t) − F(x̂) − ⟨∇F(x̂), x^t − x̂⟩] − cνη²µ‖∇F(x^t) − ∇F(x̂)‖₂² + 2cν(1 + µ⁻¹)η²L [(1/n) Σ_i f_i(φ_i^t) − F(x̂) − (1/n) Σ_i ⟨∇f_i(x̂), φ_i^t − x̂⟩].

Combining these estimates, E[Z^{t+1}] is bounded by (1 − 1/κ)Z^t plus terms whose coefficients, collected in round brackets, are c/κ − cνηα, 1/n − 2cνη(L − α)/L + 2cνη²αµ, and −1/κ + 2cν(1 + µ⁻¹)η²L. In order to guarantee the convergence, we choose proper values for η, c, κ, µ and ν such that the terms in round brackets are non-positive. That is, we require

c/κ − cνηα ≤ 0,  1/n − 2cνη(L − α)/L + 2cνη²αµ ≤ 0,  −1/κ + 2cν(1 + µ⁻¹)η²L ≤ 0.

By picking η = 1/(2(αn + L)) and µ = (2αn + L)/L, we fulfill the first two inequalities. Pick c = 1/(2η(1 − ηα)n). Then, by the last two inequalities, we require

1/(1 − ηα) ≤ ν ≤ (1 − ηα)L/(ηα(1 − ηα)Ln + 1).

On the other hand, by Theorem 1 we have ν > 1. Thus, we require

1 < ν ≤ (1 − ηα)L/(ηα(1 − ηα)Ln + 1).

By algebra, the above inequalities have a non-empty feasible set provided that

(6α² − 8α²L)n² + (14αL − α − 16αL²)n + 8L²(1 − L) < 0.

Due to α ≤ L, we know that n ≥ (224L³ + 1)/(2α(8L − 6)) suffices, where we assume L > 3/4. Picking ν = (1 − ηα)L/(ηα(1 − ηα)Ln + 1) completes the proof.

References

Radosław Adamczak, Alexander E. Litvak, Alain Pajor, and Nicole Tomczak-Jaegermann. Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling. Constructive Approximation, 34(1):61-88, 2011.

Alekh Agarwal, Sahand Negahban, and Martin J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452-2482, 2012.

Bubacarr Bah and Jared Tanner.
Improved bounds on restricted isometry constants for gaussian matrices. SIAM Journal on Matrix Analysis Applications, 31(5):2882 2898, 2010. Bubacarr Bah and Jared Tanner. Bounds of restricted isometry constants in extreme asymptotics: Formulae for Gaussian matrices. Linear Algebra and its Applications, 441:88 109, 2014. Sohail Bahmani, Bhiksha Raj, and Petros T. Boufounos. Greedy sparsity-constrained optimization. Journal of Machine Learning Research, 14(1):807 841, 2013. Richard Baraniuk, Mark Davenport, Ronald De Vore, and Michael B. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253 263, 2008. Pierre C. Bellec, Guillaume Lecu e, and Alexandre B. Tsybakov. Slope meets lasso: improved oracle bounds and optimality. Co RR, abs/1605.08651, 2016. Peter J. Bickel, Ya acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and dantzig selector. The Annals of Statistics, pages 1705 1732, 2009. Jeffrey D. Blanchard and Jared Tanner. Performance comparisons of greedy algorithms in compressed sensing. Numerical Linear Algebra with Applications, 22(2):254 282, 2015. Thomas Blumensath and Mike E. Davies. Iterative thresholding for sparse approximations. Journal of Fourier Analysis and Applications, 14(5-6):629 654, 2008. Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265 274, 2009. Tony T. Cai and Anru Zhang. Sharp RIP bound for sparse signal and low-rank matrix recovery. Applied and Computational Harmonic Analysis, 35(1):74 93, 2013. Tony T. Cai, Lie Wang, and Guangwu Xu. New bounds for restricted isometry constants. IEEE Transactions on Information Theory, 56(9):4388 4394, 2010. Emmanuel J. Cand es. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589 592, 2008. Emmanuel J. Cand es and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203 4215, 2005. A TIGHT BOUND OF HARD THRESHOLDING Emmanuel J. Cand es and Terence Tao. The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics, 35(6):2313 2351, 2007. Emmanuel J. Cand es and Michael B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21 30, 2008. Emmanuel J. Cand es, Justin K. Romberg, and Terence Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489 509, 2006. Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805 849, 2012. Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33 61, 1998. Wei Dai and Olgica Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230 2249, 2009. Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413 1457, 2004. Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. 
In Proceedings of the 28th Annual Conference on Neural Information Processing Systems, pages 1646 1654, 2014. David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289 1306, 2006. David L. Donoho and Jared Tanner. Precise undersampling theorems. Proceedings of the IEEE, 98 (6):913 924, 2010. David L. Donoho and Yaakov Tsaig. Fast solution of ℓ1-norm minimization problems when the solution may be sparse. IEEE Transactions on Information Theory, 54(11):4789 4812, 2008. David L. Donoho, Michael Elad, and Vladimir N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1): 6 18, 2006. David L. Donoho, Iain Johnstone, and Andrea Montanari. Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising. IEEE Transactions on Information Theory, 59(6):3396 3433, 2013. John C. Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899 2934, 2009. Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. The Annals of statistics, 32(2):407 499, 2004. SHEN AND LI Simon Foucart. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543 2563, 2011. Simon Foucart. Sparse recovery algorithms: Sufficient conditions in terms of restricted isometry constants. In Approximation Theory XIII: San Antonio 2010, pages 65 77. Springer, New York, NY, 2012. Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkh auser, 2013. Prateek Jain, Ambuj Tewari, and Inderjit S. Dhillon. Orthogonal matching pursuit with replacement. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pages 1215 1223, 2011. Prateek Jain, Ambuj Tewari, and Purushottam Kar. On iterative hard thresholding methods for highdimensional M-estimation. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems, pages 685 693, 2014. Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, pages 315 323, 2013. John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10:777 801, 2009. Ping Li, Cun-Hui Zhang, and Tong Zhang. Compressed counting meets compressed sensing. In Proceedings of The 27th Conference on Learning Theory, pages 1058 1077, 2014. Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt. Stochastic variance reduced optimization for nonconvex sparse learning. Co RR, abs/1605.02711, 2016. Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. The Annals of Statistics, 40(3):1637 1664, 2012. Po-Ling Loh and Martin J. Wainwright. Regularized m-estimators with nonconvexity: statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 16:559 616, 2015. Zongming Ma. Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772 801, 2013. Qun Mo. A sharp restricted isometry constant bound of orthogonal matching pursuit. Co RR, abs/1501.01708, 2015. Qun Mo and Yi Shen. A remark on the restricted isometry property in orthogonal matching pursuit. 
IEEE Transactions on Information Theory, 58(6):3654 3656, 2012. Deanna Needell and Joel A. Tropp. Co Sa MP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301 321, 2009. A TIGHT BOUND OF HARD THRESHOLDING Deanna Needell and Roman Vershynin. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE Journal of Selected Topics in Signal Processing, 4(2):310 316, 2010. Sahand Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, pages 1348 1356, 2009. Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Springer US, 2004. Nam H. Nguyen, Deanna Needell, and Tina Woolf. Linear convergence of stochastic iterative greedy algorithms with sparse constraints. Co RR, abs/1407.0088, 2014. Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision research, 37(23):3311 3325, 1997. Art Owen and Yi Zhou. Safe and effective importance sampling. Journal of the American Statistical Association, 95(449):135 143, 2000. Yagyensh C. Pati, Ramin Rezaiifar, and Perinkulam S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Conference Record of The 27th Asilomar Conference on Signals, Systems and Computers, pages 40 44, 1993. Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for highdimensional linear regression over ℓq-balls. IEEE Transactions on Information Theory, 57(10): 6976 6994, 2011. Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, pages 2672 2680, 2012. Jie Shen and Ping Li. A tight bound of hard thresholding. Co RR, abs/1605.01656, 2016. Jie Shen and Ping Li. On the iteration complexity of support recovery via hard thresholding pursuit. In Proceedings of the 34th International Conference on Machine Learning, pages 3115 3124, 2017a. Jie Shen and Ping Li. Partial hard thresholding: Towards A principled analysis of support recovery. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems, pages 3127 3137, 2017b. Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267 288, 1996. Joel A. Tropp. Greed is good: algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231 2242, 2004. Joel A. Tropp and Anna C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655 4666, 2007. SHEN AND LI Joel A. Tropp and Stephen J. Wright. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98(6):948 958, 2010. Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5): 2183 2202, 2009. Jian Wang and Byonghyo Shim. On the recovery limit of sparse signals using orthogonal matching pursuit. 
IEEE Transactions on Signal Processing, 60(9):4973 4976, 2012. Lin Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543 2596, 2010. Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057 2075, 2014. Xiao-Tong Yuan and Tong Zhang. Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14(1):899 925, 2013. Xiao-Tong Yuan, Ping Li, and Tong Zhang. Gradient hard thresholding pursuit. Journal of Machine Learning Research, 18(166):1 43, 2018. Tong Zhang. Sparse recovery with orthogonal matching pursuit under RIP. IEEE Transactions on Information Theory, 57(9):6215 6221, 2011.