Binary embeddings with structured hashed projections

Anna Choromanska1 ACHOROMA@CIMS.NYU.EDU
Krzysztof Choromanski1 KCHORO@GOOGLE.COM
Mariusz Bojarski MBOJARSKI@NVIDIA.COM
Tony Jebara JEBARA@CS.COLUMBIA.EDU
Sanjiv Kumar SANJIVK@GOOGLE.COM
Yann LeCun YANN@CS.NYU.EDU

1Equal contribution. Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).

We consider the hashing mechanism for constructing binary embeddings that involves pseudo-random projections followed by nonlinear (sign function) mappings. The pseudo-random projection is described by a matrix in which not all entries are independent random variables; instead, a fixed budget of randomness is distributed across the matrix. Such matrices can be efficiently stored in sub-quadratic or even linear space, reduce randomness usage (i.e., the number of required random values), and very often lead to computational speed-ups. We prove several theoretical results showing that projections via various structured matrices followed by nonlinear mappings accurately preserve the angular distance between input high-dimensional vectors. To the best of our knowledge, these are the first results that give theoretical grounding for the use of general structured matrices in the nonlinear setting. We empirically verify our theoretical findings and show how learning via structured hashed projections affects the performance of a neural network as well as a nearest neighbor classifier.

1. Introduction

The paradigm of binary embedding for data compression is the central focus of this paper. The paradigm has been studied in earlier works (see (Weiss et al., 2008), (Gong et al., 2013), (Plan & Vershynin, 2014), (Yi et al., 2015)); in particular, it was observed that by using linear projections and then applying the sign function as a nonlinear map, one does not completely lose the information about the angular distance between vectors; instead, that information can be approximately reconstructed from the Hamming distance between hashes.

In this paper we are interested in using pseudo-random projections via structured matrices in the linear projection phase. The pseudo-random projection is described by a matrix in which not all entries are independent random variables; instead, a fixed budget of randomness is distributed across the matrix. Thus such matrices can be efficiently stored in sub-quadratic or even linear space and reduce randomness usage. Moreover, using them often leads to computational speed-ups since they admit fast matrix-vector multiplication via the Fast Fourier Transform. We prove an extension of the Johnson-Lindenstrauss lemma (Sivakumar, 2002) for general pseudo-random structured projections followed by nonlinear mappings. We show that the angular distance between high-dimensional data vectors is approximately preserved in the hashed space. This result is also new compared to previous extensions of the Johnson-Lindenstrauss lemma (Hinrichs & Vybíral, 2011; Vybíral, 2011), which consider only a special case of our structured projections (namely circulant matrices) and do not consider the action of the nonlinear mapping at all. We give a theoretical explanation of an approach that so far had only been heuristically confirmed for some special structured matrices.
Our theoretical findings imply that many types of matrices, such as circulant or Toeplitz Gaussian matrices, can be used as a preprocessing step in neural networks. Structured matrices have been used before in different contexts in deep learning, see for example (Saxe et al., 2011; Mathieu et al., 2014; Sindhwani et al., 2015). Our theoretical results, however, extend to a more general class of matrices.

Our work has a primarily theoretical focus, but we also ask an empirical question: how does the action of a random projection followed by a nonlinear transformation influence learning? We focus on the deep learning setting, where the architecture contains completely random or pseudo-random structured layers that are not trained. Little is known from the theoretical point of view about these fast deep architectures, which achieve significant computational speed-ups and space usage reduction with little or no loss in performance (Saxe et al., 2011; Jarrett et al., 2009; Pinto et al., 2009; Pinto & Cox, 2010; Huang et al., 2006). The high-level intuition justifying the success of these approaches is that the performance of a deep learning system depends not only on learning but also on the intrinsic properties of the architecture. These findings coincide with the notion of high redundancy in network parametrization (Denil et al., 2013; Denton et al., 2014; Choromanska et al., 2015).

In this paper we consider a simple model of a fully-connected feed-forward neural network where the input layer is hashed by a structured pseudo-random projection followed by a point-wise nonlinear mapping. Thus the input is effectively hashed, and learning is conducted in the subsequent fully-connected layers that act in the hashed space. We empirically verify how the distortion introduced in the first layer by hashing (where we reduce the dimensionality of the data) affects the performance of the network (in the supervised learning setting). Finally, we show how our structured nonlinear embeddings can be used in the k-nn setting (Altman, 1992).

This article is organized as follows: Section 2 discusses related work, Section 3 explains the hashing mechanism, Section 4 provides theoretical results, Section 5 shows experimental results, and Section 6 concludes. The Supplement contains additional proofs and experimental results.

2. Related work

The idea of using random projections to facilitate learning with high-dimensional data stems from the early work on random projections (Dasgupta, 1999). This idea was subsequently successfully applied to both synthetic and real datasets (Dasgupta, 2000; Bingham & Mannila, 2001), and then adopted in a number of learning approaches such as random projection trees (Dasgupta & Freund, 2008), kernel and feature-selection techniques (Blum, 2006), clustering (Fern & Brodley, 2003), privacy-preserving machine learning (Liu et al., 2006; Choromanska et al., 2013), learning with large databases (Achlioptas, 2003), sparse learning settings (Li et al., 2006), and, more recently, deep learning (see (Saxe et al., 2011) for a convenient review of such approaches). Using linear projections with completely random Gaussian weights, instead of learned ones, was recently studied from both a theoretical and a practical point of view in (Giryes et al., 2015), but that work did not consider structured matrices, which are the central point of our interest since structured matrices can be stored much more efficiently.
Beyond applying methods that use random Gaussian matrix projections (Dasgupta, 1999; 2000; Giryes et al., 2015) and random binary matrix projections (Achlioptas, 2003), it is also possible to construct deterministic projections that preserve angles and distances (Jafarpour et al., 2009). In some sense these methods use structured matrices as well, yet they do not have the same projection efficiency as the circulant matrices and projections explored in this article. Our hybrid approach, where a fixed budget of randomness is distributed across the entire matrix in a structured way, enables us to take advantage of both the ability of a completely random projection to preserve information and the compactness that comes from the highly organized internal structure of the linear mapping.

This work studies the paradigm of constructing a binary embedding for data compression, where hashes are obtained by applying linear projections to the data followed by nonlinear (sign function) mappings. The point-wise nonlinearity was not considered in many previous works on structured matrices (Haupt et al., 2010; Rauhut et al., 2010; Krahmer et al., 2014; Yap et al., 2011) (moreover, these works consider a set of structured matrices that is a strict subset of the class of matrices considered here). Designing binary embeddings for high-dimensional data with low distortion is addressed in many recent works (Weiss et al., 2008; Gong et al., 2013; Yi et al., 2015; Raginsky & Lazebnik, 2009; Salakhutdinov & Hinton, 2009). In the context of our work, one of the recent articles (Yi et al., 2015) is especially important since the authors introduce a pipeline for constructing hashes with the use of structured matrices in the linear step, instead of completely random ones. They prove several theoretical results regarding the quality of the produced hash and extend some previous theoretical results (Jacques et al., 2011; Plan & Vershynin, 2014). Their pipeline is more complicated than ours, i.e., they first apply the Hadamard transformation and then a sequence of partial Toeplitz Gaussian matrices. Some general results (unbiasedness of the angular distance estimator) were also known for short hashing pipelines involving circulant matrices (Yu et al., 2014). These works do not provide guarantees regarding the concentration of the estimator around its mean, which is crucial for all practical applications. Our results for general structured matrices, which include circulant Gaussian matrices and the larger class of Toeplitz Gaussian matrices as special subclasses, provide such concentration guarantees, and thus establish a solid mathematical foundation for using various types of structured matrices in binary embeddings. In contrast to (Yi et al., 2015), we present our theoretical results for simpler hashing models (our hashing mechanism is explained in Section 3).

In the context of deep learning, using random network parametrization, where certain layers have random and untrained weights, often accelerates training. Introducing randomness to networks was explored for various architectures, for example feed-forward networks (Huang et al., 2006), convolutional networks (Jarrett et al., 2009; Saxe et al., 2011), and recurrent networks (Jaeger & Haas, 2004; White et al., 2004; Boedecker et al., 2009).
We also refer the reader to (Ganguli & Sompolinsky, 2012), where the authors describe how neural systems cope with the challenge of processing data in high dimensions and discuss random projections.

Hashing in neural networks, as considered in this paper, is a new direction of research. Very recently (see (Chen et al., 2015)) it was empirically shown that hashing in neural nets may achieve drastic reductions in model size with no significant loss of quality, by heavily exploiting the phenomenon of redundancy in neural nets. HashedNets, introduced in (Chen et al., 2015), do not come with any theoretical guarantees regarding the quality of the proposed hashing, in contrast to our work.

3. Hashing mechanism

In this section we explain the hashing mechanism for dimensionality reduction that we subsequently analyze.

3.1. Structured matrices

We first introduce the aforementioned family of structured matrices, which we call Ψ-regular matrices P. This is the key ingredient of the method.

Definition 3.1. M is a circulant Gaussian matrix if its first row is a sequence of independent Gaussian random variables taken from the distribution N(0, 1) and each subsequent row is obtained from the previous one by either only one-element left shifts or only one-element right shifts.

Definition 3.2. M is a Toeplitz Gaussian matrix if each of its descending diagonals from left to right is of the form (g, ..., g) for some g ∼ N(0, 1) and different descending diagonals are independent.

Remark 3.1. The circulant Gaussian matrix with right shifts is a special type of Toeplitz Gaussian matrix.

Assume that k is the size of the hash and n is the dimensionality of the data.

Definition 3.3. Let t be the size of the pool of independent random Gaussian variables {g_1, ..., g_t}, where each g_i ∼ N(0, 1). Assume that k ≤ n ≤ t ≤ kn. We take Ψ to be a natural number, i.e., Ψ ∈ ℕ. P is a Ψ-regular random matrix if it has the following form
$$P = \begin{pmatrix}
\sum_{l \in S_{1,1}} g_l & \cdots & \sum_{l \in S_{1,j}} g_l & \cdots & \sum_{l \in S_{1,n}} g_l \\
\vdots & & \vdots & & \vdots \\
\sum_{l \in S_{i,1}} g_l & \cdots & \sum_{l \in S_{i,j}} g_l & \cdots & \sum_{l \in S_{i,n}} g_l \\
\vdots & & \vdots & & \vdots \\
\sum_{l \in S_{k,1}} g_l & \cdots & \sum_{l \in S_{k,j}} g_l & \cdots & \sum_{l \in S_{k,n}} g_l
\end{pmatrix},$$
where S_{i,j} ⊆ {1, ..., t} for i ∈ {1, ..., k}, j ∈ {1, ..., n}, |S_{i,1}| = ... = |S_{i,n}| for i = 1, ..., k, S_{i,j_1} ∩ S_{i,j_2} = ∅ for i ∈ {1, ..., k}, j_1, j_2 ∈ {1, ..., n}, j_1 ≠ j_2, and furthermore the following holds: for any two different rows R_1, R_2 of P, the number of random variables g_l, l ∈ {1, ..., t}, such that g_l is in the intersection of some column with both R_1 and R_2 is at most Ψ.

Remark 3.2. Circulant Gaussian matrices and Toeplitz Gaussian matrices are special cases of 0-regular matrices. A Toeplitz Gaussian matrix is 0-regular with all subsets S_{i,j} being singletons.

In the experimental section of this paper we consider six different kinds of structured matrices, which are examples of general structured matrices covered by our theoretical analysis.
Those are:
- Circulant (see Definition 3.1),
- BinCirc - a matrix where the first row is partitioned into consecutive equal-length blocks of elements and each row is obtained by a right shift of the blocks of the first row,
- HalfShift - a matrix where each subsequent row is obtained from the previous one by swapping its halves and then performing a right shift by one,
- VerHorShift - a matrix obtained by the following two-phase procedure: first each row is obtained from the previous one by a right shift of a fixed length, and then in the obtained matrix each column is shifted up by a fixed number of elements,
- BinPerm - a matrix where the first row is partitioned into consecutive equal-length blocks of elements and each row is obtained as a random permutation of the blocks of the first row,
- Toeplitz (see Definition 3.2).

Remark 3.3. Computing hashes for the structured matrices Toeplitz, BinCirc, HalfShift, and VerHorShift can be done faster than in time O(nk) (e.g., for Toeplitz one can use FFT to reduce the computation to O(n log k)). Thus our structured approach leads to speed-ups, storage compression (since many structured matrices covered by our theoretical model can be stored in linear space), and a reduction in randomness usage. The goal of this paper is not to analyze fast matrix-vector product algorithms in detail, since that requires a separate paper. We do, however, point out that well-known fast matrix-vector product algorithms are among the key advantages of our structured approach.

3.2. Hashing methods

Let φ be a function satisfying lim_{x→∞} φ(x) = 1 and lim_{x→−∞} φ(x) = −1. We will consider two hashing methods, both of which consist of what we refer to as a preprocessing step followed by the actual hashing step, where the latter consists of a pseudo-random projection followed by a nonlinear (sign function) mapping. The first mechanism, which we call extended Ψ-regular hashing, applies first a random diagonal matrix R to the data point x, then the L2-normalized Hadamard matrix H, next another random diagonal matrix D, then the Ψ-regular projection matrix P_Ψ, and finally the function φ (applied point-wise). The overall scheme is presented below:
$$x \to Rx \to HRx \to DHRx \ (\text{preprocessing}) \to P_\Psi DHRx \to h(x) = \phi(P_\Psi DHRx) \ (\text{hashing})$$
The diagonal entries of the matrices R and D are chosen independently from the binary set {−1, 1}, each value being chosen with probability 1/2. We also propose a shorter pipeline, which we call short Ψ-regular hashing, where we skip the first random matrix R and the Hadamard matrix H, i.e., the overall pipeline is of the form:
$$x \to Dx \ (\text{preprocessing}) \to P_\Psi Dx \to h(x) = \phi(P_\Psi Dx) \ (\text{hashing})$$
The goal is to compute a good approximation of the angular distance between given vectors p, r from their compact hashed versions h(p), h(r). To achieve this goal we consider the L1-distances in the k-dimensional space of hashes. Let θ_{p,r} denote the angle between vectors p and r. We define the normalized approximate angle between p and r as:
$$\tilde{\theta}^{n}_{p,r} = \frac{1}{2k}\,\|h(p) - h(r)\|_{1}. \qquad (3)$$
In the next section we show that the normalized approximate angle between vectors p and r leads to a very precise estimation of the actual angle for φ(x) = sign(x) if the chosen parameter Ψ is not too large. Furthermore, we show an intriguing connection between theoretical guarantees regarding the quality of the produced hash and the chromatic number of a specific undirected graph encoding the structure of P.
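To make the mechanism concrete, the following is a minimal NumPy sketch (ours, for illustration; the paper's experiments used Torch7) of short Ψ-regular hashing with a Toeplitz Gaussian projection, together with the estimator of Equation (3) and an FFT-based circulant matrix-vector product illustrating the kind of speed-up mentioned in Remark 3.3. The function names and parameter values are our own. Note that np.sign returns 0 at 0, which differs from the convention φ(0) = 1 used in Section 4; for Gaussian projections this event has probability zero.

```python
import numpy as np

def toeplitz_gaussian(k, n, rng):
    """k x n Toeplitz Gaussian matrix (Definition 3.2): one N(0,1) variable per diagonal."""
    diag_pool = rng.standard_normal(k + n - 1)
    rows = np.arange(k)[:, None]
    cols = np.arange(n)[None, :]
    return diag_pool[cols - rows + k - 1]      # entry (i, j) depends only on j - i

def short_hash(x, P, d_signs):
    """Short Psi-regular hashing: x -> Dx -> P(Dx) -> sign (Section 3.2)."""
    return np.sign(P @ (d_signs * x))

def approx_angle(h_p, h_r):
    """Normalized approximate angle of Eq. (3); estimates theta_{p,r} / pi."""
    k = h_p.shape[0]
    return np.abs(h_p - h_r).sum() / (2.0 * k)

def circulant_matvec_fft(c, x):
    """Multiply the circulant matrix with first row c (right shifts) by x in O(n log n)."""
    return np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(x)).real

rng = np.random.default_rng(0)
n, k = 256, 64
P = toeplitz_gaussian(k, n, rng)
d_signs = rng.choice([-1.0, 1.0], size=n)      # diagonal of the random matrix D
p, r = rng.standard_normal(n), rng.standard_normal(n)

est = approx_angle(short_hash(p, P, d_signs), short_hash(r, P, d_signs))
true = np.arccos(p @ r / (np.linalg.norm(p) * np.linalg.norm(r))) / np.pi
print(f"estimated theta/pi = {est:.3f}, true theta/pi = {true:.3f}")

# Sanity check of the FFT route against the explicit circulant matrix.
c = rng.standard_normal(8)
C = np.stack([np.roll(c, i) for i in range(8)])  # row i is the first row shifted right i times
x = rng.standard_normal(8)
assert np.allclose(C @ x, circulant_matvec_fft(c, x))
```

The sketch stores the Toeplitz matrix explicitly for clarity; in practice only the pool of k + n − 1 Gaussians (and the diagonal of D) needs to be kept, which is where the linear-space storage claim comes from.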
For many of the structured matrices under consideration this graph is induced by an algebraic group operation defining the structure of P (for instance, for the circulant matrix the group is generated by a single shift and the underlying graph is a collection of pairwise disjoint cycles, thus its chromatic number is at most 3).

4. Theoretical results

4.1. Unbiasedness of the estimator

We are ready to provide theoretical guarantees regarding the quality of the produced hash. Our guarantees will be given for the sign function, i.e., for φ defined as φ(x) = 1 for x ≥ 0 and φ(x) = −1 for x < 0. Using this nonlinearity is important for preserving approximate information about the angle between vectors while filtering out the information about their lengths.

Figure 1: Two vectors p, r spanning the two-dimensional hyperplane H, with angular distance θ_{p,r} between them. We have l_{p,r} = (g_i D)^{H,⊥}. The line l_{p,r} divides θ_{p,r}, and thus g_i contributes to ‖h(p) − h(r)‖_1.

Figure 2: A setting similar to the one presented in Figure 1. The vector v represents the L2-normalized version of g_i and is perpendicular to the two-dimensional plane R. The intersection R ∩ H of that plane with the 2-dimensional plane H spanned by p, r is a line l_{p,r} that this time is outside U_{p,r}. Thus g_i does not contribute to ‖h(p) − h(r)‖_1.

We first show that θ̃^n_{p,r} is an unbiased estimator of θ_{p,r}/π, i.e., E(θ̃^n_{p,r}) = θ_{p,r}/π.

Lemma 4.1. Let M be a Ψ-regular hashing model (either extended or short) and let ‖p‖_2 = ‖r‖_2 = 1. Then θ̃^n_{p,r} is an unbiased estimator of θ_{p,r}/π, i.e., E(θ̃^n_{p,r}) = θ_{p,r}/π.

Let us first give a short sketch of the proof. Note that the value of a particular entry of the constructed hash depends only on the sign of the dot product between the corresponding Gaussian vector representing the row of the Ψ-regular matrix and the given vector. Fix two vectors p and r with angular distance θ. Note that the considered dot products (and thus also their signs) are preserved when, instead of taking the Gaussian vector representing the row, one takes its projection onto the linear space spanned by p and r. The Hamming distance between the hashes of p and r is built up by those entries for which one dot product is negative and the other one is positive. One can note that this happens exactly when the projected vector is inside a 2-dimensional cone covering angle 2θ. The last observation that completes the proof is that the projection of the Gaussian vector is isotropic (since it is also Gaussian), thus the probability that the two dot products have different signs is θ/π.

Proof. Note first that the ith row of the matrix P, call it g_i, is an n-dimensional Gaussian vector with mean 0 in which each element has variance σ_i^2 for σ_i^2 = |S_{i,1}| = ... = |S_{i,n}| (i = 1, ..., k). Thus, after applying the matrix D, the new vector g_i D is still Gaussian and has the same distribution. Let us consider first the short Ψ-regular hashing model. Fix some vectors p, r (without loss of generality we may assume that they are not collinear). Let H_{p,r}, called H for short, be the 2-dimensional hyperplane spanned by {p, r}. Denote by (g_i D)^H the projection of g_i D onto H and by (g_i D)^{H,⊥} the line in H perpendicular to (g_i D)^H. Let φ be the sign function. Note that the contribution to the L1-sum ‖h(p) − h(r)‖_1 comes from those g_i for which (g_i D)^{H,⊥} divides the angle between p and r (see Figure 1), i.e., from those g_i for which (g_i D)^H is inside the union U_{p,r} of two 2-dimensional cones bounded by the two lines in H perpendicular to p and r respectively.
If the angle is not divided (see Figure 2), then the two corresponding entries in the hash have the same value and thus do not contribute to the overall distance between the hashes. Observe that, from what we have just said, we can conclude that
$$\tilde{\theta}^{n}_{p,r} = \frac{X_1 + \dots + X_k}{k}, \quad \text{where } X_i = \begin{cases} 1 & \text{if } (g_i D)^{H} \in U_{p,r}, \\ 0 & \text{otherwise.} \end{cases} \qquad (4)$$
Now it suffices to note that the vector (g_i D)^H is a 2-dimensional Gaussian vector and thus its direction is uniformly distributed over all directions. Thus each X_i is nonzero with probability exactly θ_{p,r}/π and the lemma follows. For the extended Ψ-regular hashing model the analysis is very similar. The only difference is that the data is preprocessed by first applying the linear mapping HR. Both H and R are orthogonal matrices, thus their product is also an orthogonal matrix. Since orthogonal matrices do not change angular distances, the former analysis can be applied again, which completes the proof.

We next focus on the concentration of the random variable θ̃^n_{p,r} around its mean θ_{p,r}/π. We prove strong exponential concentration results for the extended Ψ-regular hashing method. Interestingly, the application of the Hadamard mechanism is not necessary, and it is possible to obtain concentration results, yet weaker than in the former case, also for short Ψ-regular hashing.

4.2. The P-chromatic number

The highly organized structure of the projection matrix P gives rise to an underlying undirected graph that encodes dependencies between different entries of P. More formally, let us fix two rows of P with indices 1 ≤ k_1 < k_2 ≤ k respectively. We define a graph G_P(k_1, k_2) as follows:
- V(G_P(k_1, k_2)) = {{j_1, j_2} : ∃ l ∈ {1, ..., t} s.t. g_l ∈ S_{k_1,j_1} ∩ S_{k_2,j_2}, j_1 ≠ j_2},
- there exists an edge between vertices {j_1, j_2} and {j_3, j_4} iff {j_1, j_2} ∩ {j_3, j_4} ≠ ∅.

The chromatic number χ(G) of a graph G is the minimal number of colors that can be used to color the vertices of the graph in such a way that no two adjacent vertices have the same color.

Definition 4.1. Let P be a Ψ-regular matrix. We define the P-chromatic number χ(P) as
$$\chi(P) = \max_{1 \leq k_1 < k_2 \leq k} \chi\big(G_P(k_1, k_2)\big).$$

The following theorems guarantee strong concentration of θ̃^n_{p,r} around its mean and therefore justify theoretically the effectiveness of the structured hashing method. Let us consider first the extended Ψ-regular hashing model.

Theorem 4.1. Consider the extended Ψ-regular hashing model M with t independent Gaussian random variables g_1, ..., g_t, each of distribution N(0, 1). Let N be the size of the dataset D. Denote by k the size of the hash and by n the dimensionality of the data. Let f(n) be an arbitrary positive function. Let θ_{p,r} be the angular distance between vectors p, r ∈ D. Then for a = o_n(1), ϵ > 0, t ≥ n and n large enough, the probability of failure
$$P_\epsilon = \mathbb{P}\Big[\Big|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\pi}\Big| \geq \epsilon\Big]$$
satisfies
$$P_\epsilon \leq 2^{4\chi(P)}\,k^2\,e^{-f^4(t)} + (1 - \Lambda), \quad \text{where } 1 - \Lambda = \frac{1}{\pi}\sum_{j \geq \epsilon k} \binom{k}{j}\mu^{j}(1-\mu)^{k-j} + 2e^{-\epsilon^2 k} \ \text{ and } \ \mu = 8\Big(\frac{a\,\chi(P)+\Psi}{f^2(n)} + \frac{f^2(n)}{n}\Big).$$

Note how the upper bound on the probability of failure P_ϵ depends on the P-chromatic number. The theorem above guarantees strong concentration of θ̃^n_{p,r} around its mean and therefore justifies theoretically the effectiveness of the structured hashing method. This becomes clearer below. As a corollary, we obtain the following result:

Corollary 4.1. Consider the extended Ψ-regular hashing model M. Assume that the projection matrix P is Toeplitz Gaussian. Let N, n, k be as above and denote by θ_{p,r} the angular distance between vectors p, r ∈ D. Then the following is true for n large enough:
$$\mathbb{P}\Big[\Big|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\pi}\Big| \geq \frac{1}{k^{1/3}}\Big] \leq e^{-\mathrm{poly}(n)} + k^2 e^{-n^{\frac{3}{10}}} + 3e^{-\frac{k^{1/3}}{2}}.$$
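As a quick empirical sanity check of Lemma 4.1 and of the concentration behavior described above, one can simulate the short Ψ-regular pipeline with a Toeplitz Gaussian matrix and compare the estimator of Equation (3) to θ_{p,r}/π. The NumPy sketch below is ours and uses illustrative parameters; it is a simulation aid, not the experimental setup of Section 5.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 256, 128, 2000

# Two unit vectors with a known angle theta between them.
theta = 1.0                                   # radians
p = np.zeros(n); p[0] = 1.0
r = np.zeros(n); r[0], r[1] = np.cos(theta), np.sin(theta)

estimates = np.empty(trials)
for t in range(trials):
    diag_pool = rng.standard_normal(k + n - 1)                 # Toeplitz Gaussian matrix
    P = diag_pool[np.arange(n)[None, :] - np.arange(k)[:, None] + k - 1]
    d_signs = rng.choice([-1.0, 1.0], size=n)                  # diagonal of D
    h_p = np.sign(P @ (d_signs * p))                           # short hashing pipeline
    h_r = np.sign(P @ (d_signs * r))
    estimates[t] = np.abs(h_p - h_r).sum() / (2.0 * k)         # Eq. (3)

print(f"mean estimate : {estimates.mean():.4f}  vs  theta/pi = {theta/np.pi:.4f}")
print(f"std deviation : {estimates.std():.4f}  (shrinks as the hash size k grows)")
```

The empirical mean should match θ/π closely (unbiasedness), and re-running with a larger k should visibly reduce the spread of the estimates, in line with the concentration results.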
Figure 4: The dependence of the upper bound on the variance of the normalized approximate angle θ̃^n_{p,r} on (left) the angle, when the size of the hash k is fixed (the upper bound scales as 1/k and is almost independent of θ_{p,r}), and (right) the size of the hash k, when the true angular distance θ_{p,r} is fixed (the upper bound converges to 0 as k → ∞).

Corollary 4.1 follows from Theorem 4.1 by taking a = n^{-1/3}, ϵ = k^{-1/3}, f(n) = n^p for a small enough constant p > 0, and noticing that every Toeplitz Gaussian matrix is 0-regular and the corresponding P-chromatic number χ(P) is at most 3. The e^{-poly(n)} term is related to the balancedness property. To clarify, the goal of multiplying by HR in the preprocessing step is to make each input vector balanced, or in other words to spread out the mass of the vector across all the dimensions in an approximately uniform way. This property is required to obtain the theoretical results (note also that it was unnecessary in the unstructured setting) and does not depend on the number of projected dimensions.

Let us now consider the short Ψ-regular hashing model. The theorem presented below is an application of Chebyshev's inequality preceded by a careful analysis of the variance of θ̃^n_{p,r}.

Theorem 4.2. Consider the short Ψ-regular hashing model M, where P is a Toeplitz Gaussian matrix. Denote by k the size of the hash. Let θ_{p,r} be the angular distance between vectors p, r ∈ D, where D is the dataset. Then the following is true for all p, r ∈ D:
$$\mathrm{Var}(\tilde{\theta}^{n}_{p,r}) \leq \frac{1}{k}\Big(\frac{\theta_{p,r}(\pi - \theta_{p,r})}{\pi^2} + \log(k)\Big),$$
and thus, by Chebyshev's inequality, for any c > 0 and p, r ∈ D:
$$\mathbb{P}\Big[\Big|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\pi}\Big| \geq c\Big] \leq \frac{\mathrm{Var}(\tilde{\theta}^{n}_{p,r})}{c^2}.$$

Figure 4 shows the dependence of the upper bound on the variance of the normalized approximate angle θ̃^n_{p,r} on, respectively, the true angular distance θ_{p,r} and the size of the hash k, when, respectively, k and θ_{p,r} are fixed. The additional log(k) term that appears in the theoretical results we obtained, and the non-linear (in k) variance decay of Figure 4 (right), are a consequence of the structured setting, where the quality of the nonlinear embedding is affected by the existence of dependencies between entries of the structured matrix.

5. Numerical experiments

In this section we demonstrate that all considered structured matrices achieve reasonable performance in comparison to fully random matrices. Specifically, we show: i) the dependence of the performance on the size of the hash and the reduction factor n/k for different structured matrices, and ii) the performance of different structured matrices when used with neural networks and a 1-NN classifier. The experiments confirm our novel theoretical results.

Figure 5: Fully-connected network with a randomized input layer (red edges correspond to the structured matrix); k < n. D is a random diagonal matrix with diagonal entries chosen independently from the binary set {−1, 1}, each value being chosen with probability 1/2, and P is a structured matrix. The figure should be viewed in color.

We performed experiments on the MNIST dataset downloaded from http://yann.lecun.com/exdb/mnist/. The data was preprocessed according to the short hashing scheme (preprocessing is discussed in Section 3; the extended hashing scheme gave results with no statistically significant difference) before being given to the input of the network. We first considered a simple model of a fully-connected feed-forward neural network with two hidden layers, where the first hidden layer had k units that use the sign non-linearity (we explored k ∈ {16, 32, 64, 128, 256, 512, 1024}), and the second hidden layer had 100 units that use the ReLU non-linearity.
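For concreteness, a minimal sketch of this kind of architecture (loosely following Figure 5) is given below. This is not the authors' Torch7 implementation; it is an illustrative PyTorch version in which the hashed first layer (a Toeplitz Gaussian projection, the random diagonal D, and the sign non-linearity) is stored in non-trainable buffers, so that only the subsequent layers receive gradients. All names and hyperparameter values are our own assumptions.

```python
import torch
import torch.nn as nn


def toeplitz_gaussian(k, n, generator=None):
    """k x n Toeplitz Gaussian matrix (Definition 3.2): constant along each diagonal."""
    diag_pool = torch.randn(k + n - 1, generator=generator)
    idx = torch.arange(n).unsqueeze(0) - torch.arange(k).unsqueeze(1) + k - 1
    return diag_pool[idx]


class HashedMLP(nn.Module):
    """Frozen structured hashing layer followed by trainable fully-connected layers."""

    def __init__(self, n=784, k=256, hidden=100, classes=10):
        super().__init__()
        g = torch.Generator().manual_seed(0)
        # Frozen hashing step h(x) = sign(P D x), kept as buffers so it is never trained.
        self.register_buffer("D", torch.randint(0, 2, (n,), generator=g).float() * 2 - 1)
        self.register_buffer("P", toeplitz_gaussian(k, n, generator=g))
        # Only these layers are trained (cf. the second hidden layer and the output layer).
        self.fc = nn.Linear(k, hidden)
        self.out = nn.Linear(hidden, classes)

    def forward(self, x):                            # x: (batch, n)
        h = torch.sign((self.D * x) @ self.P.t())    # hashed, dimensionality-reduced input
        return self.out(torch.relu(self.fc(h)))


model = HashedMLP()
logits = model(torch.randn(8, 784))                  # e.g. a batch of flattened MNIST images
print(logits.shape)                                  # torch.Size([8, 10])
```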
The size of the second hidden layer was chosen as follows. We first investigated the dependence of the test error on this size in the case when n = k and the inputs, instead of being randomly projected, are multiplied by the identity (this is equivalent to eliminating the first hidden layer). We then chose as the size the threshold below which the test performance rapidly deteriorated. The first hidden layer contains random untrained weights, and we only train the parameters of the second layer and the output layer. The network we consider is shown in Figure 5. Each experiment was initialized from a random set of parameters sampled uniformly within the unit cube and was repeated 1000 times. All networks were trained for 30 epochs using SGD (Bottou, 1998). We report the experiments with a constant learning rate (we also explored learning rate decay, but obtained similar results), where the learning rate was chosen from the set {0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1} to minimize the test error. The weights of the first hidden layer correspond to the entries of the preprocessed structured matrix. We explored seven kinds of random matrices (the first six are structured): Circulant, Toeplitz, HalfShift, VerHorShift, BinPerm, BinCirc, and Random (entries are independent and drawn from the Gaussian distribution N(0, 1)). All code was implemented in Torch7.

Figure 6a shows how the mean test error is affected by the size of the hash, and Figure 6b shows how the mean test error changes with the size of the reduction, where the size of the reduction is defined as the ratio n/k. In Table 2 in the Supplement we report both the mean and the standard deviation (std) of the test error across our experiments. Training results are reported in the Supplementary material. Baseline refers to the network with one hidden layer containing 100 hidden units, where all parameters are trained.

Experimental results show how the performance is affected by using structured hashed projections to reduce data dimensionality. Figure 6b and Table 2 in the Supplement show a close to linear dependence between the error and the size of the reduction. Simultaneously, this approach leads to computational savings and a reduction in memory storage, i.e., a reduction in the number of input weights for the hidden layer (for example, for the Circulant matrix this reduction is of the order O(n/k); the memory required for storing the Circulant matrix itself is negligible compared to the number of weights). Memory complexity, i.e., the memory required to store the matrix, and the number of required random values for the different structured matrices and the Random matrix are summarized in Table 1.

Table 1: Memory complexity and number of required random values for structured matrices and the Random matrix.

Matrix               Random   Circulant   BinPerm   HalfShift   VerHorShift   BinCirc   Toeplitz
# of random values   O(nk)    O(n)        O(n)      O(n)        O(n)          O(n)      O(n)
Memory complexity    O(nk)    O(n)        O(nk)     O(n)        O(n)          O(n)      O(n)

Figure 6: Mean test error versus a) the size of the hash (k) (zoomed plot; the original plot is in the Supplement), and b) the size of the reduction (n/k) for the network. The baseline corresponds to 1.7%.

Experiments show that using a fully random matrix gives the best performance, as predicted by the theory. The BinPerm matrix exhibits performance comparable to the Random matrix, which might be explained by the fact that applying a permutation itself adds an additional source of randomness.
The next best performer is HalfShift, whose generation involves less randomness than that of BinPerm or Random. Thus its performance, as expected, is worse than for these two other matrices. However, as opposed to the BinPerm and Random matrices, the HalfShift matrix can be stored in linear space. The results also show that in general all structured matrices perform relatively well for medium-size reductions. Finally, all structured matrices except for BinPerm lead to the biggest memory savings and require the smallest budget of randomness. Moreover, they often lead to computational efficiency, e.g., Toeplitz matrix-vector multiplications can be efficiently implemented via the Fast Fourier Transform (Yu et al., 2014). But, as mentioned before, faster-than-naive matrix-vector product computations can also be performed for BinPerm, HalfShift, and VerHorShift.

Figure 7: Mean test error versus a) the size of the hash (k) (zoomed plot; the original plot is in the Supplement), and b) the size of the reduction (n/k) for 1-NN. The baseline corresponds to 4.5%.

Finally, we also report how the performance of the 1-NN algorithm is affected by using structured hashed projections for dimensionality reduction. We obtained plots similar to those for the neural networks; they are shown in Figure 7. The table showing the mean and the standard deviation of the test error for the experiments with 1-NN is enclosed in the Supplementary material.

6. Conclusions

This paper shows that using structured hashed projections preserves well the angular distance between input data instances. Our theoretical results consider mapping the data to a lower-dimensional space using various structured matrices, where the structured linear projections are followed by the sign nonlinearity. We empirically verify our theoretical findings and show how using structured hashed projections for dimensionality reduction affects the performance of a neural network and a nearest neighbor classifier.

References

Achlioptas, D. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J. Comput. Syst. Sci., 66(4):671–687, 2003.
Altman, N. S. An Introduction to Kernel and Nearest Neighbor Nonparametric Regression. The American Statistician, 46(3):175–185, 1992.
Bingham, E. and Mannila, H. Random projection in dimensionality reduction: Applications to image and text data. In KDD, 2001.
Blum, A. Random projection, margins, kernels, and feature-selection. In SLSFS, 2006.
Boedecker, J., Obst, O., Mayer, N. M., and Asada, M. Initialization and self-organized optimization of recurrent neural network connectivity. HFSP Journal, 3(5):340–349, 2009.
Bottou, L. Online algorithms and stochastic approximations. In Online Learning and Neural Networks. Cambridge University Press, 1998.
Chen, W., Wilson, J. T., Tyree, S., Weinberger, K. Q., and Chen, Y. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015.
Choromanska, A., Choromanski, K., Jagannathan, G., and Monteleoni, C. Differentially-private learning of low dimensional manifolds. In ALT, 2013.
Choromanska, A., Henaff, M., Mathieu, M., Ben Arous, G., and LeCun, Y. The loss surfaces of multilayer networks. In AISTATS, 2015.
Dasgupta, S. Learning mixtures of Gaussians. In FOCS, 1999.
Dasgupta, S. Experiments with random projection. In UAI, 2000.
Dasgupta, S. and Freund, Y. Random projection trees and low dimensional manifolds. In STOC, 2008.
Denil, M., Shakibi, B., Dinh, L., Ranzato, M., and de Freitas, N. Predicting parameters in deep learning. In NIPS, 2013.
Denton, E., Zaremba, W., Bruna, J., LeCun, Y., and Fergus, R. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.
Fern, X. Z. and Brodley, C. E. Random projection for high dimensional data clustering: A cluster ensemble approach. In ICML, 2003.
Ganguli, S. and Sompolinsky, H. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 35:485–508, 2012.
Giryes, R., Sapiro, G., and Bronstein, A. M. Deep neural networks with random Gaussian weights: A universal classification strategy? CoRR, abs/1504.08291, 2015.
Gong, Y., Lazebnik, S., Gordo, A., and Perronnin, F. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929, 2013.
Haupt, J., Bajwa, W. U., Raz, G., and Nowak, R. Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Transactions on Information Theory, 56(11):5862–5875, 2010.
Hinrichs, A. and Vybíral, J. Johnson-Lindenstrauss lemma for circulant matrices. Random Struct. Algorithms, 39(3):391–398, 2011.
Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing, 70(1–3):489–501, 2006.
Jacques, L., Laska, J. N., Boufounos, P., and Baraniuk, R. G. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. CoRR, abs/1104.3160, 2011.
Jaeger, H. and Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, pp. 78–80, 2004.
Jafarpour, S., Xu, W., Hassibi, B., and Calderbank, R. Efficient and robust compressed sensing using optimized expander graphs. IEEE Transactions on Information Theory, 55(9):4299–4308, 2009.
Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
Krahmer, F., Mendelson, S., and Rauhut, H. Suprema of chaos processes and the restricted isometry property. Communications on Pure and Applied Mathematics, 67(11):1877–1904, 2014.
Li, P., Hastie, T. J., and Church, K. W. Very sparse random projections. In KDD, 2006.
Liu, K., Kargupta, H., and Ryan, J. Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Trans. on Knowl. and Data Eng., 18(1):92–106, 2006.
Mathieu, M., Henaff, M., and LeCun, Y. Fast training of convolutional networks through FFTs. In ICLR, 2014.
Pinto, N. and Cox, D. D. An evaluation of the invariance properties of a biologically-inspired system for unconstrained face recognition. In BIONETICS, 2010.
Pinto, N., Doukhan, D., DiCarlo, J. J., and Cox, D. D. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11), 2009.
Plan, Y. and Vershynin, R. Dimension reduction by random hyperplane tessellations. Discrete & Computational Geometry, 51(2):438–461, 2014.
Raginsky, M. and Lazebnik, S. Locality-sensitive binary codes from shift-invariant kernels. In NIPS, 2009.
Rauhut, H., Romberg, J. K., and Tropp, J. A. Restricted isometries for partial random circulant matrices. CoRR, abs/1010.1847, 2010.
Salakhutdinov, R. and Hinton, G. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009.
Saxe, A., Koh, P. W., Chen, Z., Bhand, M., Suresh, B., and Ng, A. On random weights and unsupervised feature learning. In ICML, 2011.
Sindhwani, V., Sainath, T., and Kumar, S. Structured transforms for small-footprint deep learning. In NIPS, 2015.
Sivakumar, D. Algorithmic derandomization via complexity theory. In STOC, 2002.
Vybíral, J. A variant of the Johnson-Lindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096–1105, 2011.
Weiss, Y., Torralba, A., and Fergus, R. Spectral hashing. In NIPS, 2008.
White, O. L., Lee, D. D., and Sompolinsky, H. Short-term memory in orthogonal neural networks. Physical Review Letters, 92(14), 2004.
Yap, H. L., Eftekhari, A., Wakin, M. B., and Rozell, C. J. The restricted isometry property for block diagonal matrices. In CISS, 2011.
Yi, X., Caramanis, C., and Price, E. Binary embedding: Fundamental limits and fast algorithm. CoRR, abs/1502.05746, 2015.
Yu, F. X., Kumar, S., Gong, Y., and Chang, S.-F. Circulant binary embedding. In ICML, 2014.