On Adaptive Distance Estimation

Yeshwanth Cherapanamjeri
Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720
yeshwanth@berkeley.edu

Jelani Nelson
Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720
minilek@berkeley.edu

Abstract

We provide a static data structure for distance estimation which supports adaptive queries. Concretely, given a dataset $X = \{x_i\}_{i=1}^n$ of $n$ points in $\mathbb{R}^d$ and $0 < p \le 2$, we construct a randomized data structure with low memory consumption and query time which, when later given any query point $q \in \mathbb{R}^d$, outputs a $(1+\varepsilon)$-approximation of $\|q - x_i\|_p$ with high probability for all $i \in [n]$. The main novelty is that our data structure's correctness guarantee holds even when the sequence of queries can be chosen adaptively: an adversary is allowed to choose the $j$th query point $q_j$ in a way that depends on the answers reported by the data structure for $q_1, \ldots, q_{j-1}$. Previous randomized Monte Carlo methods do not provide error guarantees in the setting of adaptively chosen queries [JL84, Ind06, TZ12, IW18]. Our memory consumption is $\tilde{O}((n+d)d/\varepsilon^2)$, slightly more than the $O(nd)$ required to store $X$ in memory explicitly, but with the benefit that our time to answer queries is only $\tilde{O}(\varepsilon^{-2}(n+d))$, much faster than the naive $\Theta(nd)$ time obtained from a linear scan in the case of $n$ and $d$ very large. Here $\tilde{O}$ hides $\log(nd/\varepsilon)$ factors. We discuss applications to nearest neighbor search and nonparametric estimation. Our method is simple and likely to be applicable to other domains: we describe a generic approach for transforming randomized Monte Carlo data structures which do not support adaptive queries into ones that do, and show that for the problem at hand, it can be applied to standard nonadaptive solutions to $\ell_p$ norm estimation with negligible overhead in query time and a factor $d$ overhead in memory.

1 Introduction

In recent years, much research attention has been directed towards understanding the performance of machine learning algorithms in adaptive or adversarial environments. In diverse application domains ranging from malware and network intrusion detection [BCM+17, CBK09] to strategic classification [HMPW16] to autonomous navigation [PMG16, LCLS17, PMG+17], the vulnerability of machine learning algorithms to malicious manipulation of input data has been well documented. Motivated by such considerations, we study the problem of designing efficient data structures for distance estimation, a basic primitive in algorithms for nonparametric estimation and exploratory data analysis, in the adaptive setting where the sequence of queries made to the data structure may be adversarially chosen. Concretely, the distance estimation problem is defined as follows:

Problem 1.1 (Approximate Distance Estimation (ADE)). For a known norm $\|\cdot\|$ we are given a set of vectors $X = \{x_i\}_{i=1}^n \subset \mathbb{R}^d$ and an accuracy parameter $\varepsilon \in (0, 1)$, and we must produce some data structure $D$. Then later, given only $D$ stored in memory with no direct access to $X$, we must respond to queries that specify $q \in \mathbb{R}^d$ by reporting distance estimates $\tilde{d}_1, \ldots, \tilde{d}_n$ satisfying
$$\forall i \in [n], \quad (1-\varepsilon)\|q - x_i\| \le \tilde{d}_i \le (1+\varepsilon)\|q - x_i\|.$$

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
The quantities we wish to minimize in a solution to ADE are (1) pre-processing time (the time to compute $D$ given $X$), (2) memory required to store $D$ (referred to as space complexity), and (3) query time (the time required to answer a single query). The trivial solution is, of course, to simply store $X$ in memory explicitly, in which case pre-processing time is zero, the required memory is $O(nd)$, and the query time is $O(nd)$ (assuming the norm can be computed in linear time, which is the case for the norms we focus on in this work).

Standard solutions to ADE are via randomized linear sketching: one picks a random sketching matrix $\Pi \in \mathbb{R}^{m \times d}$ for some $m \ll d$ and stores $y_i = \Pi x_i$ in memory for each $i$. Then to answer a query $q$, we return some estimator applied to $\Pi(q - x_i) = \Pi q - y_i$. Specifically, in the case of $\ell_2$ one can use the Johnson-Lindenstrauss lemma [JL84], the AMS sketch [AMS99], or the CountSketch [CCF04, TZ12]. For $\ell_p$ norms with $0 < p < 2$, one can use Indyk's $p$-stable sketch [Ind06] or that of [KNPW11]. Each of these works specifies some distribution over such $\Pi$, together with an estimation procedure. All these solutions have the advantage that $m = O(1/\varepsilon^2)$, so that the space complexity of storing $y_1, \ldots, y_n$ would only be $O(n/\varepsilon^2)$ instead of $O(nd)$. The runtimes for computing $\Pi x$ for a given $x$ range from $O(d/\varepsilon^2)$ to $\tilde{O}(d)$ for $\ell_2$ and from $O(d/\varepsilon^2)$ to $\tilde{O}(d)$ for $\ell_p$ ([KNPW11]). However, estimating $\|q - x_i\|_p$ from $\Pi q - y_i$ takes $O(n/\varepsilon^2)$ time in all cases. Notably, the recent work of Indyk and Wagner [IW18] is not based on linear sketching and attains the optimal space complexity in bits required to solve ADE in Euclidean space, up to an $O(\log(1/\varepsilon))$ factor.

One downside of all the prior works mentioned in the previous paragraph is that they give Monte Carlo randomized guarantees that do not support adaptive queries, i.e. the ability to choose a query vector based on responses by the data structure given to previous queries. Specifically, all these data structures provide a guarantee of the form
$$\forall q \in \mathbb{R}^d, \quad \mathbb{P}_s(\text{data structure correctly responds to } q) \ge 1 - \delta,$$
where $s$ is some random "seed", i.e. a random string, used to construct the data structure (for linear sketches specifically, $s$ specifies $\Pi$). The main point is that $q$ is not allowed to depend on $s$; $q$ is first fixed, then $s$ is drawn independently. Thus, in a setting in which we want to support a potentially adversarial sequence of queries $q_1, q_2, \ldots$, where $q_j$ may depend on the data structure's responses to $q_1, \ldots, q_{j-1}$, the above methods do not provide any error guarantees, since responses to previous queries are correlated with $s$; thus, if $q_j$ is a function of those responses, it, in turn, is correlated with $s$. In fact, far from being a technical inconvenience, explicit attacks exploiting such correlations have been constructed against all approaches based on linear sketching ([HW13]), rendering them open to exploitation in the adversarial scenario. We present our results in the above context:

Our Main Contribution. We provide a new data structure for ADE in the adaptive setting, for $\ell_p$ norms ($0 < p \le 2$), with memory consumption $\tilde{O}((n+d)d/\varepsilon^2)$, slightly more than the $O(nd)$ required to store $X$ in memory explicitly, but with the benefit that our query time is only $\tilde{O}(\varepsilon^{-2}(n+d))$ as opposed to the $O(nd)$ query time of the trivial algorithm. The pre-processing time is $\tilde{O}(nd^2/\varepsilon^2)$. Our solution is randomized and succeeds with probability $1 - 1/\mathrm{poly}(n)$ for each query. Unlike the previous work discussed, the error guarantees hold even in the face of adaptive queries.

In the case of Euclidean space ($p = 2$), we are able to provide sharper bounds with fewer logarithmic factors. Our formal theorem statements appear later as Theorems 4.1 and B.1. Consider, for example, the setting where $\varepsilon$ is a small constant, like $0.1$, and $n > d$. Then the query time of our algorithm is optimal up to logarithmic factors; indeed, just reading the input and then writing the output of the distance estimates takes time $\Omega(n + d)$. Secondly, a straightforward encoding argument implies that any such approach must have space complexity at least $\Omega(nd)$ bits (see Section C), which means that our space complexity is nearly optimal as well. Finally, the pre-processing time of the data structure can be improved by using fast algorithms for rectangular matrix multiplication (see Section 4 for further discussion).
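To fix ideas, the following is a minimal sketch of the standard non-adaptive linear-sketching baseline for $\ell_2$ in Python/NumPy. The function names and constants are ours, chosen purely for illustration; it is exactly this kind of scheme whose guarantees the attack of Section 5 breaks once queries may depend on earlier answers.

```python
import numpy as np

def build_jl_sketch(X, eps, seed=0):
    """Non-adaptive JL-style sketch: store y_i = Pi x_i for every data point.

    X is an (n, d) array; m = O(log(n)/eps^2) rows suffice for any one FIXED
    (non-adaptive) query with high probability."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    m = int(np.ceil(8 * np.log(max(n, 2)) / eps ** 2))    # illustrative constant
    Pi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))   # E ||Pi v||^2 = ||v||^2
    return Pi, X @ Pi.T                                    # sketches y_i, shape (n, m)

def jl_distance_estimates(Pi, Y, q):
    """Estimate ||q - x_i||_2 for all i using only Pi and the stored sketches Y."""
    Pq = Pi @ q                                      # O(md) time
    return np.linalg.norm(Pq[None, :] - Y, axis=1)   # O(nm) time, one estimate per point
```

The estimates are accurate for any query fixed independently of $\Pi$, but nothing is guaranteed once a query is allowed to depend on previously reported estimates.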
1.1 Related Work

As previously discussed, there has been growing interest in understanding risks posed by the deployment of algorithms in potentially adversarial settings ([BCM+17, HMPW16, GSS15, YHZL19, LCLS17, PMG16]). In addition, the problem of preserving statistical validity in exploratory data analysis has been well explored [DFH+15a, BNS+16, DFH+15b, DFH+15c, DSSU17], where the goal is to maintain coherence with an unknown distribution from which one obtains data samples. There has also been previous work studying linear sketches in adversarial scenarios quite different from those appearing here ([MNS11, GHR+12, GHS+12]). Specifically on data structures, it is of course the case that deterministic data structures provide correctness guarantees for adaptive queries automatically, though we are unaware of any non-trivial deterministic solutions for ADE. For the specific application of approximate nearest neighbor search, the works of [Kle97, KOR00] provide non-trivial data structures supporting adaptive queries; a comparison with our results is given in Subsection 1.2. In the context of streaming algorithms (i.e. sublinear memory), the very recent work of Ben-Eliezer et al. [BEJWY20] considers streaming algorithms with both adaptive queries and updates. One key difference is that they consider the insertion-only model of streaming, which does not allow one to model computing some function of the difference of two vectors (e.g. the norm of $q - x_i$).

1.2 More on applications

Nearest neighbor search: Obtaining efficient algorithms for Nearest Neighbor Search (NNS) has been a topic of intense research effort over the last 20 years, motivated by diverse applications spanning computer vision, information retrieval, and database search [BM01, SDI08, DIIM04]. While fast algorithms for exact NNS have impractical space complexities ([Cla88, Mei93]), a line of work, starting with the foundational results of [IM98, KOR00], has resulted in query times sub-linear in $n$ for the approximate variant. Formally, the Approximate Nearest Neighbor (ANN) problem is defined as follows:

Problem 1.2 (Approximate Nearest Neighbor). Given $X = \{x_i\}_{i \in [n]} \subset \mathbb{R}^d$, a norm $\|\cdot\|$, and an approximation factor $c > 1$, create a data structure $D$ such that in the future, for any query point $q \in \mathbb{R}^d$, $D$ will output some $i \in [n]$ satisfying $\|q - x_i\| \le c \cdot \min_{j \in [n]} \|q - x_j\|$.

The above definition requires the algorithm to return a point from the dataset whose distance to the query point is close to the distance of the exact nearest neighbor. The Locality Sensitive Hashing (LSH) approach of [IM98] gives a Monte Carlo randomized approach with low memory and query time, but it does not support adaptive queries.
There has also been recent interest in obtaining Las Vegas versions of such algorithms [Ahl17, Wei19, Pag18, SW17]. Unfortunately, those works also do not support adaptive queries. More specifically, these Las Vegas algorithms always answer (even adaptive) queries correctly, but their query times are random variables that are guaranteed to be small in expectation only when queries are made non-adaptively. The algorithms of [KOR00, Kle97] do support adaptive queries. However, those algorithms, though they have small query time, use large space: [KOR00] uses $n^{O(1/\varepsilon^2)}$ space for $c = 1 + \varepsilon$, and the corresponding algorithm of [Kle97] uses space exponential in $d$. The work of [Kle97] also presents another algorithm with memory and query/pre-processing times similar to our ADE data structure, though specifically for Euclidean space. While both of these works provide algorithms with runtimes sublinear in $n$ (at the cost of large space complexity), they are specifically for finding the approximate single nearest neighbor ("1-NN") and do not provide distance estimates to all points in the same query time (e.g. if one wanted to find the $k$ approximate nearest neighbors for a $k$-NN classifier).

Nonparametric estimation: While NNS is a vital algorithmic primitive for some fundamental methods in nonparametric estimation, it is inadequate for others, where a few near neighbors do not suffice or the number of required neighbors is unknown. For example, consider the case of kernel regression, where the prediction for a query point $q$ is given by $\hat{y} = \sum_{i=1}^n w_i y_i$ where $w_i = K(q, x_i)/\sum_{j=1}^n K(q, x_j)$ for some kernel function $K$ and $y_i$ is the label of the $i$th data point. For this and other nonparametric models, including SVMs, distance estimates to potentially every point in the dataset may be required [WJ95, HSS08, Alt92, Sim96, AMS97]. Even for simpler tasks like $k$-nearest neighbor classification or database search, it is often unclear what the right value of $k$ should be, and it is frequently chosen at test time based on the query point. Unfortunately, modifying previous approaches to return the $k$ nearest neighbors instead of 1 results in a factor $k$ increase in query time. Due to the ubiquity of such methods in practical applications, developing efficient versions deployable in adversarial settings is an important endeavor for which an adaptive ADE procedure is a useful primitive.

1.3 Overview of Techniques

Our main idea is quite simple and generic, and thus we believe it could be widely applicable to a number of other problem domains. In fact, it is so generic that it is most illuminating to explain our approach from the perspective of an arbitrary data structural problem instead of focusing on ADE specifically. Suppose we have a randomized Monte Carlo data structure $D$ for some data structural problem that supports answering nonadaptive queries from some family $Q$ of potential queries (in the case of ADE, $Q = \mathbb{R}^d$, so that the allowed queries are the set of all $q \in \mathbb{R}^d$). Suppose further that $l$ independent instantiations of $D$, $\{D_i\}_{i=1}^l$, satisfy the following:
$$\forall q \in Q: \quad \sum_{i=1}^{l} \mathbf{1}\{D_i \text{ answers } q \text{ correctly}\} \ge 0.9\,l \qquad \text{(Rep)}$$
with high probability. Since the above holds for all $q \in Q$, it is true even for any query in an adaptive sequence of queries. The Chernoff bound implies that to answer a query successfully with probability $1 - \delta$, one can sample $r = \Theta(\log(1/\delta))$ indices $i_1, \ldots, i_r \in [l]$, query each $D_{i_j}$ for $j \in [r]$, and then return the majority vote (or, e.g., the median if the answer is numerical and the correctness guarantees are approximate).
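Concretely, this reduction can be packaged as a thin generic wrapper around any such Monte Carlo data structure. The following is a minimal sketch; the class, its method names, and the constant in $r$ are ours and purely illustrative, and each copy is assumed to expose a `query` method returning a numerical answer.

```python
import math
import random
import statistics

class SubsampledEnsemble:
    """Generic wrapper: build l independent Monte Carlo copies once; each query
    touches only r = O(log(1/delta)) randomly chosen copies and returns the median
    of their (numerical) answers (a majority vote would be used for discrete answers)."""

    def __init__(self, build_copy, l, delta, seed=0):
        # build_copy(i) must return an independently seeded copy D_i of the
        # underlying non-adaptive data structure.
        self.copies = [build_copy(i) for i in range(l)]
        self.r = max(1, math.ceil(10 * math.log(1.0 / delta)))  # illustrative constant
        self.rng = random.Random(seed)

    def query(self, q):
        sampled = [self.rng.choice(self.copies) for _ in range(self.r)]
        return statistics.median(D.query(q) for D in sampled)
```

The point of the analysis that follows is to bound how large $l$ must be for (Rep) to hold even though $Q$ is infinite.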
Note that the runtime of this procedure is at most $r$ times the runtime of an individual $D_i$, and this constitutes the main benefit of the approach: during queries, not all copies of $D$ must be queried, but only a random sample. Now, defining for any data structure $D$:
$$l^*(D, \delta) := \inf\{l > 0 : \text{(Rep) holds for } D \text{ with probability at least } 1 - \delta\},$$
the argument in the preceding paragraph yields the following general theorem:

Theorem 1.3. Let $D$ be a randomized Monte Carlo data structure over a query space $Q$, and let $\delta \in (0, 1/2)$ be given. If $l^*(D, \delta)$ is finite, then there exists a data structure $D'$ which correctly answers any $q \in Q$, even in a sequence of adaptively chosen queries, with probability at least $1 - 2\delta$. Furthermore, the query time of $D'$ is at most $O(\log(1/\delta) \cdot t_q)$, where $t_q$ is the query time of $D$, and its space complexity and pre-processing time are at most $l^*(D, \delta)$ times those of $D$.

In the context of ADE, the underlying data structures will be randomized linear sketches, and the data structural problem we require them to solve is length estimation; that is, given a vector $v$, we require that at least $0.9\,l$ of the linear sketches accurately represent its length (see Section 4). In our applications, we select a random sample of $r = \Theta(\log(n/\delta))$ linear sketches, use them to obtain $r$ estimates of $\|q - x_i\|_p$, and aggregate these estimates by computing their median. The argument in the previous paragraph, along with a union bound, shows that this strategy succeeds in returning accurate estimates of $\|q - x_i\|_p$ for all $i \in [n]$ with probability at least $1 - \delta$.

The main impediment to deploying Theorem 1.3 in a general domain is obtaining a reasonable bound on $l$ such that (Rep) holds with high probability. In the case that $Q$ is finite, the Chernoff bound implies the upper bound $l^*(D, \delta) = O(\log(|Q|/\delta))$. However, since $Q = \mathbb{R}^d$ in our context, this is nonsensical. Nevertheless, we show that a bound of $l = \tilde{O}(d)$ suffices for the ADE problem for $\ell_p$ norms with $0 < p \le 2$, and that it can be tightened to $O(d)$ for the Euclidean case. We believe that reasonable bounds on $l$ establishing (Rep) can be obtained for a number of other applications, yielding a generic procedure for constructing adaptive data structures from nonadaptive ones in all these scenarios. Indeed, the vast literature on Empirical Process Theory yields bounds of precisely this nature, which we exploit as a special case in our setting.

Organization: For the remainder of the paper, we review preliminary material and introduce necessary notation in Section 2, formally present our algorithms in Section 3, and analyze their runtime and space complexities in Section 4, before concluding our discussion in Section 6.

2 Preliminaries and Notation

We use $d$ to represent the dimension, and $n$ is the number of data points. For a natural number $k$, $[k]$ denotes $\{1, \ldots, k\}$. For $0 < p \le 2$, we will use $\|v\|_p = (\sum_{i=1}^d |v_i|^p)^{1/p}$ to denote the $\ell_p$ norm of $v$ (for $p < 1$ this is technically not a norm). For a matrix $M \in \mathbb{R}^{l \times m}$, $\|M\|_F$ denotes the Frobenius norm of $M$: $(\sum_{i,j} M_{i,j}^2)^{1/2}$. Henceforth, when the norm is not specified, $\|v\|$ and $\|M\|$ denote the standard Euclidean ($p = 2$) and spectral norms of $v$ and $M$ respectively. For a vector $v \in \mathbb{R}^d$ and a real-valued random variable $Y$, we will abuse notation and use $\mathrm{Median}(v)$ and $\mathrm{Median}(Y)$ to denote the median of the entries of $v$ and of the distribution of $Y$ respectively. For a probabilistic event $E$, we use $\mathbf{1}\{E\}$ to denote the indicator random variable for $E$. Finally, we will use $\mathcal{S}^d_p = \{x \in \mathbb{R}^d : \|x\|_p = 1\}$. One of our algorithms makes use of the $p$-stable sketch of Indyk [Ind06].
Recall the following concerning $p$-stable distributions:

Definition 2.1 ([Zol86, Nol18]). For $p \in (0, 2]$, there exists a probability distribution $\mathrm{Stab}(p)$, called the $p$-stable distribution, with $\mathbb{E}[e^{itZ}] = e^{-|t|^p}$ for $Z \sim \mathrm{Stab}(p)$. Furthermore, for any $n$, any vector $v \in \mathbb{R}^n$, and iid samples $Z_1, \ldots, Z_n$ from $\mathrm{Stab}(p)$, we have $\sum_{i=1}^n v_i Z_i \sim \|v\|_p Z$ with $Z \sim \mathrm{Stab}(p)$.

Note that the Gaussian distribution is 2-stable, and hence these distributions can be seen as generalizing the stability properties of a Gaussian distribution to norms other than Euclidean. These distributions have found applications in streaming algorithms and approximate nearest neighbor search [Ind06, DIIM04], and moreover it is possible to efficiently obtain samples from them [CMS76]. We will use $\mathrm{Med}_p$ to denote $\mathrm{Median}(|Z|)$ where $Z \sim \mathrm{Stab}(p)$. Finally, we will use $N(0, \sigma^2)$ to denote the distribution of a normal random variable with mean 0 and variance $\sigma^2$.

3 Algorithms

As previously mentioned, our construction combines the approach outlined in Subsection 1.3 with known linear sketches [JL84, Ind06] and a net argument. Both Theorems 4.1 and B.1 are proven using this recipe, with the only differences being a swap of the underlying data structure (or linear sketch) being used and the argument used to establish a bound on $l$ satisfying (Rep) (discussed in Section 1.3). Concretely, in our solutions to ADE we pick $l = \tilde{O}(d)$ linear sketch matrices $\Pi_1, \ldots, \Pi_l \in \mathbb{R}^{m \times d}$ for $m = O(1/\varepsilon^2)$. The data structure stores $\Pi_i x_j$ for all $i \in [l]$, $j \in [n]$. Then to answer a query $q \in \mathbb{R}^d$:

1. Select a set of $r = \Theta(\log n)$ indices $j_k \in [l]$ uniformly at random, with replacement.
2. For each $i \in [n]$, $k \in [r]$, obtain a distance estimate $\tilde{d}_{i,k}$ based on $\Pi_{j_k} x_i$ (stored) and $\Pi_{j_k} q$. These estimates are obtained from the underlying sketching algorithm (for $\ell_2$ it is $\|\Pi_{j_k} q - \Pi_{j_k} x_i\|_2$ [JL84], and for $\ell_p$ with $0 < p < 2$ it is $\mathrm{Median}(|\Pi_{j_k} q - \Pi_{j_k} x_i|)/\mathrm{Med}_p$ [Ind06], where $|\cdot|$ for a vector denotes entry-wise absolute value).
3. Return distance estimates $\tilde{d}_1, \ldots, \tilde{d}_n$ with $\tilde{d}_i = \mathrm{Median}(\tilde{d}_{i,1}, \ldots, \tilde{d}_{i,r})$.

As seen above, the only difference between our algorithms for the Euclidean and $\ell_p$ norm cases is the distribution used for the $\Pi_j$, as well as the method for distance estimation in Step 2. Since the algorithms are quite similar (though the analysis for the Euclidean case is sharpened to remove logarithmic factors), we discuss the case of $\ell_p$ ($0 < p < 2$) in this section and defer our special treatment of the Euclidean case to Appendix B.

Algorithm 1 constructs the linear embeddings for the case of $\ell_p$ norms, $0 < p < 2$, using [Ind06]. The algorithm takes as input the dataset $X$, an accuracy parameter $\varepsilon$, and a failure probability $\delta$, and it constructs a data structure containing the embedding matrices and the embeddings $\Pi_j x_i$ of the points in the dataset. The linear embeddings are subsequently used in Algorithm 2 to answer queries.

Algorithm 1: Compute Data Structure ($\ell_p$ space for $0 < p < 2$, based on [Ind06])
Input: $X = \{x_i \in \mathbb{R}^d\}_{i=1}^n$, accuracy $\varepsilon \in (0, 1)$, failure probability $\delta \in (0, 1)$
  $l \leftarrow \Theta((d + \log(1/\delta)) \log(d/\varepsilon))$, $m \leftarrow \Theta(1/\varepsilon^2)$
  For $j \in [l]$, let $\Pi_j \in \mathbb{R}^{m \times d}$ with entries drawn iid from $\mathrm{Stab}(p)$
Output: $D = \{\Pi_j, \{\Pi_j x_i\}_{i=1}^n\}_{j \in [l]}$

Algorithm 2: Process Query ($\ell_p$ space for $0 < p < 2$, based on [Ind06])
Input: Query $q$, data structure $D = \{\Pi_j, \{\Pi_j x_i\}_{i=1}^n\}_{j \in [l]}$, failure probability $\delta \in (0, 1)$
  $r \leftarrow \Theta(\log(n/\delta))$
  Sample $j_1, \ldots, j_r$ iid with replacement from $[l]$
  For $i \in [n]$, $k \in [r]$, let $\tilde{d}_{i,k} \leftarrow \mathrm{Median}(|\Pi_{j_k}(q - x_i)|)/\mathrm{Med}_p$
  For $i \in [n]$, let $\tilde{d}_i \leftarrow \mathrm{Median}(\{\tilde{d}_{i,k}\}_{k=1}^r)$
Output: $\{\tilde{d}_i\}_{i=1}^n$
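To make Algorithms 1 and 2 concrete, the following is a minimal illustrative implementation in Python/NumPy. It is a sketch under our own simplifications: the constants hidden in the $\Theta(\cdot)$'s are arbitrary, the symmetric $p$-stable samples are drawn with the Chambers-Mallows-Stuck formula [CMS76], and $\mathrm{Med}_p$ is estimated empirically rather than computed in closed form.

```python
import numpy as np

def sample_stable(p, size, rng):
    """Symmetric p-stable samples (characteristic function exp(-|t|^p)) via the
    Chambers-Mallows-Stuck method [CMS76]."""
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(p * theta) / np.cos(theta) ** (1 / p)
            * (np.cos((1 - p) * theta) / w) ** ((1 - p) / p))

def estimate_med_p(p, rng, n_samples=200_000):
    """Empirical stand-in for Med_p = Median(|Z|), Z ~ Stab(p)."""
    return np.median(np.abs(sample_stable(p, n_samples, rng)))

def build_ade(X, p, eps, delta, seed=0):
    """Algorithm 1 (sketch): draw l independent p-stable sketch matrices and store
    the sketched data points. Constants are illustrative, not the paper's."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    l = max(1, int((d + np.log(1 / delta)) * np.log(d / eps)))  # Theta((d + log(1/delta)) log(d/eps))
    m = max(1, int(np.ceil(1 / eps ** 2)))                      # Theta(1/eps^2)
    Pis = [sample_stable(p, (m, d), rng) for _ in range(l)]
    sketches = [Pi @ X.T for Pi in Pis]                         # column i of block j is Pi_j x_i
    return {"Pis": Pis, "sketches": sketches, "med_p": estimate_med_p(p, rng), "rng": rng}

def query_ade(D, q, r):
    """Algorithm 2 (sketch): sample r = Theta(log(n/delta)) sketches uniformly with
    replacement, form Indyk's median estimate from each, and report the median over them."""
    rng = D["rng"]
    ks = rng.integers(0, len(D["Pis"]), size=r)
    per_rep = []
    for k in ks:
        diff = D["Pis"][k] @ q[:, None] - D["sketches"][k]         # columns are Pi_k (q - x_i)
        per_rep.append(np.median(np.abs(diff), axis=0) / D["med_p"])
    return np.median(np.stack(per_rep), axis=0)                    # estimate of ||q - x_i||_p for all i
```

For the Euclidean case (Appendix B) one would instead draw Gaussian entries and use $\|\Pi_{j_k}(q - x_i)\|_2$ as the per-repetition estimate, as in Step 2 above.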
4 Analysis

In this section we prove our main theorem for $\ell_p$ spaces, $0 < p < 2$. We then prove Theorem B.1 for the Euclidean case in Appendix B.

Theorem 4.1. For any $0 < \delta < 1$ and any $0 < p < 2$, there is a data structure for the ADE problem in $\ell_p$ space that succeeds on any query with probability at least $1 - \delta$, even in a sequence of adaptively chosen queries. Furthermore, the time taken by the data structure to process each query is $\tilde{O}(\varepsilon^{-2}(n + d)\log(1/\delta))$, the space complexity is $\tilde{O}(\varepsilon^{-2}(n + d)(d + \log(1/\delta)))$, and the pre-processing time is $\tilde{O}(\varepsilon^{-2} n d (d + \log(1/\delta)))$.

The data structures $D_j$ that we use in our instantiation of the strategy described in Subsection 1.3 are the random linear sketches $\Pi_j$, and the data structural problem they implicitly solve is length estimation; that is, given any vector $v \in \mathbb{R}^d$, at least $0.9\,l$ of the linear sketches $\Pi_j$ accurately represent its length. To ease exposition, we will now directly reason about the matrices $\Pi_j$ through the rest of the proof. We start with a theorem of [Ind06]:

Theorem 4.2 ([Ind06]). Let $0 < p < 2$, $\varepsilon \in (0, 1)$, and $m$ as in Algorithm 1. Then, for $\Pi \in \mathbb{R}^{m \times d}$ with entries drawn iid from $\mathrm{Stab}(p)$ and any $v \in \mathbb{R}^d$:
$$\Pr_\Pi\left[(1-\varepsilon)\,\mathrm{Med}_p\,\|v\|_p \le \mathrm{Median}(|\Pi v|) \le (1+\varepsilon)\,\mathrm{Med}_p\,\|v\|_p\right] \ge 0.975.$$

One can also find an analysis of Theorem 4.2 that only requires $\Pi$ to be pseudorandom and thus requires less memory to store; see the proof of Theorem 2.1 in [KNW10]. We now formalize the (Rep) requirement on the data structures $\{\Pi_j\}_{j=1}^l$ in our particular context:

Definition 4.3. Given $\varepsilon > 0$ and $0 < p < 2$, we say that a set of matrices $\{\Pi_j \in \mathbb{R}^{m \times d}\}_{j=1}^l$ is $(\varepsilon, p)$-representative if:
$$\forall x \text{ with } \|x\|_p = 1: \quad \sum_{j=1}^{l} \mathbf{1}\left\{(1-\varepsilon)\,\mathrm{Med}_p \le \mathrm{Median}(|\Pi_j x|) \le (1+\varepsilon)\,\mathrm{Med}_p\right\} \ge 0.9\,l.$$

We now show that the $\{\Pi_j\}_{j=1}^l$ output by Algorithm 1 are $(\varepsilon, p)$-representative.

Lemma 4.4. For $p \in (0, 2)$, $\varepsilon, \delta \in (0, 1)$, and $l$ as in Algorithm 1, the collection $\{\Pi_j\}_{j=1}^l$ output by Algorithm 1 is $(\varepsilon, p)$-representative with probability at least $1 - \delta/2$.

Proof. We start with a simple lemma:

Lemma 4.5. Let $0 < p < 2$ and $C > 0$ a sufficiently large constant. Suppose $\Pi \in \mathbb{R}^{m \times d}$ is distributed as follows: the $\Pi_{ij}$ are iid with $\Pi_{ij} \sim \mathrm{Stab}(p)$, with $m$ as in Algorithm 1. Then
$$\Pr\left(\|\Pi\|_F \le C(dm)^{(2+p)/(2p)}\right) \ge 0.975.$$

Proof. From Corollary A.2, a $p$-stable random variable $Z$ satisfies $\Pr(|Z| \ge t) = O(t^{-p})$. Thus, by the union bound, for a large enough constant $C$:
$$\Pr\left(\exists\, i, j : |\Pi_{ij}| \ge C(dm)^{1/p}\right) \le dm \cdot \Pr\left(|\Pi_{11}| \ge C(dm)^{1/p}\right) \le 0.025.$$
Therefore, with probability at least 0.975, every entry satisfies $|\Pi_{ij}| < C(dm)^{1/p}$, and hence $\|\Pi\|_F \le \sqrt{dm} \cdot C(dm)^{1/p} = C(dm)^{(2+p)/(2p)}$.

Now, let $\Delta = C(dm)^{(2+p)/(2p)}$ and define $B \subset \mathbb{R}^d$ to be a $\gamma$-net (Definition A.3) of the unit sphere $\mathcal{S}^d_p$ under $\ell_2$ distance, with $\gamma = \Theta(\varepsilon (dm)^{-(2+p)/(2p)})$. Recall that $B$ being a $\gamma$-net under $\ell_2$ means that for all $x \in \mathcal{S}^d_p$ there exists $x' \in B$ such that $\|x - x'\|_2 \le \gamma$, and that we may assume $|B| \le (3/\gamma)^d$ (Lemma A.7). By Theorems 4.2 and A.8 and our setting of $l$, we have that for any $v \in B$:
$$\sum_{j=1}^{l} \mathbf{1}\left\{(1-\varepsilon/2)\,\mathrm{Med}_p \le \mathrm{Median}(|\Pi_j v|) \le (1+\varepsilon/2)\,\mathrm{Med}_p\right\} \ge 0.95\,l$$
with probability at least $1 - \frac{\delta}{4|B|}$. Therefore, the above condition holds for all $v \in B$ with probability at least $1 - \delta/4$. Also, for a fixed $j \in [l]$, Lemma 4.5 yields $\Pr(\|\Pi_j\|_F \le \Delta) \ge 0.975$. Thus, by a Chernoff bound, with probability at least $1 - \exp(-\Omega(l)) \ge 1 - \delta/4$, at least $0.95\,l$ of the $\Pi_j$ have $\|\Pi_j\|_F \le \Delta$. Thus, by a union bound, for all $v \in B$:
$$\sum_{j=1}^{l} \mathbf{1}\left\{(1-\varepsilon/2)\,\mathrm{Med}_p \le \mathrm{Median}(|\Pi_j v|) \le (1+\varepsilon/2)\,\mathrm{Med}_p \ \text{ and }\ \|\Pi_j\|_F \le \Delta\right\} \ge 0.9\,l$$
with probability at least $1 - \delta/2$. We condition on this event and now extend from $B$ to the whole $\ell_p$ ball. Consider any $x$ with $\|x\|_p = 1$. From the definition of $B$, there exists $v \in B$ such that $\|x - v\|_2 \le \gamma$. Let $J$ be defined as:
$$J = \left\{j : (1-\varepsilon/2)\,\mathrm{Med}_p \le \mathrm{Median}(|\Pi_j v|) \le (1+\varepsilon/2)\,\mathrm{Med}_p \text{ and } \|\Pi_j\|_F \le \Delta\right\}.$$
From the previous discussion, we have $|J| \ge 0.9\,l$. For $j \in J$:
$$\|\Pi_j x - \Pi_j v\|_\infty \le \|\Pi_j x - \Pi_j v\|_2 = \|\Pi_j(x - v)\|_2 \le \|\Pi_j\|_F \|x - v\|_2 \le \varepsilon\,\mathrm{Med}_p/2$$
from our definition of $\gamma$ and the bound on $\|\Pi_j\|_F$.
Therefore, we have:
$$\left|\mathrm{Median}(|\Pi_j x|) - \mathrm{Median}(|\Pi_j v|)\right| \le \varepsilon\,\mathrm{Med}_p/2.$$
From this, we may conclude that for all $j \in J$: $(1-\varepsilon)\,\mathrm{Med}_p \le \mathrm{Median}(|\Pi_j x|) \le (1+\varepsilon)\,\mathrm{Med}_p$. Since $x$ is an arbitrary vector in $\mathcal{S}^d_p$ and $|J| \ge 0.9\,l$, the statement of the lemma follows.

We now prove the correctness of Algorithm 2 assuming that $\{\Pi_j\}_{j=1}^l$ are $(\varepsilon, p)$-representative.

Lemma 4.6. Let $\varepsilon, \delta \in (0, 1)$. Then Algorithm 2, when given as input any query point $q \in \mathbb{R}^d$, $D = \{\Pi_j, \{\Pi_j x_i\}_{i=1}^n\}_{j=1}^l$ where $\{\Pi_j\}_{j=1}^l$ are $(\varepsilon, p)$-representative, $\varepsilon$, and $\delta$, outputs distance estimates $\{\tilde{d}_i\}_{i=1}^n$ satisfying:
$$\forall i \in [n]: \quad (1-\varepsilon)\|q - x_i\|_p \le \tilde{d}_i \le (1+\varepsilon)\|q - x_i\|_p$$
with probability at least $1 - \delta/2$.

Proof. Let $i \in [n]$ and $W_k = \mathbf{1}\{\tilde{d}_{i,k} \in [(1-\varepsilon)\|q - x_i\|_p, (1+\varepsilon)\|q - x_i\|_p]\}$. We have from the fact that $\{\Pi_j\}_{j=1}^l$ are $(\varepsilon, p)$-representative and the scale invariance of Definition 4.3 that $\mathbb{E}[W_k] \ge 0.9$. Furthermore, the $W_k$ are independent for distinct $k$. Therefore, we have by Theorem A.8 that with probability at least $1 - \delta/(2n)$, $\sum_{k=1}^r W_k \ge 0.6\,r$. From the definition of $\tilde{d}_i$, $\tilde{d}_i$ satisfies the desired accuracy requirement when $\sum_{k=1}^r W_k \ge 0.6\,r$, and hence with probability at least $1 - \delta/(2n)$. By a union bound over all $i \in [n]$, the conclusion of the lemma follows.

Finally, we analyze the runtimes of Algorithms 1 and 2, where $\mathrm{MM}(a, b, c)$ is the runtime to multiply an $a \times b$ matrix with a $b \times c$ matrix. Note that $\mathrm{MM}(a, b, c) = O(abc)$, but it is in fact lower due to the existence of fast rectangular matrix multiplication algorithms [GU18]; since the precise bound depends on a case analysis of the relationship between $a$, $b$, and $c$, we do not simplify the bound beyond simply stating $\mathrm{MM}(a, b, c)$, since it is orthogonal to our focus.

Lemma 4.7. The query time of Algorithm 2 is $\tilde{O}((n + d)\log(1/\delta)/\varepsilon^2)$, and for Algorithm 1 the space is $\tilde{O}((n + d)(d + \log(1/\delta))/\varepsilon^2)$ and the pre-processing time is $O(\mathrm{MM}(\varepsilon^{-2}(d + \log(1/\delta))\log(d/\varepsilon), d, n))$ (which is naively $\tilde{O}(nd(d + \log(1/\delta))/\varepsilon^2)$).

Proof. The space required to store the matrices $\{\Pi_j\}_{j=1}^l$ is $O(mld)$, and the space required to store the projections $\Pi_j x_i$ for all $i \in [n]$, $j \in [l]$ is $O(nml)$. For our settings of $m$ and $l$, the space complexity of the algorithm follows. The query time follows from the time required to compute $\Pi_{j_k} q$ for $k \in [r]$ with $r = O(\log(n/\delta))$, the $n$ median computations in Algorithm 2, and our setting of $m$. For the pre-processing time, it takes $O(mdl) = \tilde{O}(\varepsilon^{-2} d(d + \log(1/\delta)))$ time to generate all the $\Pi_j$. Then we have to compute $\Pi_j x_i$ for all $j \in [l]$, $i \in [n]$. Naively this would take time $O(nlmd) = \tilde{O}(\varepsilon^{-2} nd(d + \log(1/\delta)))$. This can be improved, though, using fast matrix multiplication: if we organize the $x_i$ as columns of a $d \times n$ matrix $A$, and stack the $\Pi_j$ row-wise to form a matrix $\Pi \in \mathbb{R}^{ml \times d}$, then we wish to compute $\Pi A$, which we can do in $\mathrm{MM}(ml, d, n)$ time.

Remark 4.8. In the case $p = 2$ one can use the CountSketch instead of Indyk's $p$-stable sketch, which supports multiplying $\Pi x$ in $O(d)$ time instead of $O(d/\varepsilon^2)$ [CCF04, TZ12]. Thus one could improve the ADE query time in Euclidean space to $\tilde{O}(d + n/\varepsilon^2)$, i.e. the $1/\varepsilon^2$ term need not multiply $d$. Since for the CountSketch matrix one has $\|\Pi\|_F \le \sqrt{d}$ with probability 1, the same argument as above allows one to establish $(\varepsilon, 2)$-representativeness for CountSketch matrices as well. It may also be possible to improve query time similarly for $0 < p < 2$ using [KNPW11], though we do not do so in the present work.

We now assemble our results to prove Theorem 4.1. The proof of Theorem 4.1 follows by using Algorithm 1 to construct our adaptive data structure $D$ and Algorithm 2 to answer any query $q$.
The correctness guarantees follow from Lemmas 4.4 and 4.6, and the runtime and space complexity guarantees follow from Lemma 4.7. This concludes the proof of the theorem.

5 Experimental Evaluation

In this section, we provide empirical evidence of the efficacy of our scheme. We have implemented both the vanilla Johnson-Lindenstrauss (JL) approach to distance estimation and our own, along with an attack designed to compromise the correctness of the JL approach. Recall that in the JL approach, one first selects a matrix $\Pi \in \mathbb{R}^{k \times d}$ with $k = \Theta(\varepsilon^{-2}\log n)$ whose entries have been drawn from a sub-gaussian distribution with variance $1/k$. Given a query point $q$, the distance to $x_i$ is approximated by computing $\|\Pi(q - x_i)\|$. We now describe our evaluation setup, starting with the description of the attack.

Our Attack: The attack we describe can be carried out for any database of at least two points; for the sake of simplicity, we describe our attack applied to the database of three points $\{-e_1, 0, e_1\}$, where $e_1$ is the first standard basis vector. Now, consider the set $S$ defined as follows:
$$S := \{x : \|\Pi(x + e_1)\| \ge \|\Pi(x - e_1)\|\} = \{x : \langle x, \Pi^\top\Pi e_1\rangle \ge 0\}.$$
When $\Pi$ is drawn from, say, a Gaussian distribution as in the JL approach, the vector $y = \Pi^\top\Pi e_1$, with high probability, has length $\Theta(\sqrt{d/k})$ while $y_1 \approx 1$. Therefore, when $k \ll d$, the overlap of $y$ with $e_1$ is small (that is, $\langle y, e_1\rangle/\|y\|$ is small). Conditioned on this high probability event, we sample a sequence of iid random vectors $\{z_i\}_{i=1}^r \sim N(0, I)$ and compute $z \in \mathbb{R}^d$ defined as:
$$z := \sum_{i=1}^{r} (-1)^{W_i} z_i \quad \text{where} \quad W_i = \mathbf{1}\left\{\|\Pi(z_i - e_1)\| \ge \|\Pi(z_i + e_1)\|\right\}. \tag{1}$$
Through simple concentration arguments, $z$ can be shown to be a good approximation of $y$ (in terms of angular distance), and noticing that $\|\Pi y\|$ is $\Omega(d/k)$, we get that $\|\Pi z\| \ge \Omega(\sqrt{d/k})\,\|z\|$, so that $z$ makes a good adversarial query. Note that the above attack can be implemented solely with access to two points from the dataset and the values $\|\Pi(q - e_1)\|$ and $\|\Pi(q + e_1)\|$. Perhaps even more distressingly, the attack consists of a series of random inputs and concludes with a single adaptive choice. That is, the JL approach to distance estimation can be broken with a single round of adaptivity.

Figure 1: (a) Johnson-Lindenstrauss, adaptive vs. random inputs; (b) ADE with adaptive inputs. Subfigure 1a illustrates the impact of adaptivity on the performance of the JL approach to distance estimation and contrasts its performance with what one would obtain if the inputs were random. In contrast, Subfigure 1b shows that the ADE approach described in this paper is unaffected by the attack described here.

In Figure 1, we illustrate the results of our attack on the JL sketch as well as an implementation of our algorithm when $d = 5000$ and $k = 250$ (for computational reasons, we chose a much smaller value of $l = 200$ to implement our data structure). To observe how the performance of the JL approach degrades with the number of rounds, we plotted the reported length of $z$ as in Eq. (1) for $r$ ranging from 1 to 5000. Furthermore, we compare this to the results that one would obtain if the inputs to the sketch were random points in $\mathbb{R}^d$ as opposed to adversarial ones. From Subfigure 1a, the performance of the JL approach drops drastically as soon as a few hundred random queries are made, which is significantly smaller than the ambient dimension. In contrast, Subfigure 1b shows that our ADE data structure is unaffected by the previously described attack, corroborating our theoretical analysis.
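To make the attack concrete, the following is a small self-contained reimplementation of Eq. (1) against a plain JL sketch (our own simplified version with smaller illustrative parameters, not the exact code behind Figure 1).

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 2000, 100, 2000            # illustrative sizes (the experiments use d = 5000, k = 250)
Pi = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))   # JL sketch matrix, E ||Pi x||^2 = ||x||^2

def reported_norm(x):                 # all the attacker ever observes
    return np.linalg.norm(Pi @ x)

e1 = np.zeros(d)
e1[0] = 1.0

# One "round" of adaptivity: r non-adaptive random probes, then a single adaptive query z.
z = np.zeros(d)
for _ in range(r):
    zi = rng.normal(size=d)
    Wi = reported_norm(zi - e1) >= reported_norm(zi + e1)   # W_i from Eq. (1)
    z += -zi if Wi else zi                                   # (-1)^{W_i} z_i

ratio = reported_norm(z) / np.linalg.norm(z)
print(f"sketch overestimates ||z|| by a factor of about {ratio:.2f} "
      f"(an honest estimate would give a ratio close to 1; sqrt(d/k) = {np.sqrt(d / k):.2f})")
```

The query $z$ correlates with $\Pi^\top\Pi e_1$, so the sketch reports a length inflated by roughly a $\sqrt{d/k}$ factor, even though every probe before the final query was a plain Gaussian vector.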
6 Conclusion

In this paper, we studied the problem of adaptive distance estimation, where one is required to estimate the distance between a sequence of possibly adversarially chosen query points and the points in a dataset. For the aforementioned problem, we devised algorithms for all $\ell_p$ norms with $0 < p \le 2$ with nearly optimal query times and whose space complexities are nearly optimal. Prior to our work, the only previous result with comparable guarantees is an algorithm for the Euclidean case which only returns one near neighbor [Kle97] and does not estimate all distances. Along the way, we devised a novel framework for building adaptive data structures from non-adaptive ones and leveraged recent results from heavy-tailed estimation for one analysis. We now present some open questions:

1. Our construction can be more broadly viewed as a specific instance of ensemble learning [Die00]. Starting with the influential work of [Bre96, Bre01], ensemble methods have been a mainstay in practical machine learning techniques. Indeed, the matrices stored in our ensemble have $O(\varepsilon^{-2})$ rows, while using a single large matrix would require a model with $O(d)$ rows. Are there other machine learning tasks for which such trade-offs can be quantified?

2. The main drawback of our results is the time taken to compute the data structure, $\tilde{O}(nd^2)$ (this could be improved using fast rectangular matrix multiplication, but would still be $\omega(nd)$, i.e. superlinear). One question is thus whether nearly linear time pre-processing is possible.

7 Broader Impact

As a theoretical contribution, we do not see our work as having any foreseeable societal implications. While we do provide theoretical insights towards designing more resilient machine learning algorithms, it is difficult to gauge the downstream effects of incorporating such insights into practical algorithms, and these effects may heavily rely on context.

Acknowledgments and Disclosure of Funding

J. Nelson is supported by NSF award CCF-1951384, ONR grant N00014-18-1-2562, ONR DORECG award N00014-17-1-2127, and a Google Faculty Research Award. Y. Cherapanamjeri is supported by a Microsoft Research BAIR Commons Research Grant. The authors would like to thank Sam Hopkins, Sidhanth Mohanty, Nilesh Tripuraneni and Tijana Zrnic for helpful comments in the course of this project and in the preparation of this manuscript.

References

[Ahl17] Thomas Dybdahl Ahle. Optimal Las Vegas locality sensitive data structures. In Chris Umans, editor, 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 938–949. IEEE Computer Society, 2017.

[AK17] Noga Alon and Boaz Klartag. Optimal compression of approximate inner products and dimension reduction. In Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 639–650, 2017.

[Alt92] N. S. Altman. An introduction to kernel and nearest-neighbor nonparametric regression. Amer. Statist., 46(3):175–185, 1992.

[AMS97] Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. Locally weighted learning. Artif. Intell. Rev., 11(1-5):11–73, 1997.

[AMS99] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999.

[BCM+17] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. CoRR, abs/1708.06131, 2017.
[BEJWY20] Omri Ben-Eliezer, Rajesh Jayaram, David P. Woodruff, and Eylon Yogev. A frame- work for adversarially robust streaming algorithms. In Proceedings of the 39th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), 2020. [BLM13] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities. Oxford University Press, Oxford, 2013. A nonasymptotic theory of independence, With a foreword by Michel Ledoux. [BM01] Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In Doheon Lee, Mario Schkolnick, Foster J. Provost, and Ramakrishnan Srikant, editors, Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, San Francisco, CA, USA, August 26-29, 2001, pages 245 250. ACM, 2001. [BNS+16] Raef Bassily, Kobbi Nissim, Adam D. Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. Algorithmic stability for adaptive data analysis. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 1046 1059. ACM, 2016. [Bre96] Leo Breiman. Bagging predictors. Mach. Learn., 24(2):123 140, 1996. [Bre01] Leo Breiman. Random forests. Mach. Learn., 45(1):5 32, 2001. [CBK09] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Comput. Surv., 41(3):15:1 15:58, 2009. [CCF04] Moses Charikar, Kevin C. Chen, and Martin Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3 15, 2004. [Cla88] Kenneth L. Clarkson. A randomized algorithm for closest-point queries. SIAM J. Comput., 17(4):830 847, 1988. [CMS76] J. M. Chambers, C. L. Mallows, and B. W. Stuck. A method for simulating stable random variables. J. Amer. Statist. Assoc., 71(354):340 344, 1976. [DFH+15a] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. Generalization in adaptive data analysis and holdout reuse. In Proceedings of the 28th Annual Conference on Advances in Neural Information Processing Systems (NIPS), pages 2350 2358, 2015. [DFH+15b] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. The reusable holdout: preserving validity in adaptive data analysis. Science, 349(6248):636 638, 2015. [DFH+15c] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Leon Roth. Preserving statistical validity in adaptive data analysis. In Proceedings of the 47th Annual ACM on Symposium on Theory of Computing (STOC), pages 117 126, 2015. [Die00] Thomas G. Dietterich. Ensemble methods in machine learning. In Josef Kittler and Fabio Roli, editors, Multiple Classifier Systems, First International Workshop, MCS 2000, Cagliari, Italy, June 21-23, 2000, Proceedings, volume 1857 of Lecture Notes in Computer Science, pages 1 15. Springer, 2000. [DIIM04] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Jack Snoeyink and Jean-Daniel Boissonnat, editors, Proceedings of the 20th ACM Symposium on Computational Geometry, Brooklyn, New York, USA, June 8-11, 2004, pages 253 262. ACM, 2004. [DSSU17] Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. Exposed! a survey of attacks on private data. Annual Review of Statistics and Its Application, 4(1):61 84, 2017. [GHR+12] Anna C. Gilbert, Brett Hemenway, Atri Rudra, Martin J. 
Strauss, and Mary Wootters. Recovering simple signals. In 2012 Information Theory and Applications Workshop, ITA 2012, San Diego, CA, USA, February 5-10, 2012, pages 382 391. IEEE, 2012. [GHS+12] Anna C. Gilbert, Brett Hemenway, Martin J. Strauss, David P. Woodruff, and Mary Wootters. Reusable low-error compressive sampling schemes through privacy. In IEEE Statistical Signal Processing Workshop, SSP 2012, Ann Arbor, MI, USA, August 5-8, 2012, pages 536 539. IEEE, 2012. [GSS15] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann Le Cun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. [GU18] Francois Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1029 1046, 2018. [HMPW16] Moritz Hardt, Nimrod Megiddo, Christos H. Papadimitriou, and Mary Wootters. Strate- gic classification. In Madhu Sudan, editor, Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, Cambridge, MA, USA, January 14-16, 2016, pages 111 122. ACM, 2016. [HSS08] Thomas Hofmann, Bernhard Schölkopf, and Alexander J. Smola. Kernel methods in machine learning. Ann. Statist., 36(3):1171 1220, 2008. [HW13] Moritz Hardt and David P. Woodruff. How robust are linear sketches to adaptive inputs? In Dan Boneh, Tim Roughgarden, and Joan Feigenbaum, editors, Symposium on Theory of Computing Conference, STOC 13, Palo Alto, CA, USA, June 1-4, 2013, pages 121 130. ACM, 2013. [IM98] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Jeffrey Scott Vitter, editor, Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May 23-26, 1998, pages 604 613. ACM, 1998. [Ind06] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. J. ACM, 53(3):307 323, 2006. [IW18] Piotr Indyk and Tal Wagner. Approximate nearest neighbors in limited space. In Proceedings of the Conference On Learning Theory (COLT), pages 2012 2036, 2018. [JL84] William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), volume 26 of Contemp. Math., pages 189 206. Amer. Math. Soc., Providence, [Kle97] Jon M. Kleinberg. Two algorithms for nearest-neighbor search in high dimensions. In Frank Thomson Leighton and Peter W. Shor, editors, Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, El Paso, Texas, USA, May 4-6, 1997, pages 599 608. ACM, 1997. [KNPW11] Daniel M. Kane, Jelani Nelson, Ely Porat, and David P. Woodruff. Fast moment estimation in data streams in optimal space. In Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC), pages 745 754, 2011. [KNW10] Daniel M. Kane, Jelani Nelson, and David P. Woodruff. On the exact space complexity of sketching and streaming small norms. In Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1161 1178, 2010. [KOR00] Eyal Kushilevitz, Rafail Ostrovsky, and Yuval Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. SIAM J. Comput., 30(2):457 474, 2000. 
[LCLS17] Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable ad- versarial examples and black-box attacks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open Review.net, 2017. [LM19] Gábor Lugosi and Shahar Mendelson. Near-optimal mean estimators with respect to general norms. Probab. Theory Related Fields, 175(3-4):957 973, 2019. [LT11] Michel Ledoux and Michel Talagrand. Probability in Banach spaces. Classics in Mathematics. Springer-Verlag, Berlin, 2011. Isoperimetry and processes, Reprint of the 1991 edition. [Mei93] S. Meiser. Point location in arrangements of hyperplanes. Inform. and Comput., 106(2):286 303, 1993. [MNS11] Ilya Mironov, Moni Naor, and Gil Segev. Sketching in adversarial environments. SIAM J. Comput., 40(6):1845 1870, 2011. [MZ18] Shahar Mendelson and Nikitz Zhivotovskiy. Robust covariance estimation under L4-L2 norm equivalence. ar Xiv preprint ar Xiv:1809.10462, 2018. [Nol18] J. P. Nolan. Stable Distributions - Models for Heavy Tailed Data. Birkhauser, Boston, 2018. In progress, Chapter 1 online at http://fs2.american.edu/jpnolan/www/stable/stable.html. [Pag18] Rasmus Pagh. Coveringlsh: Locality-sensitive hashing without false negatives. ACM Trans. Algorithms, 14(3):29:1 29:17, 2018. [PMG16] Nicolas Papernot, Patrick D. Mc Daniel, and Ian J. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. Co RR, abs/1605.07277, 2016. [PMG+17] Nicolas Papernot, Patrick D. Mc Daniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Ramesh Karri, Ozgur Sinanoglu, Ahmad-Reza Sadeghi, and Xun Yi, editors, Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Asia CCS 2017, Abu Dhabi, United Arab Emirates, April 2-6, 2017, pages 506 519. ACM, 2017. [SDI08] Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. Nearest-neighbor methods in learning and vision. IEEE Trans. Neural Networks, 19(2):377, 2008. [Sim96] Jeffrey S. Simonoff. Smoothing methods in statistics. Springer Series in Statistics. Springer-Verlag, New York, 1996. [SW17] Piotr Sankowski and Piotr Wygocki. Approximate nearest neighbors search without false negatives for 2 for c > plog log n. In Yoshio Okamoto and Takeshi Tokuyama, editors, 28th International Symposium on Algorithms and Computation, ISAAC 2017, December 9-12, 2017, Phuket, Thailand, volume 92 of LIPIcs, pages 63:1 63:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. [TZ12] Mikkel Thorup and Yin Zhang. Tabulation-based 5-independent hashing with applica- tions to linear probing and second moment estimation. SIAM J. Comput., 41(2):293 331, 2012. [Ver18] Roman Vershynin. High-dimensional probability, volume 47 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2018. An introduction with applications in data science, With a foreword by Sara van de Geer. [Wei19] Alexander Wei. Optimal las vegas approximate near neighbors in p. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1794 1813, 2019. [WJ95] M. P. Wand and M. C. Jones. Kernel smoothing, volume 60 of Monographs on Statistics and Applied Probability. Chapman and Hall, Ltd., London, 1995. [YHZL19] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. IEEE Trans. 
Neural Networks Learn. Syst., 30(9):2805–2824, 2019. [Zol86] V. M. Zolotarev. One-dimensional stable distributions, volume 65 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1986. Translated from the Russian by H. H. McFaden, translation edited by Ben Silver.