# Differentially Private Bayesian Optimization

Matt J. Kusner (MKUSNER@WUSTL.EDU), Jacob R. Gardner (GARDNER.JAKE@WUSTL.EDU), Roman Garnett (GARNETT@WUSTL.EDU), Kilian Q. Weinberger (KILIAN@WUSTL.EDU)
Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO 63130

*Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s).*

## Abstract

Bayesian optimization is a powerful tool for fine-tuning the hyper-parameters of a wide variety of machine learning models. The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems. As machine learning becomes commonplace, Bayesian optimization becomes an attractive method for practitioners to automate the process of classifier hyper-parameter tuning. A key observation is that the data used for tuning models in these settings is often sensitive. Certain data, such as genetic predisposition, personal email statistics, and car accident history, if not properly private, may be at risk of being inferred from Bayesian optimization outputs. To address this, we introduce methods for releasing the best hyper-parameters and classifier accuracy privately. Leveraging the strong theoretical guarantees of differential privacy and known Bayesian optimization convergence bounds, we prove that under a GP assumption these private quantities are often near-optimal. Finally, even if this assumption is not satisfied, we can use different smoothness guarantees to protect privacy.

## 1. Introduction

Machine learning is increasingly used in application areas with sensitive data. For example, hospitals use machine learning to predict whether a patient is likely to be readmitted soon (Yu et al., 2013), webmail providers classify spam emails from non-spam (Weinberger et al., 2009), and insurance providers forecast the extent of bodily injury in car crashes (Chong et al., 2005). In these scenarios the data cannot be shared legally, but companies and hospitals may want to share hyper-parameters and validation accuracies through publications or other means. However, data-holders must be careful, as even a small amount of information can compromise privacy. Which hyper-parameter setting yields the highest accuracy can reveal sensitive information about individuals in the validation or training data set, reminiscent of the reconstruction attacks described by Dwork & Roth (2013) and Dinur & Nissim (2003). For example, imagine updated hyper-parameters are released right after a prominent public figure is admitted to a hospital. If a hyper-parameter is known to correlate strongly with a particular disease the patient is suspected to have, an attacker could draw a direct connection between the hyper-parameter value and the individual.

To prevent these sorts of attacks, we develop a set of algorithms that automatically fine-tune the hyper-parameters of a machine learning algorithm while provably preserving differential privacy (Dwork et al., 2006b). Our approach leverages recent results on Bayesian optimization (Snoek et al., 2012; Hutter et al., 2011; Bergstra & Bengio, 2012; Gardner et al., 2014), training a Gaussian process (GP) (Rasmussen & Williams, 2006) to accurately predict and maximize the validation gain of hyper-parameter settings.
We show that the GP model in Bayesian optimization allows us to release noisy final hyper-parameter settings to protect against the aforementioned privacy attacks, while only sacrificing a tiny, bounded amount of validation gain. Our privacy guarantees hold for releasing the best hyper-parameters and the best validation gain. Specifically, our contributions are as follows:

1. We derive, to the best of our knowledge, the first framework for Bayesian optimization with differential privacy guarantees, with and without observation noise.
2. We show that even if our validation gain is not drawn from a Gaussian process, we can guarantee differential privacy under different smoothness assumptions.

We begin with the background on Bayesian optimization and differential privacy that we will use to prove our guarantees.

## 2. Background

In general, our aim is to protect the privacy of a validation dataset of sensitive records $\mathcal{V} \subseteq \mathcal{X}$ (where $\mathcal{X}$ is the collection of all possible records) when the results of Bayesian optimization depend on $\mathcal{V}$.

**Bayesian optimization.** Our goal is to maximize an unknown function $f_{\mathcal{V}} : \Lambda \to \mathbb{R}$ that depends on some validation dataset $\mathcal{V} \subseteq \mathcal{X}$:

$$\max_{\lambda \in \Lambda} f_{\mathcal{V}}(\lambda). \tag{1}$$

It is important to point out that all of our results hold for the general setting of eq. (1), but throughout the paper we use the vocabulary of a common application: machine learning hyper-parameter tuning. In this case $f_{\mathcal{V}}(\lambda)$ is the gain of a learning algorithm, trained with hyper-parameters $\lambda \in \Lambda \subseteq \mathbb{R}^d$, evaluated on the validation dataset $\mathcal{V}$. As evaluating $f_{\mathcal{V}}$ is expensive (e.g., each evaluation requires training a learning algorithm), Bayesian optimization gives a procedure for selecting a small number of locations at which to sample $f_{\mathcal{V}}$: $\boldsymbol{\lambda}_T = [\lambda_1, \ldots, \lambda_T] \in \mathbb{R}^{d \times T}$. Specifically, given a current sample $\lambda_t$, we observe a validation gain $v_t$ such that $v_t = f_{\mathcal{V}}(\lambda_t) + \alpha_t$, where $\alpha_t \sim \mathcal{N}(0, \sigma^2)$ is Gaussian noise with possibly non-zero variance $\sigma^2$. Then, given $v_t$ and the previously observed values $v_1, \ldots, v_{t-1}$, Bayesian optimization updates its belief of $f_{\mathcal{V}}$ and samples a new hyper-parameter $\lambda_{t+1}$. Each step of the optimization proceeds in this way.

To decide which hyper-parameter to sample next, Bayesian optimization places a prior distribution over $f_{\mathcal{V}}$ and updates it after every (possibly noisy) function observation. One popular prior distribution over functions is the Gaussian process $\mathcal{GP}\big(\mu(\cdot), k(\cdot, \cdot)\big)$ (Rasmussen & Williams, 2006), parameterized by a mean function $\mu(\cdot)$ (we set $\mu = 0$, w.l.o.g.) and a kernel covariance function $k(\cdot, \cdot)$. Functions drawn from a Gaussian process have the property that any finite set of values of the function is normally distributed. Additionally, given samples $\boldsymbol{\lambda}_T = [\lambda_1, \ldots, \lambda_T]$ and observations $\mathbf{v}_T = [v_1, \ldots, v_T]$, the GP posterior mean and variance have closed forms:

$$
\begin{aligned}
\mu_T(\lambda) &= k(\lambda, \boldsymbol{\lambda}_T)(K_T + \sigma^2 I)^{-1}\mathbf{v}_T, \\
k_T(\lambda, \lambda') &= k(\lambda, \lambda') - k(\lambda, \boldsymbol{\lambda}_T)(K_T + \sigma^2 I)^{-1}k(\boldsymbol{\lambda}_T, \lambda'), \\
\sigma_T^2(\lambda) &= k_T(\lambda, \lambda),
\end{aligned} \tag{2}
$$

where $k(\lambda, \boldsymbol{\lambda}_T) \in \mathbb{R}^{1 \times T}$ is evaluated element-wise on each of the $T$ columns of $\boldsymbol{\lambda}_T$, and $\lambda' \in \Lambda$ is any hyper-parameter. As well, $K_T = k(\boldsymbol{\lambda}_T, \boldsymbol{\lambda}_T) \in \mathbb{R}^{T \times T}$ is evaluated on all hyper-parameter pairs from $\boldsymbol{\lambda}_T$. As more samples are observed, the mean function $\mu_T(\lambda)$ approaches $f_{\mathcal{V}}(\lambda)$. One well-known method to select hyper-parameters $\lambda$ maximizes the upper confidence bound (UCB) of the posterior GP model of $f_{\mathcal{V}}$ (Auer et al., 2002; Srinivas et al., 2010):

$$\lambda_{t+1} \in \arg\max_{\lambda \in \Lambda} \; \mu_t(\lambda) + \sqrt{\beta_{t+1}}\,\sigma_t(\lambda), \tag{3}$$

where $\beta_{t+1}$ is a parameter that trades off the exploitation of maximizing $\mu_t(\lambda)$ and the exploration of maximizing $\sigma_t(\lambda)$.
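To make eqs. (2) and (3) concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper) of the GP posterior update and UCB selection over a finite candidate set $\Lambda$; the squared-exponential kernel, length-scale `ell`, noise level, and the helper names `gp_posterior` and `ucb_next` are illustrative assumptions.

```python
import numpy as np

def sq_exp_kernel(A, B, ell=1.0):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

def gp_posterior(lam_T, v_T, Lam, sigma2=0.01, ell=1.0):
    """Posterior mean and standard deviation of eq. (2) at every candidate in Lam."""
    K_T = sq_exp_kernel(lam_T, lam_T, ell) + sigma2 * np.eye(len(lam_T))
    k_star = sq_exp_kernel(Lam, lam_T, ell)                # |Lam| x T cross-covariances
    mu = k_star @ np.linalg.solve(K_T, v_T)
    var = 1.0 - np.einsum("ij,ji->i", k_star, np.linalg.solve(K_T, k_star.T))
    return mu, np.sqrt(np.maximum(var, 0.0))               # k(lam, lam) = 1 for this kernel

def ucb_next(lam_T, v_T, Lam, beta, sigma2=0.01):
    """Select the next hyper-parameter by maximizing the UCB acquisition of eq. (3)."""
    mu, sd = gp_posterior(np.asarray(lam_T, float), np.asarray(v_T, float), Lam, sigma2)
    return Lam[np.argmax(mu + np.sqrt(beta) * sd)]
```

In practice the confidence parameter would be recomputed each round, e.g., `beta = 2 * np.log(len(Lam) * t**2 * np.pi**2 / (3 * delta))`, matching line 4 of Algorithm 1 below.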
Srinivas et al. (2010) proved that, given certain assumptions on $f_{\mathcal{V}}$ and fixed, non-zero observation noise ($\sigma^2 > 0$), selecting hyper-parameters $\lambda$ to maximize eq. (3) is a no-regret Bayesian optimization procedure: $\lim_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} \big(f_{\mathcal{V}}(\lambda^*) - f_{\mathcal{V}}(\lambda_t)\big) = 0$, where $\lambda^*$ is the maximizer of eq. (1). For the no-noise setting, de Freitas et al. (2012) give a UCB-based no-regret algorithm.

**Contributions.** Alongside maximizing $f_{\mathcal{V}}$, we would like to guarantee that if $f_{\mathcal{V}}$ depends on (sensitive) validation data, we can release information about $f_{\mathcal{V}}$ so that the data $\mathcal{V}$ remains private. Specifically, we may wish to release (a) our best guess $\hat{\lambda} \in \arg\max_{\lambda_t : t \leq T} f_{\mathcal{V}}(\lambda_t)$ of the true (unknown) maximizer $\lambda^*$ and (b) our best guess $f_{\mathcal{V}}(\hat{\lambda})$ of the true (also unknown) maximum objective $f_{\mathcal{V}}(\lambda^*)$. The primary question this work aims to answer is: how can we release private versions of $\hat{\lambda}$ and $f_{\mathcal{V}}(\hat{\lambda})$ that are close to their true values, or better, to $\lambda^*$ and $f_{\mathcal{V}}(\lambda^*)$? We give two answers to this question. The first makes a Gaussian process assumption on $f_{\mathcal{V}}$, which we describe immediately below. The second, described in Section 5, uses Lipschitz and convexity assumptions to guarantee privacy in the event the GP assumption does not hold.

**Setting.** For our first answer to this question, let us define a Gaussian process over hyper-parameters $\lambda, \lambda' \in \Lambda$ and datasets $\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}$ as follows: $\mathcal{GP}\big(0,\, k_1(\mathcal{V}, \mathcal{V}')\, k_2(\lambda, \lambda')\big)$. A prior of this form is known as a multi-task Gaussian process (Bonilla et al., 2008). Many choices for $k_1$ and $k_2$ are possible. The function $k_1(\mathcal{V}, \mathcal{V}')$ defines a set kernel (e.g., a function of the number of records that differ between $\mathcal{V}$ and $\mathcal{V}'$). For $k_2$, we focus on either the squared-exponential kernel, $k_2(\lambda, \lambda') = \exp\big(-\|\lambda - \lambda'\|_2^2 / (2\ell^2)\big)$, or Matérn kernels (e.g., for $\nu = 5/2$, $k_2(\lambda, \lambda') = \big(1 + \sqrt{5}r/\ell + 5r^2/(3\ell^2)\big)\exp(-\sqrt{5}r/\ell)$, where $r = \|\lambda - \lambda'\|_2$), for a fixed $\ell$, as they have known bounds on the maximum information gain (Srinivas et al., 2010). Note that, as defined, the kernel $k_2$ is normalized (i.e., $k_2(\lambda, \lambda) = 1$).

**Assumption 1.** We have a problem of type (1), where all possible dataset functions $[f_1, \ldots, f_{2^{|\mathcal{X}|}}]$ are GP distributed, $\mathcal{GP}\big(0,\, k_1(\mathcal{V}, \mathcal{V}')\, k_2(\lambda, \lambda')\big)$, for known kernels $k_1, k_2$, for all $\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}$ and $\lambda, \lambda' \in \Lambda$, where $|\Lambda| < \infty$.

Similar Gaussian process assumptions have been made in previous work (Srinivas et al., 2010). For a result in the no-noise observation setting, we will make use of the assumptions of de Freitas et al. (2012) for our privacy guarantees, as described in Section 4.

### 2.1. Differential Privacy

One of the most widely accepted frameworks for private data release is differential privacy (Dwork et al., 2006b), which has been shown to be robust to a variety of privacy attacks (Sweeney, 1997; Dinur & Nissim, 2003; Ganta et al., 2008; Narayanan & Shmatikov, 2008). Given an algorithm $\mathcal{A}$ that outputs a value $\lambda$ when run on dataset $\mathcal{V}$, the goal of differential privacy is to hide the effect of a small change in $\mathcal{V}$ on the output of $\mathcal{A}$. Equivalently, an attacker should not be able to tell whether a single private record was changed in $\mathcal{V}$ just by looking at the output of $\mathcal{A}$. If two datasets $\mathcal{V}, \mathcal{V}'$ differ in the value of a single individual, we will refer to them as neighboring datasets. Note that any non-trivial algorithm (i.e., an algorithm $\mathcal{A}$ that outputs different values on $\mathcal{V}$ and $\mathcal{V}'$ for some pair $\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}$) must include some amount of randomness to guarantee that such a change in $\mathcal{V}$ is unobservable in the output $\lambda$ of $\mathcal{A}$ (Dwork & Roth, 2013).
The level of privacy we wish to guarantee decides the amount of randomness we need to add to $\lambda$ (better privacy requires increased randomness). Formally, the definition of differential privacy is stated below.

**Definition 1.** A randomized algorithm $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private for $\epsilon, \delta \geq 0$ if for all $\lambda \in \mathrm{Range}(\mathcal{A})$ and for all neighboring datasets $\mathcal{V}, \mathcal{V}'$ (i.e., such that $\mathcal{V}$ and $\mathcal{V}'$ differ in the value of one record) we have that

$$\Pr[\mathcal{A}(\mathcal{V}) = \lambda] \;\leq\; e^{\epsilon}\,\Pr[\mathcal{A}(\mathcal{V}') = \lambda] + \delta. \tag{4}$$

The parameters $\epsilon, \delta$ guarantee how private $\mathcal{A}$ is; the smaller, the more private. The maximum privacy is $\epsilon = \delta = 0$, in which case eq. (4) holds with equality. This can be seen from the fact that $\mathcal{V}$ and $\mathcal{V}'$ can be swapped in the definition, and thus the inequality holds in both directions. If $\delta = 0$, we say the algorithm is simply $\epsilon$-differentially private. For a survey on differential privacy we refer the interested reader to Dwork & Roth (2013).

There are two popular methods for making an algorithm $\epsilon$-differentially private: (a) the Laplace mechanism (Dwork et al., 2006b), in which we add random noise to $\lambda$, and (b) the exponential mechanism (McSherry & Talwar, 2007), which draws a random output $\tilde{\lambda}$ such that $\tilde{\lambda} \approx \lambda$. For each mechanism we must define an intermediate quantity called the global sensitivity, describing how much $\mathcal{A}$ changes when $\mathcal{V}$ changes.

**Definition 2.** (Laplace mechanism) The global sensitivity of an algorithm $\mathcal{A}$ over all neighboring datasets $\mathcal{V}, \mathcal{V}'$ (i.e., $\mathcal{V}, \mathcal{V}'$ differ by the value of one record) is

$$\Delta_{\mathcal{A}} \;\triangleq\; \max_{\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}} \|\mathcal{A}(\mathcal{V}) - \mathcal{A}(\mathcal{V}')\|_1.$$

(Exponential mechanism) The global sensitivity of a function $q : \mathcal{X} \times \Lambda \to \mathbb{R}$ over all neighboring datasets $\mathcal{V}, \mathcal{V}'$ is

$$\Delta_q \;\triangleq\; \max_{\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}} \max_{\lambda \in \Lambda} |q(\mathcal{V}, \lambda) - q(\mathcal{V}', \lambda)|.$$

The Laplace mechanism hides the output of $\mathcal{A}$ by perturbing it with some amount of random noise.

**Definition 3.** Given a dataset $\mathcal{V}$ and an algorithm $\mathcal{A}$, the Laplace mechanism returns $\mathcal{A}(\mathcal{V}) + \omega$, where $\omega$ is a noise variable drawn from $\mathrm{Lap}(\Delta_{\mathcal{A}}/\epsilon)$, the Laplace distribution with scale parameter $\Delta_{\mathcal{A}}/\epsilon$ (and location parameter 0).

The exponential mechanism draws a slightly different $\tilde{\lambda}$ that is close to $\lambda$, the output of $\mathcal{A}$.

**Definition 4.** Given a dataset $\mathcal{V}$ and an algorithm $\mathcal{A}(\mathcal{V}) = \arg\max_{\lambda \in \Lambda} q(\mathcal{V}, \lambda)$, the exponential mechanism returns $\tilde{\lambda}$, where $\tilde{\lambda}$ is drawn from the distribution $\frac{1}{Z}\exp\big(\epsilon\, q(\mathcal{V}, \lambda)/(2\Delta_q)\big)$, and $Z$ is a normalizing constant.

Given $\Lambda$, a possible set of hyper-parameters, we derive methods for privately releasing the best hyper-parameters and the best function values of $f_{\mathcal{V}}$, approximately solving eq. (1). We first address the setting with observation noise ($\sigma^2 > 0$) in eq. (2) and then describe small modifications for the no-noise setting. For each setting we use the UCB sampling technique in eq. (3) to derive our private results.
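Before turning to our algorithms, here is a minimal Python sketch of the two mechanisms from Definitions 3 and 4 (our own illustration, not code from the paper); `sensitivity` stands in for $\Delta_{\mathcal{A}}$ or $\Delta_q$, which the following sections bound for Bayesian optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    """Definition 3: release value + Lap(sensitivity / epsilon) noise."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def exponential_mechanism(scores, sensitivity, epsilon):
    """Definition 4: sample index i with probability proportional to exp(eps * q_i / (2 * sensitivity))."""
    logits = epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()            # numerical stability; does not change the distribution
    probs = np.exp(logits)
    return rng.choice(len(logits), p=probs / probs.sum())
```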
## 3. With observation noise

In general cases of Bayesian optimization, observation noise occurs in a variety of real-world modeling settings, such as sensor measurements (Krause et al., 2008). In hyper-parameter tuning, noise in the validation gain may result from noisy validation or training features.

**Algorithm 1** Private Bayesian Optimization (noisy observations)
1: **Input:** $\mathcal{V}$; $\Lambda \subset \mathbb{R}^d$; $T$; $(\epsilon, \delta)$; $\sigma^2_{\mathcal{V},0}$; $\gamma_T$
2: $\mu_{\mathcal{V},0} = 0$
3: **for** $t = 1, \ldots, T$ **do**
4: &nbsp;&nbsp; $\beta_t = 2\log\big(|\Lambda| t^2 \pi^2 / (3\delta)\big)$
5: &nbsp;&nbsp; $\lambda_t \leftarrow \arg\max_{\lambda \in \Lambda} \mu_{\mathcal{V},t-1}(\lambda) + \sqrt{\beta_t}\,\sigma_{\mathcal{V},t-1}(\lambda)$
6: &nbsp;&nbsp; Observe validation gain $v_{\mathcal{V},t}$ given $\lambda_t$
7: &nbsp;&nbsp; Update $\mu_{\mathcal{V},t}$ and $\sigma^2_{\mathcal{V},t}$ according to eq. (2)
8: **end for**
9: $c = 2\sqrt{\big(1 - k_1(\mathcal{V}, \mathcal{V}')\big)\log(3|\Lambda|/\delta)}$
10: $q = \sigma\sqrt{8\log(3/\delta)}$
11: $C_1 = 8/\log(1 + \sigma^{-2})$
12: Draw $\tilde{\lambda} \in \Lambda$ w.p. $\Pr[\tilde{\lambda}] \propto \exp\big(\epsilon\,\mu_{\mathcal{V},T}(\tilde{\lambda})\,/\,(2(2\sqrt{\beta_{T+1}} + c))\big)$
13: $\hat{v} = \max_{t \leq T} v_{\mathcal{V},t}$
14: Draw $\theta \sim \mathrm{Lap}\big((\sqrt{C_1 T \beta_T \gamma_T}/T + c + q)/\epsilon\big)$
15: $\tilde{v} = \hat{v} + \theta$
16: **Return:** $\tilde{\lambda}, \tilde{v}$

In the sections that follow, although the quantities $f, \mu, \sigma, v$ all depend on the validation dataset $\mathcal{V}$, for notational simplicity we will occasionally omit the subscript $\mathcal{V}$. Similarly, for $\mathcal{V}'$ we will often write $f', \mu', \sigma'^2, v'$.

### 3.1. Private near-maximum hyper-parameters

In this section we guarantee that releasing $\tilde{\lambda}$ in Algorithm 1 is private (Theorem 1) and has high utility (Theorem 2). Our proof strategy is as follows: we first bound the global sensitivity of $\mu_T(\lambda)$ with probability at least $1 - \delta$. We then show that releasing $\tilde{\lambda}$ via the exponential mechanism is $(\epsilon, \delta)$-differentially private. Finally, we show that $\mu_T(\tilde{\lambda})$ is close to $\max_{\lambda \in \Lambda} \mu_T(\lambda)$.

**Global sensitivity.** As a first step, we bound the global sensitivity of $\mu_T(\lambda)$.

**Theorem 1.** Given Assumption 1, for any two neighboring datasets $\mathcal{V}, \mathcal{V}'$ and for all $\lambda \in \Lambda$, with probability at least $1 - \delta$ we have the following upper bound on the global sensitivity (in the exponential-mechanism sense) of $\mu_T$:

$$|\mu'_T(\lambda) - \mu_T(\lambda)| \;\leq\; 2\sqrt{\beta_{T+1}} + \sigma_1\sqrt{2\log(3|\Lambda|/\delta)},$$

where $\sigma_1^2 \triangleq 2\big(1 - k_1(\mathcal{V}, \mathcal{V}')\big)$ and $\beta_t = 2\log\big(|\Lambda| t^2 \pi^2/(3\delta)\big)$.

*Proof.* Note that, by applying the triangle inequality twice, for all $\lambda \in \Lambda$,

$$|\mu'_T(\lambda) - \mu_T(\lambda)| \;\leq\; |\mu'_T(\lambda) - f'(\lambda)| + |f'(\lambda) - f(\lambda)| + |f(\lambda) - \mu_T(\lambda)|.$$

We can now bound each of the terms on the right-hand side (RHS) with probability at least $1 - \delta/3$. According to Srinivas et al. (2010), Lemma 5.1, we obtain $|\mu'_T(\lambda) - f'(\lambda)| \leq \sqrt{\beta_{T+1}}\,\sigma'_T(\lambda)$. The same can be applied to $|f(\lambda) - \mu_T(\lambda)|$. As $\sigma'_T(\lambda) \leq 1$, because $k(\lambda, \lambda) = 1$, we can upper bound the sum of both terms by $2\sqrt{\beta_{T+1}}$. In order to bound the remaining (middle) term on the RHS, recall that for a random variable $Z \sim \mathcal{N}(0,1)$ we have $\Pr[|Z| > \gamma] \leq e^{-\gamma^2/2}$. For variables $Z_1, \ldots, Z_n \sim \mathcal{N}(0,1)$, we have, by the union bound, that $\Pr[\forall i,\, |Z_i| \leq \gamma] \geq 1 - n e^{-\gamma^2/2} \geq 1 - \delta/3$. If we set $Z = \big(f(\lambda) - f'(\lambda)\big)/\sigma_1$ (which is standard normal under Assumption 1, since $\sigma_1^2$ is the variance of $f(\lambda) - f'(\lambda)$ for normalized $k_1$) and $n = |\Lambda|$, we obtain $\gamma = \sqrt{2\log(3|\Lambda|/\delta)}$, which completes the proof.

We remark that all of the quantities in Theorem 1 are either given or selected by the modeler (e.g., $\delta$, $T$). Given this upper bound, we can apply the exponential mechanism to release $\tilde{\lambda}$ privately, as per Definition 1:

**Corollary 1.** Let $\mathcal{A}(\mathcal{V})$ denote Algorithm 1 applied on dataset $\mathcal{V}$. Given Assumption 1, $\tilde{\lambda}$ is $(\epsilon, \delta)$-differentially private, i.e., $\Pr[\mathcal{A}(\mathcal{V}) = \tilde{\lambda}] \leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{V}') = \tilde{\lambda}] + \delta$, for any pair of neighboring datasets $\mathcal{V}, \mathcal{V}'$.

We leave the proof of Corollary 1 to the supplementary material. As described in Srinivas et al. (2010), lines 3–8 (UCB) of Algorithm 1 constitute a no-regret Bayesian optimization procedure: $\lim_{T\to\infty} \sum_{t=1}^{T}\big(f(\lambda^*) - f(\lambda_t)\big)/T = 0$. Further, with high probability (as given by the Gaussian tail probability bounds), for the selected hyper-parameters $\lambda_i \in \{\lambda_1, \ldots, \lambda_T\}$ we have that $\mu_T(\lambda_i) \approx f(\lambda_i)$. Therefore, it is reasonable to select $\arg\max_{\lambda \in \Lambda} \mu_T(\lambda)$ to optimize (1). McSherry & Talwar (2007) show that the exponential mechanism (line 12) selects a noisy maximum as follows.

**Theorem 2.** (McSherry & Talwar, 2007) The exponential mechanism selects $\tilde{\lambda}$ whose value $\mu_T(\tilde{\lambda})$ is close to the maximum $\max_{\lambda \in \Lambda} \mu_T(\lambda)$:

$$\max_{\lambda \in \Lambda} \mu_T(\lambda) - \mu_T(\tilde{\lambda}) \;\leq\; \frac{2\Delta}{\epsilon}\big(\log|\Lambda| + a\big), \tag{5}$$

w.p. at least $1 - (\delta + e^{-a})$, where $\Delta = 2\sqrt{\beta_{T+1}} + c$ (for $\beta_{T+1}$ and $c$ defined as in Algorithm 1).

Notice that as $T$ increases, so does $\Delta$ in the above inequality, as the distribution in line 12 becomes more uniform. However, because $\Delta$ increases much more slowly than $T$ (i.e., $\Delta$ is asymptotically $O(\sqrt{\log((T+1)^2)})$), $T$ should be selected as large as possible to achieve small average regret.
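A sketch of how lines 9 and 12 of Algorithm 1 might be implemented (our own reading of the algorithm, not the authors' code). The exponential-mechanism scale is taken to be $2\Delta = 2(2\sqrt{\beta_{T+1}} + c)$ as in Theorem 2, and `k1_gap` $= 1 - k_1(\mathcal{V}, \mathcal{V}')$ is treated as a known input; all names are illustrative.

```python
import numpy as np

def private_best_hyperparameter(mu_T, Lam, T, k1_gap, epsilon, delta,
                                rng=np.random.default_rng()):
    """Release lambda-tilde via the exponential mechanism (Algorithm 1, line 12).

    mu_T:   posterior means mu_{V,T}(lam) for every candidate lam in Lam
    k1_gap: an assumed upper bound on 1 - k_1(V, V') over neighboring datasets
    """
    n = len(Lam)
    beta_T1 = 2.0 * np.log(n * (T + 1) ** 2 * np.pi ** 2 / (3.0 * delta))  # line 4 at t = T + 1
    c = 2.0 * np.sqrt(k1_gap * np.log(3.0 * n / delta))                    # line 9
    sensitivity = 2.0 * np.sqrt(beta_T1) + c                               # Theorem 1 bound
    logits = epsilon * np.asarray(mu_T, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                                                 # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return Lam[rng.choice(n, p=probs)]
```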
### 3.2. Private near-maximum validation gain

In this section we demonstrate that releasing the validation gain $\tilde{v}$ in Algorithm 1 is private (Theorem 3) and that the noise we add to ensure privacy is bounded with high probability (Theorem 4). As in the previous section, our approach is to first derive the global sensitivity of the maximum $\hat{v}$ found by Algorithm 1. We then show that releasing $\tilde{v}$ is $(\epsilon, \delta)$-differentially private via the Laplace mechanism. Surprisingly, we can also show that $\tilde{v}$ is close to $f(\lambda^*)$.

**Global sensitivity.** We bound the global sensitivity of the maximum $\hat{v}$ found with Bayesian optimization and UCB.

**Theorem 3.** Given Assumption 1 and neighboring datasets $\mathcal{V}, \mathcal{V}'$, we have the following global sensitivity bound (in the Laplace-mechanism sense) for the maximum observed gain, w.p. at least $1 - \delta$:

$$\Big|\max_{t \leq T} v'_t - \max_{t \leq T} v_t\Big| \;\leq\; \frac{\sqrt{C_1 T \beta_T \gamma_T}}{T} + c + q$$

(for $c$, $q$, $\beta_T$ as in Algorithm 1), where the maximum GP information gain $\gamma_T$ is bounded above for the squared-exponential and Matérn kernels (Srinivas et al., 2010).

*Proof.* For notational simplicity, let us denote the regret term as $\Omega \triangleq \sqrt{C_1 T \beta_T \gamma_T}$. Then, from Theorem 1 in Srinivas et al. (2010), we have that

$$\frac{\Omega}{T} \;\geq\; f(\lambda^*) - \frac{1}{T}\sum_{t=1}^{T} f(\lambda_t) \;\geq\; f(\lambda^*) - \max_{t \leq T} f(\lambda_t). \tag{6}$$

This implies $f(\lambda^*) \leq \max_{t \leq T} f(\lambda_t) + \Omega/T$ with probability at least $1 - \delta/3$ (with the appropriate choice of $\beta_T$). Recall that in the proof of Theorem 1 we showed that $|f(\lambda) - f'(\lambda)| \leq c$ for all $\lambda$ with probability at least $1 - \delta/3$ ($c$ given in Algorithm 1). Writing $\lambda^*$ and $\lambda'^*$ for the maximizers of $f$ and $f'$, and $\lambda'_t$ for the points selected when Algorithm 1 is run on $\mathcal{V}'$, these facts imply the following two chains of inequalities with probability greater than $1 - 2\delta/3$:

$$f'(\lambda'^*) - c \;\leq\; f(\lambda'^*) \;\leq\; f(\lambda^*) \;<\; \max_{t \leq T} f(\lambda_t) + \frac{\Omega}{T},$$
$$f(\lambda^*) - c \;\leq\; f'(\lambda^*) \;\leq\; f'(\lambda'^*) \;<\; \max_{t \leq T} f'(\lambda'_t) + \frac{\Omega}{T}.$$

These, in turn, imply the two inequalities

$$\max_{t \leq T} f'(\lambda'_t) \;\leq\; f'(\lambda'^*) \;<\; \max_{t \leq T} f(\lambda_t) + \frac{\Omega}{T} + c,$$
$$\max_{t \leq T} f(\lambda_t) \;\leq\; f(\lambda^*) \;<\; \max_{t \leq T} f'(\lambda'_t) + \frac{\Omega}{T} + c,$$

which together give $\big|\max_{t \leq T} f'(\lambda'_t) - \max_{t \leq T} f(\lambda_t)\big| \leq \Omega/T + c$. That is, the global sensitivity of $\max_{t \leq T} f(\lambda_t)$ is bounded.

Given the sensitivity of the maximum of $f$, we can readily derive the sensitivity of the maximum observed gain. First note that, by the triangle inequality,

$$\Big|\max_{t \leq T} v'_t - \max_{t \leq T} v_t\Big| \;\leq\; \Big|\max_{t \leq T} v_t - \max_{t \leq T} f(\lambda_t)\Big| + \Big|\max_{t \leq T} v'_t - \max_{t \leq T} f'(\lambda'_t)\Big| + \Big|\max_{t \leq T} f'(\lambda'_t) - \max_{t \leq T} f(\lambda_t)\Big|.$$

We have just bounded the final term on the right-hand side. For the first term, let $t_{v,\max} = \arg\max_{t \leq T} v_t$ be the index of the best observed validation gain and $t_{f,\max} = \arg\max_{t \leq T} f(\lambda_t)$ be the index of the best true validation gain. The first term is upper bounded by $|\alpha| \triangleq \max\{|\alpha_{t_{v,\max}}|, |\alpha_{t_{f,\max}}|\}$ (this can be easily seen by enumerating the cases (a) $t_{v,\max} = t_{f,\max}$ and (b) $t_{v,\max} \neq t_{f,\max}$). Similarly, $|\alpha'|$ (defined in the same way) upper bounds the second term. Let $\hat{\alpha} = \max\{|\alpha|, |\alpha'|\}$, so that

$$\Big|\max_{t \leq T} v'_t - \max_{t \leq T} v_t\Big| \;\leq\; \frac{\Omega}{T} + c + 2\hat{\alpha}.$$

Although $\hat{\alpha}$ can be arbitrarily large, recall that for $Z \sim \mathcal{N}(0,1)$ we have $\Pr[|Z| \leq \gamma] \geq 1 - e^{-\gamma^2/2} \geq 1 - \delta/3$. Therefore, setting $Z = \hat{\alpha}/\sigma$ gives $\gamma = \sqrt{2\log(3/\delta)}$, which implies $2\hat{\alpha} \leq \sigma\sqrt{8\log(3/\delta)} = q$ with probability at least $1 - \delta/3$. Therefore, if Theorem 1 of Srinivas et al. (2010) and the bound on $|f(\lambda) - f'(\lambda)|$ hold together with probability at least $1 - 2\delta/3$ as described above, the theorem follows directly.

As in Theorem 1, each quantity in the above bound is either given in Algorithm 1 ($\beta_T$, $c$, $q$), given by previous results (Srinivas et al., 2010) ($\gamma_T$, $C_1$), or specified by the modeler ($T$, $\delta$). Now that we have a bound on the sensitivity of the maximum observed gain, we use the Laplace mechanism to prove our privacy guarantee (proof in the supplementary material):

**Corollary 2.** Let $\mathcal{A}(\mathcal{V})$ denote Algorithm 1 run on dataset $\mathcal{V}$. Given Assumption 1, releasing $\tilde{v}$ is $(\epsilon, \delta)$-differentially private, i.e., $\Pr[\mathcal{A}(\mathcal{V}) = \tilde{v}] \leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{V}') = \tilde{v}] + \delta$.
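The corresponding sketch for lines 13–15 of Algorithm 1, using the Theorem 3 sensitivity. The exact form of the Laplace scale is our reading of Algorithm 1 rather than a verbatim transcription; $\gamma_T$ must be supplied from a kernel-specific bound (Srinivas et al., 2010), and `k1_gap` is again an assumed input.

```python
import numpy as np

def private_best_gain(v_obs, n_candidates, gamma_T, sigma2, k1_gap, epsilon, delta,
                      rng=np.random.default_rng()):
    """Release the noisy best observed gain (Algorithm 1, lines 13-15).

    v_obs:   observed gains v_1, ..., v_T
    gamma_T: an upper bound on the maximum information gain (kernel-specific; Srinivas et al., 2010)
    k1_gap:  an assumed upper bound on 1 - k_1(V, V') over neighboring datasets
    """
    T = len(v_obs)
    beta_T = 2.0 * np.log(n_candidates * T ** 2 * np.pi ** 2 / (3.0 * delta))  # line 4 at t = T
    c = 2.0 * np.sqrt(k1_gap * np.log(3.0 * n_candidates / delta))             # line 9
    q = np.sqrt(sigma2 * 8.0 * np.log(3.0 / delta))                            # line 10
    C1 = 8.0 / np.log(1.0 + 1.0 / sigma2)                                      # line 11
    sensitivity = np.sqrt(C1 * T * beta_T * gamma_T) / T + c + q               # Theorem 3
    return max(v_obs) + rng.laplace(scale=sensitivity / epsilon)               # lines 14-15
```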
Further, as the Laplace distribution has exponential tails, the noise we add to obtain $\tilde{v}$ is small and shrinks as $T$ increases. In fact, we can show that this noisy validation gain is close to the optimal noise-free validation gain.

**Theorem 4.** Given Assumption 1, we have

$$|\tilde{v} - f(\lambda^*)| \;\leq\; \sigma\sqrt{8\log(1/\delta)} + \frac{\Omega}{T} + \frac{a}{\epsilon}\Big(\frac{\Omega}{T} + c + q\Big)$$

with probability at least $1 - (\delta + e^{-a})$, for $\Omega = \sqrt{C_1 T \beta_T \gamma_T}$.

*Proof.* Let $Z$ be a Laplace random variable with scale parameter $b$ and location parameter 0, $Z \sim \mathrm{Lap}(b)$. Then $\Pr[|Z| \leq ab] = 1 - e^{-a}$. Thus, in Algorithm 1, $|\tilde{v} - \max_{t \leq T} v_t| \leq ab$ for $b = \big(\frac{\Omega}{T} + c + q\big)/\epsilon$ with probability at least $1 - e^{-a}$. Let $t_{f,\max} = \arg\max_{t \leq T} f(\lambda_t)$. Further observe that

$$ab \;\geq\; \max_{t \leq T} v_t - \tilde{v} \;\geq\; \Big(\max_{t \leq T} f(\lambda_t) + \alpha_{t_{f,\max}}\Big) - \tilde{v} \;\geq\; f(\lambda^*) - \frac{\Omega}{T} + \alpha_{t_{f,\max}} - \tilde{v}, \tag{7}$$

where the second and third inequalities follow from the proof of Theorem 3 (using the regret bound of Srinivas et al. (2010), Theorem 1). Note that the third inequality holds with probability greater than $1 - \delta/2$ (given $\beta_t$ in Algorithm 1). The final inequality implies $f(\lambda^*) \leq \tilde{v} - \alpha_{t_{f,\max}} + \frac{\Omega}{T} + ab$. Let $t_{v,\max} = \arg\max_{t \leq T} v_t$. Also note that

$$ab \;\geq\; \tilde{v} - \max_{t \leq T} v_t \;=\; \tilde{v} - \big(f(\lambda_{t_{v,\max}}) + \alpha_{t_{v,\max}}\big) \;\geq\; \tilde{v} - \Big(\max_{t \leq T} f(\lambda_t) + \alpha_{t_{v,\max}}\Big) \;\geq\; \tilde{v} - f(\lambda^*) - \frac{\Omega}{T} - \alpha_{t_{v,\max}}. \tag{8}$$

This implies that $f(\lambda^*) \geq \tilde{v} - \alpha_{t_{v,\max}} - \frac{\Omega}{T} - ab$. Thus we have that $|\tilde{v} - f(\lambda^*)| \leq \alpha_{t_{f,\max}} + \alpha_{t_{v,\max}} + \frac{\Omega}{T} + ab$. Finally, because $\alpha_{t_{f,\max}}$ and $\alpha_{t_{v,\max}}$ could be arbitrarily large, we give a high-probability upper bound on these quantities. Let $\hat{\alpha} \triangleq \max\{\alpha_{t_{f,\max}}, \alpha_{t_{v,\max}}\}$. Again recall that for $Z \sim \mathcal{N}(0,1)$ we have the tail probability bound $\Pr[Z \leq \gamma] \geq 1 - \tfrac{1}{2}e^{-\gamma^2/2} \geq 1 - \delta/2$. Therefore, setting $Z = \hat{\alpha}/\sigma$, we arrive at $\gamma = \sqrt{2\log(1/\delta)}$. This entails that $2\hat{\alpha} \leq \sigma\sqrt{8\log(1/\delta)}$, and the proposed bound in Theorem 4 follows.

The right-hand side of Theorem 4 becomes smaller as $T$ increases, so $T$ should be set as large as possible. We note that, because releasing either $\tilde{\lambda}$ or $\tilde{v}$ is $(\epsilon, \delta)$-differentially private by Corollaries 1 and 2, releasing both private quantities in Algorithm 1 guarantees $(2\epsilon, 2\delta)$-differential privacy for the validation dataset $\mathcal{V}$. This is due to the composition properties of $(\epsilon, \delta)$-differential privacy (Dwork et al., 2006a); in fact, stronger composition results can be shown (Dwork & Roth, 2013).

## 4. Without observation noise

In hyper-parameter tuning it may be reasonable to assume that we can observe function evaluations exactly: $v_{\mathcal{V},t} = f_{\mathcal{V}}(\lambda_t)$. First note that we can use the same algorithm to report the maximizing hyper-parameter $\tilde{\lambda}$ in the no-noise setting; indeed, Theorems 1 and 2 still hold. However, we cannot readily report a private maximum of $f$, as the information gain $\gamma_T$ in Theorems 3 and 4 approaches infinity as $\sigma^2 \to 0$. Therefore, we extend the results of the previous section to the exact-observation case via the regret bounds of de Freitas et al. (2012). In this setting, the authors show that one can achieve exponentially decreasing regret for each sample: $f(\lambda^*) - f(\lambda_t) < O(e^{-t/\log t})$. Our strategy is therefore to select the last sample and add enough noise to cover a change in a validation record. Algorithm 2 implements this strategy for privatizing the maximum of $f$.

**Algorithm 2** Private Bayesian Optimization (noise-free observations)
1: **Input:** $\mathcal{V}$; $\Lambda \subset \mathbb{R}^d$; $T$; $(\epsilon, \delta)$; $A, \tau$; the assumptions on $f_{\mathcal{V}}$ of de Freitas et al. (2012)
2: Observe noise-free samples $f_{\mathcal{V}}(\lambda_1), \ldots, f_{\mathcal{V}}(\lambda_T)$ using the method of de Freitas et al. (2012)
3: $c = 2\sqrt{\big(1 - k_1(\mathcal{V}, \mathcal{V}')\big)\log(2|\Lambda|/\delta)}$
4: Draw $\theta \sim \mathrm{Lap}\Big(\tfrac{1}{\epsilon}\big(A e^{-T\tau/(\log T)^{d/4}} + c\big)\Big)$
5: **Return:** $\tilde{f} = f_{\mathcal{V}}(\lambda_T) + \theta$
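A sketch of the noise-free release in Algorithm 2, under the same caveats as before: the constants $A$ and $\tau$ come from de Freitas et al. (2012) and are assumed to be given, and all names are illustrative.

```python
import numpy as np

def private_gain_noise_free(f_last, T, d, n_candidates, A, tau, k1_gap, epsilon, delta,
                            rng=np.random.default_rng()):
    """Release f_V(lambda_T) + Laplace noise (structure of Algorithm 2).

    f_last: the final noise-free evaluation f_V(lambda_T)
    A, tau: regret constants from de Freitas et al. (2012), assumed given
    d:      dimension of the hyper-parameter space
    k1_gap: an assumed upper bound on 1 - k_1(V, V') over neighboring datasets
    """
    c = 2.0 * np.sqrt(k1_gap * np.log(2.0 * n_candidates / delta))      # line 3
    regret = A * np.exp(-T * tau / np.log(T) ** (d / 4.0))              # vanishing regret term
    return f_last + rng.laplace(scale=(regret + c) / epsilon)           # lines 4-5
```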
### 4.1. Private near-maximum validation gain

We demonstrate that releasing $\tilde{f}$ in Algorithm 2 is private (Corollary 3) and that only a small amount of noise is added to make $\tilde{f}$ private (Theorem 6). To do so, we derive the global sensitivity of $f_{\mathcal{V}}(\lambda_T)$ in Algorithm 2, independent of the maximum information gain $\gamma_T$, via de Freitas et al. (2012). We then prove that releasing $\tilde{f}$ is $(\epsilon, \delta)$-differentially private and that $\tilde{f}$ is close to the optimal value $f(\lambda^*)$.

**Global sensitivity.** The following theorem gives a bound on the global sensitivity of the final evaluation $f_{\mathcal{V}}(\lambda_T)$.

**Theorem 5.** Given Assumption 1 and the assumptions in Theorem 2 of de Freitas et al. (2012), for neighboring datasets $\mathcal{V}, \mathcal{V}'$ we have the following global sensitivity bound (in the Laplace-mechanism sense), w.p. at least $1 - \delta$:

$$|f'(\lambda_T) - f(\lambda_T)| \;\leq\; A e^{-T\tau/(\log T)^{d/4}} + c,$$

for $c = 2\sqrt{\big(1 - k_1(\mathcal{V}, \mathcal{V}')\big)\log(2|\Lambda|/\delta)}$, given constants $A$ and $\tau$ in de Freitas et al. (2012).

We leave the proof to the supplementary material. Given this sensitivity, we may apply the Laplace mechanism to release $\tilde{f}$.

**Corollary 3.** Let $\mathcal{A}(\mathcal{V})$ denote Algorithm 2 run on dataset $\mathcal{V}$. Given Assumption 1 and that $f$ satisfies the assumptions of de Freitas et al. (2012), $\tilde{f}$ is $(\epsilon, \delta)$-differentially private with respect to any neighboring dataset $\mathcal{V}'$, i.e., $\Pr[\mathcal{A}(\mathcal{V}) = \tilde{f}] \leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{V}') = \tilde{f}] + \delta$.

Even though we must add noise to the final evaluation, we can show that $\tilde{f}$ is still close to the optimal value $f(\lambda^*)$.

**Theorem 6.** Given the assumptions of Corollary 3, we have the following utility guarantee for Algorithm 2:

$$|\tilde{f} - f(\lambda^*)| \;\leq\; \Omega + \frac{a}{\epsilon}\big(\Omega + c\big)$$

w.p. at least $1 - (\delta + e^{-a})$, for $\Omega = A e^{-T\tau/(\log T)^{d/4}}$.

We prove Corollary 3 and Theorem 6 in the supplementary material. As in the noisy setting, selecting $T$ as large as possible is the best strategy, as this reduces the right-hand side of Theorem 6 (less Laplace noise is required). We have thus shown that in both the noisy and noise-free settings it is possible to release private, high-quality hyper-parameter settings $\tilde{\lambda}$ as well as private, near-optimal function evaluations $\tilde{v}, \tilde{f}$.

## 5. Without the GP assumption

Even if our true validation score $f$ is not drawn from a Gaussian process (Assumption 1), we can still guarantee differential privacy when releasing its value after Bayesian optimization, $f_{BO} = \max_{t \leq T} f(\lambda_t)$. In this section we describe a different functional assumption on $f$ that also yields differentially private Bayesian optimization for the case of machine learning hyper-parameter tuning. Assume we have a (non-sensitive) training set $\mathcal{T} = \{(x_i, y_i)\}_{i=1}^{n}$ which, given a hyper-parameter $\lambda$, produces a model $w_\lambda$ from the following optimization:

$$w_\lambda = \arg\min_{w} \; \underbrace{\frac{\lambda}{2}\|w\|_2^2 + \frac{1}{n}\sum_{i=1}^{n}\ell(w, x_i, y_i)}_{O_\lambda(w)}, \tag{9}$$

where $\ell$ is a training loss function (e.g., logistic loss, hinge loss). Given a (sensitive) validation set $\mathcal{V} = \{(x_i, y_i)\}_{i=1}^{m} \subseteq \mathcal{X}$, we would like to use Bayesian optimization to maximize a validation score $f_{\mathcal{V}}$.

**Assumption 2.** Our true validation score $f_{\mathcal{V}}$ is

$$f_{\mathcal{V}}(w_\lambda) = \frac{1}{m}\sum_{i=1}^{m} g(w_\lambda, x_i, y_i),$$

where $g(\cdot)$ is a validation loss function that is $L$-Lipschitz in $w_\lambda$ (e.g., ramp loss, normalized sigmoid (Huang et al., 2014)). Additionally, the training model $w_\lambda$ is the minimizer of eq. (9) for a training loss $\ell(\cdot)$ that is 1-Lipschitz in $w_\lambda$ and convex (e.g., logistic loss, hinge loss).

Algorithm 3, given below, describes a procedure for privately releasing the best validation accuracy $f_{BO}$ given Assumption 2. Unlike the previous algorithms, we may run Bayesian optimization in Algorithm 3 with any acquisition function (e.g., expected improvement (Mockus et al., 1978), UCB) and privacy is still guaranteed.
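Before the listing of Algorithm 3, here is a toy instantiation of eq. (9) and a validation score in the spirit of Assumption 2 (our own illustration, not the paper's code): a logistic training loss, which is convex and 1-Lipschitz in $w$ when features are scaled so that $\|x\|_2 \leq 1$, and a sigmoid validation gain, which is Lipschitz in $w$ under the same scaling. Labels are assumed to be in $\{-1, +1\}$.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def train_model(lam, X, y):
    """Minimize the regularized objective of eq. (9) with a logistic training loss
    (convex; 1-Lipschitz in w if the rows of X have unit norm)."""
    n, p = X.shape
    def objective(w):
        margins = y * (X @ w)
        return 0.5 * lam * (w @ w) + np.mean(np.logaddexp(0.0, -margins))
    return minimize(objective, np.zeros(p), method="L-BFGS-B").x

def validation_score(w, X_val, y_val):
    """A smooth, Lipschitz validation gain in the spirit of Assumption 2: mean sigmoid of the margin."""
    return np.mean(expit(y_val * (X_val @ w)))
```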
**Algorithm 3** Private Bayesian Optimization (Lipschitz and convex)
1: **Input:** $\mathcal{T}$ of size $n$; $\mathcal{V}$ of size $m$; $\Lambda$; $\lambda_{\min}$; $\lambda_{\max}$; $\epsilon$; $T$; $L$; $d$
2: Run Bayesian optimization for $T$ timesteps, observing $f_{\mathcal{V}}(w_{\lambda_1}), \ldots, f_{\mathcal{V}}(w_{\lambda_T})$ for $\{\lambda_1, \ldots, \lambda_T\} = \Lambda_{\mathcal{V},T} \subseteq \Lambda$
3: $f_{BO} = \max_{t \leq T} f_{\mathcal{V}}(w_{\lambda_t})$
4: $g^* = \max_{(x,y)\in\mathcal{X},\, w\in\mathbb{R}^p} g(w, x, y)$
5: Draw $\theta \sim \mathrm{Lap}\Big(\tfrac{1}{\epsilon}\Big[\min\big\{\tfrac{g^*}{m}, \tfrac{L}{m\lambda_{\min}}\big\} + \tfrac{(\lambda_{\max} - \lambda_{\min})L}{\lambda_{\max}\lambda_{\min}}\Big]\Big)$
6: **Return:** $\tilde{f}_L = f_{BO} + \theta$

Similar to Algorithms 1 and 2, we use the Laplace mechanism to mask the possible change in validation accuracy when a single record is changed in $\mathcal{V}$. Different from the related work of Chaudhuri & Vinterbo (2013), changing $\mathcal{V}$ to a neighboring dataset $\mathcal{V}'$ may also lead Bayesian optimization to search different hyper-parameters, $\Lambda_{\mathcal{V},T}$ vs. $\Lambda_{\mathcal{V}',T}$. Therefore, we must bound the total global sensitivity of $f$ with respect to both $\mathcal{V}$ and $\lambda$.

**Definition 5.** The total global sensitivity of $f$ over all neighboring datasets $\mathcal{V}, \mathcal{V}'$ is

$$\Delta f \;\triangleq\; \max_{\mathcal{V}, \mathcal{V}' \subseteq \mathcal{X}} \max_{\lambda, \lambda' \in \Lambda} |f_{\mathcal{V}}(w_\lambda) - f_{\mathcal{V}'}(w_{\lambda'})|.$$

In the following theorem we demonstrate that we can bound the change in $f$ for arbitrary $\lambda < \lambda'$ (w.l.o.g.).

**Theorem 7.** Given Assumption 2, for neighboring $\mathcal{V}, \mathcal{V}'$ and arbitrary $\lambda < \lambda'$ we have that

$$|f_{\mathcal{V}}(w_\lambda) - f_{\mathcal{V}'}(w_{\lambda'})| \;\leq\; \frac{(\lambda' - \lambda)L}{\lambda'\lambda} + \min\Big\{\frac{g^*}{m}, \frac{L}{m\lambda_{\min}}\Big\},$$

where $L$ is the Lipschitz constant of $f$, $m$ is the size of $\mathcal{V}$, and $g^* = \max_{(x,y)\in\mathcal{X},\, w\in\mathbb{R}^p} g(w, x, y)$.

*Proof.* Applying the triangle inequality yields

$$|f_{\mathcal{V}}(w_\lambda) - f_{\mathcal{V}'}(w_{\lambda'})| \;\leq\; |f_{\mathcal{V}}(w_\lambda) - f_{\mathcal{V}}(w_{\lambda'})| + |f_{\mathcal{V}}(w_{\lambda'}) - f_{\mathcal{V}'}(w_{\lambda'})|.$$

The second term is bounded by Chaudhuri & Vinterbo (2013) in the proof of their Theorem 4; the only difference is that, as we are not adding random noise to $w_{\lambda'}$, we have $|f_{\mathcal{V}}(w_{\lambda'}) - f_{\mathcal{V}'}(w_{\lambda'})| \leq \min\{g^*/m, L/(m\lambda_{\min})\}$.

To bound the first term, let $O_\lambda(w)$ be the value of the objective in eq. (9) for a particular $\lambda$. Note that $O_\lambda(w)$ and $O_{\lambda'}(w)$ are $\lambda$- and $\lambda'$-strongly convex, respectively. Define

$$h(w) \;\triangleq\; O_{\lambda'}(w) - O_\lambda(w) \;=\; \frac{\lambda' - \lambda}{2}\|w\|_2^2. \tag{10}$$

Further, the minimizers $w_\lambda = \arg\min_w O_\lambda(w)$ and $w_{\lambda'} = \arg\min_w [O_\lambda(w) + h(w)]$ satisfy

$$\nabla O_\lambda(w_\lambda) \;=\; \nabla O_\lambda(w_{\lambda'}) + \nabla h(w_{\lambda'}) \;=\; 0. \tag{11}$$

Given that $O_\lambda$ is $\lambda$-strongly convex (Shalev-Shwartz, 2007), and by the Cauchy-Schwarz inequality,

$$\lambda\|w_\lambda - w_{\lambda'}\|_2^2 \;\leq\; \big[\nabla O_\lambda(w_\lambda) - \nabla O_\lambda(w_{\lambda'})\big]^\top\big[w_\lambda - w_{\lambda'}\big] \;=\; \nabla h(w_{\lambda'})^\top\big[w_\lambda - w_{\lambda'}\big] \;\leq\; \|\nabla h(w_{\lambda'})\|_2\,\|w_\lambda - w_{\lambda'}\|_2.$$

Rearranging,

$$\|w_\lambda - w_{\lambda'}\|_2 \;\leq\; \frac{1}{\lambda}\|\nabla h(w_{\lambda'})\|_2 \;=\; \frac{\lambda' - \lambda}{\lambda}\|w_{\lambda'}\|_2. \tag{12}$$

Now, as $w_{\lambda'}$ is the minimizer of $O_{\lambda'}$, the first-order optimality condition gives $w_{\lambda'} = -\frac{1}{\lambda'}\cdot\frac{1}{n}\sum_{i=1}^{n}\nabla\ell(w_{\lambda'}, x_i, y_i)$. Substituting this value of $w_{\lambda'}$ into eq. (12), pulling out the positive constant and dropping the sign inside the norm, gives

$$\frac{1}{\lambda}\|\nabla h(w_{\lambda'})\|_2 \;=\; \frac{\lambda' - \lambda}{\lambda\lambda'}\Big\|\frac{1}{n}\sum_{i=1}^{n}\nabla\ell(w_{\lambda'}, x_i, y_i)\Big\|_2 \;\leq\; \frac{\lambda' - \lambda}{\lambda\lambda'},$$

where the last inequality follows from the fact that the loss $\ell$ is 1-Lipschitz by Assumption 2 and the triangle inequality. Thus, along with eq. (12), we have $\|w_\lambda - w_{\lambda'}\|_2 \leq \frac{\lambda' - \lambda}{\lambda\lambda'}$. Finally, as $f$ is $L$-Lipschitz in $w$,

$$|f_{\mathcal{V}}(w_\lambda) - f_{\mathcal{V}}(w_{\lambda'})| \;\leq\; L\|w_\lambda - w_{\lambda'}\|_2 \;\leq\; \frac{(\lambda' - \lambda)L}{\lambda\lambda'}.$$

Combining the result of Chaudhuri & Vinterbo (2013) with the above expression completes the proof.

Given a finite set of hyper-parameters $\Lambda$, in order to bound the total global sensitivity of $f$, note that by Theorem 7

$$\Delta f \;\leq\; \frac{(\lambda_{\max} - \lambda_{\min})L}{\lambda_{\max}\lambda_{\min}} + \min\Big\{\frac{g^*}{m}, \frac{L}{m\lambda_{\min}}\Big\},$$

as $(\lambda' - \lambda)/(\lambda'\lambda)$ is strictly increasing in $\lambda'$ and strictly decreasing in $\lambda$. Given the total global sensitivity of $f$, we can use the Laplace mechanism to hide the validation set.

**Corollary 4.** Let $\mathcal{A}(\mathcal{V})$ denote Algorithm 3 applied on dataset $\mathcal{V}$. Given Assumption 2, $\tilde{f}_L$ is $\epsilon$-differentially private, i.e., $\Pr[\mathcal{A}(\mathcal{V}) = \tilde{f}_L] \leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{V}') = \tilde{f}_L]$.

We leave the proof to the supplementary material.
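A sketch of the release step of Algorithm 3, using the total global sensitivity bound derived above from Theorem 7; the split into a record term and a hyper-parameter term follows our reading of lines 4–5, and all names are illustrative.

```python
import numpy as np

def private_best_accuracy(f_bo, m, L, lam_min, lam_max, g_max, epsilon,
                          rng=np.random.default_rng()):
    """Release f_BO + Laplace noise scaled by the total global sensitivity (Algorithm 3, lines 5-6).

    f_bo:  best validation score found by (any) Bayesian optimization run
    m:     size of the validation set V
    L:     Lipschitz constant of the validation loss g
    g_max: maximum value of g over records and models (g* in Algorithm 3)
    """
    record_term = min(g_max / m, L / (m * lam_min))              # changing one validation record
    hyper_term = (lam_max - lam_min) * L / (lam_max * lam_min)   # searching a different lambda (Theorem 7)
    return f_bo + rng.laplace(scale=(record_term + hyper_term) / epsilon)
```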
Further, by the exponential tails of the Laplace mechanism, we have the following utility guarantee.

**Theorem 8.** Given the assumptions of Theorem 7, we have the following utility guarantee for $\tilde{f}_L$ with respect to $f_{BO}$:

$$|\tilde{f}_L - f_{BO}| \;\leq\; \frac{a}{\epsilon}\Big[\frac{1}{m}\min\Big\{g^*, \frac{L}{\lambda_{\min}}\Big\} + \frac{(\lambda_{\max} - \lambda_{\min})L}{\lambda_{\max}\lambda_{\min}}\Big]$$

with probability at least $1 - e^{-a}$.

*Proof.* This follows exactly from the tail bound on Laplace random variables given in the proof of Theorem 6.

One should set $T$ as large as possible, as Bayesian optimization can only find better hyper-parameter settings when $T$ is increased, and the noise added is independent of $T$.

## 6. Related work

There has been much work towards differentially private convex optimization (Chaudhuri et al., 2011; Kifer et al., 2012; Duchi et al., 2013; Song et al., 2013; Jain & Thakurta, 2014; Bassily et al., 2014). The work of Bassily et al. (2014) established upper and lower bounds on the excess empirical risk of $\epsilon$- and $(\epsilon, \delta)$-differentially private algorithms for many settings, including convex and strongly convex risk functions that may or may not be smooth. There is also related work towards private high-dimensional regression, where the dimensions outnumber the number of instances (Kifer et al., 2012; Smith & Thakurta, 2013a). In such cases the Hessian becomes singular and so the loss is non-convex. However, it is possible to use the restricted strong convexity of the loss in the regression case to guarantee privacy.

Differential privacy has been shown to be achievable in online and interactive kernel learning settings (Jain et al., 2012; Smith & Thakurta, 2013b; Jain & Thakurta, 2013; Mishra & Thakurta, 2014). In general, non-private online algorithms are closest in spirit to the methods of Bayesian optimization. However, all of the previous work in differentially private online learning represents a dataset as a sequence of bandit arm pulls (the equivalent notion in Bayesian optimization is function evaluations $f(\lambda_t)$). Instead, we consider functions for which changing a single dataset entry possibly affects all future function evaluations.

Closest to our work is that of Chaudhuri & Vinterbo (2013), who show that, given a fixed set of hyper-parameters which are always evaluated for any validation set, they can return a private version of the index of the best hyper-parameter, as well as a private model trained with that hyper-parameter. In one sense, their setting is more difficult, as they guarantee privacy for both the training and validation sets (we assume the training set is fixed). On the other hand, our selection strategy is more general in that, if the validation set changes, Bayesian optimization could search completely different hyper-parameters.

Bayesian optimization, largely due to its principled handling of the exploration/exploitation trade-off in global, black-box function optimization, is quickly becoming the global optimization paradigm of choice. Alongside promising empirical results there is a wealth of recent work on convergence guarantees for Bayesian optimization, similar to those used in this work (Srinivas et al., 2010; de Freitas et al., 2012). Vazquez & Bect (2010) and Bull (2011) give regret bounds for optimization with the expected improvement acquisition function. BayesGap (Hoffman et al., 2014) gives a convergence guarantee for Bayesian optimization with budget constraints. Bayesian optimization has also been extended to multi-task optimization (Bardenet et al., 2013; Swersky et al., 2013), the parallel experiment setting (Azimi et al., 2012; Snoek et al., 2012), and constrained optimization (Gardner et al., 2014).
## 7. Conclusion

We have introduced methods for privately releasing the best hyper-parameters and validation accuracies in the case of exact and noisy observations. Our work makes use of the differential privacy framework, which has become commonplace in private machine learning (Dwork & Roth, 2013). We believe we are the first to demonstrate differentially private quantities in the setting of global optimization of expensive (possibly non-convex) functions, through the lens of Bayesian optimization.

One key future direction is to design techniques to release each sampled hyper-parameter and validation accuracy privately (during the run of Bayesian optimization). This requires analyzing how the maximum upper confidence bound changes as the validation dataset changes. Another interesting direction is extending our guarantees in Sections 3 and 4 to other acquisition functions. For the case of machine learning hyper-parameter tuning, our results are designed to guarantee the privacy of the validation set only (equivalently, the training set is never allowed to change). To simultaneously protect the privacy of the training set, it may be possible to use techniques similar to the training-stability results of Chaudhuri & Vinterbo (2013). Training stability could be guaranteed, for example, by assuming an additional training-set kernel that bounds the effect of altering the training set on $f$. We leave developing these guarantees for future work.

## Acknowledgments

KQW, MJK, and JRG are supported by NSF grants IIA-1355406, IIS-1149882, and EFRI-1137211. This work was also supported by Yahoo! Labs, where MJK was a research intern working on this project. The authors thank Abhradeep Thakurta and Avishek Saha for helpful guidance, discussions, and suggestions.

## References

Auer, Peter, Cesa-Bianchi, Nicolò, and Fischer, Paul. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.

Azimi, Javad, Jalali, Ali, and Fern, Xiaoli Z. Hybrid batch Bayesian optimization. In ICML, pp. 1215–1222. ACM, 2012.

Bardenet, Rémi, Brendel, Mátyás, Kégl, Balázs, and Sebag, Michèle. Collaborative hyperparameter tuning. In ICML, 2013.

Bassily, Raef, Smith, Adam, and Thakurta, Abhradeep. Private empirical risk minimization, revisited. arXiv preprint arXiv:1405.7085, 2014.

Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. JMLR, 13:281–305, 2012.

Bonilla, Edwin, Chai, Kian Ming, and Williams, Christopher. Multi-task Gaussian process prediction. In NIPS, 2008.

Bull, Adam D. Convergence rates of efficient global optimization algorithms. JMLR, 12:2879–2904, 2011.

Chaudhuri, Kamalika and Vinterbo, Staal A. A stability-based validation procedure for differentially private machine learning. In NIPS, pp. 2652–2660, 2013.

Chaudhuri, Kamalika, Monteleoni, Claire, and Sarwate, Anand D. Differentially private empirical risk minimization. JMLR, 12:1069–1109, 2011.

Chong, Miao M, Abraham, Ajith, and Paprzycki, Marcin. Traffic accident analysis using machine learning paradigms. Informatica (Slovenia), 29(1):89–98, 2005.

Cortes, Corinna and Vapnik, Vladimir. Support-vector networks. Machine Learning, 20(3):273–297, 1995.

de Freitas, Nando, Smola, Alex, and Zoghi, Masrour. Exponential regret bounds for Gaussian process bandits with deterministic observations. In ICML, 2012.

Dinur, Irit and Nissim, Kobbi. Revealing information while preserving privacy. In Proceedings of the SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pp. 202–210. ACM, 2003.
Duchi, John C, Jordan, Michael I, and Wainwright, Martin J. Local privacy and statistical minimax rates. In FOCS, pp. 429–438. IEEE, 2013.

Dwork, Cynthia and Roth, Aaron. The algorithmic foundations of differential privacy. Theoretical Computer Science, 9(3-4):211–407, 2013.

Dwork, Cynthia, Kenthapadi, Krishnaram, McSherry, Frank, Mironov, Ilya, and Naor, Moni. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology – EUROCRYPT 2006, pp. 486–503. Springer, 2006a.

Dwork, Cynthia, McSherry, Frank, Nissim, Kobbi, and Smith, Adam. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pp. 265–284. Springer, 2006b.

Ganta, Srivatsava Ranjit, Kasiviswanathan, Shiva Prasad, and Smith, Adam. Composition attacks and auxiliary information in data privacy. In KDD, pp. 265–273. ACM, 2008.

Gardner, Jacob, Kusner, Matt, Xu, Zhixiang, Weinberger, Kilian, and Cunningham, John. Bayesian optimization with inequality constraints. In ICML, pp. 937–945, 2014.

Hoffman, Matthew, Shahriari, Bobak, and de Freitas, Nando. On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. In AISTATS, pp. 365–374, 2014.

Huang, Xiaolin, Shi, Lei, and Suykens, Johan A. K. Ramp loss linear programming support vector machine. JMLR, 15(1):2185–2211, 2014.

Hutter, Frank, Hoos, Holger H., and Leyton-Brown, Kevin. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pp. 507–523. Springer, 2011.

Jain, Prateek and Thakurta, Abhradeep. Differentially private learning with kernels. In ICML, pp. 118–126, 2013.

Jain, Prateek and Thakurta, Abhradeep Guha. (Near) dimension independent risk bounds for differentially private learning. In ICML, pp. 476–484, 2014.

Jain, Prateek, Kothari, Pravesh, and Thakurta, Abhradeep. Differentially private online learning. In COLT, 2012.

Kifer, Daniel, Smith, Adam, and Thakurta, Abhradeep. Private convex empirical risk minimization and high-dimensional regression. JMLR, 1:41, 2012.

Krause, Andreas, Singh, Ajit, and Guestrin, Carlos. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. JMLR, 9:235–284, 2008.

McSherry, Frank and Talwar, Kunal. Mechanism design via differential privacy. In FOCS, pp. 94–103. IEEE, 2007.

Mishra, Nikita and Thakurta, Abhradeep. Private stochastic multi-arm bandits: From theory to practice. In ICML Workshop on Learning, Security, and Privacy, 2014.

Mockus, Jonas, Tiesis, Vytautas, and Zilinskas, Antanas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.

Narayanan, Arvind and Shmatikov, Vitaly. Robust de-anonymization of large sparse datasets. In IEEE Symposium on Security and Privacy, pp. 111–125. IEEE, 2008.

Rasmussen, Carl Edward and Williams, Christopher K. I. Gaussian Processes for Machine Learning. 2006.

Schölkopf, Bernhard and Smola, Alexander J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.

Shalev-Shwartz, Shai. Online Learning: Theory, Algorithms, and Applications. 2007.

Smith, Adam and Thakurta, Abhradeep Guha. Differentially private feature selection via stability arguments, and the robustness of the lasso. In COLT, pp. 819–850, 2013a.
Smith, Adam and Thakurta, Abhradeep Guha. (Nearly) optimal algorithms for private online learning in full-information and bandit settings. In NIPS, pp. 2733–2741, 2013b.

Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In NIPS, pp. 2951–2959, 2012.

Song, Shuang, Chaudhuri, Kamalika, and Sarwate, Anand D. Stochastic gradient descent with differentially private updates. In IEEE Global Conference on Signal and Information Processing, 2013.

Srinivas, Niranjan, Krause, Andreas, Kakade, Sham M, and Seeger, Matthias. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML, 2010.

Sweeney, Latanya. Weaving technology and policy together to maintain confidentiality. The Journal of Law, Medicine & Ethics, 25(2-3):98–110, 1997.

Swersky, Kevin, Snoek, Jasper, and Adams, Ryan P. Multi-task Bayesian optimization. In NIPS, pp. 2004–2012, 2013.

Vazquez, Emmanuel and Bect, Julien. Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. Journal of Statistical Planning and Inference, 140(11):3088–3095, 2010.

Weinberger, Kilian, Dasgupta, Anirban, Langford, John, Smola, Alex, and Attenberg, Josh. Feature hashing for large scale multitask learning. In ICML, pp. 1113–1120. ACM, 2009.

Yu, Shipeng, Esbroeck, Alexander van, Farooq, Faisal, Fung, Glenn, Anand, Vikram, and Krishnapuram, Balaji. Predicting readmission risk with institution specific prediction models. In IEEE International Conference on Healthcare Informatics (ICHI), pp. 415–420. IEEE, 2013.