# Decision-Focused Learning with Directional Gradients

Vishal Gupta, USC Marshall School of Business, Los Angeles, CA 90029, guptavis@usc.edu
Michael Huang, CUNY Baruch Zicklin School of Business, New York, NY 10010, michael.huang@baruch.cuny.edu

We propose a novel family of decision-aware surrogate losses, called Perturbation Gradient (PG) losses, for the predict-then-optimize framework. The key idea is to connect the expected downstream decision loss with the directional derivative of a particular plug-in objective, and then approximate this derivative using zeroth order gradient techniques. Unlike the original decision loss, which is typically piecewise constant and discontinuous, our new PG losses are Lipschitz continuous, difference-of-concave functions that can be optimized using off-the-shelf gradient-based methods. Most importantly, unlike existing surrogate losses, the approximation error of our PG losses vanishes as the number of samples grows. Hence, optimizing our surrogate loss yields a best-in-class policy asymptotically, even in misspecified settings. This is the first such result in misspecified settings, and we provide numerical evidence confirming our PG losses substantively outperform existing proposals when the underlying model is misspecified.

1 Introduction

We study the contextual optimization problem
$$\pi^*(X) \in \arg\min_{z \in \mathcal{Z}} f^*(X)^\top z, \qquad f^*(X) \triangleq \mathbb{E}[Y \mid X], \tag{1}$$
where $(X, Y) \in \mathcal{X} \times \mathcal{Y}$ are random variables, and $\mathcal{Z} \subseteq \mathbb{R}^d$ is a known, potentially non-convex feasible region. We work in a data-driven setting in which $f^*$ is unknown, but we observe i.i.d. draws $\{(X_i, Y_i) : i = 1, \ldots, n\}$ of $(X, Y)$. Problem (1) models applications in which we observe a potentially informative context $X$ before selecting the decision $\pi(X)$, such as vehicle routing, portfolio allocation, and inventory management [7, 5, 32]. Problem (1) has also been used as an "optimization layer" in neural network architectures to model combinatorial decisions [26]. Through a suitable transformation, it can also represent some, but not all, nonlinear problems like personalized pricing (see Appendix A).

The predict-then-optimize framework focuses on plug-in policies for Problem (1). Given a function $f : \mathcal{X} \to \mathcal{Y}$, the corresponding plug-in policy is
$$\hat\pi(f(X)) \in \arg\min_{z \in \mathcal{Z}} f(X)^\top z, \tag{2}$$
with ties broken by some pre-specified tie-breaking rule. Plug-in policies are attractive because they separate the prediction procedure ($f$) from the optimization procedure (Problem (2)). This decoupling is especially useful when i) decisions $z$ must satisfy hard constraints (enforced by $\mathcal{Z}$), or ii) one has a specialized algorithm for solving instances of Problem (2) (e.g., a custom vehicle-routing solver). Given the form of $\pi^*$, a natural approach might be to learn an estimate $\hat f$ of $f^*$ from the data, e.g., by minimizing the mean-squared error, and then compute $\hat\pi(\hat f(X))$. Such procedures are called decision-blind since they do not leverage Problem (1) when learning $\hat f$.

38th Conference on Neural Information Processing Systems (NeurIPS 2024).

[Plot: Normalized Excess Regret (%) vs. Training Size (n) for DBB, ETO, FYL, PGB, SPO+.]
Figure 1: (Convergence under Misspecification). Excess regret normalized by the optimal policy's performance under the misspecified setting of Section 4.1 (α = 1, m = 0). PGB is our proposed loss. ETO is a decision-blind approach that minimizes MSE. Other benchmarks include: DBB [26], FYL [1], and SPO+ [7]. Under misspecification, only the PG losses have vanishing excess regret.
Error bars are 95% confidence intervals on the mean over 100 trials. The seminal paper Elmachtoub and Grigas [7] argues decision-aware techniques can be superior to decision-blind ones. Given a hypothesis class F YX , they propose solving minf F Regret(f) where Regret(f) E Y ˆπ(f(X)) E Y ˆπ(f (X)) . This is equivalent to solving minf F E [ℓ(f(X), Y )] where ℓ(t, y) y ˆπ(t). (3) Growing empirical evidence supports the strength of decision-aware approaches [31, 27]. A challenge is that when Z is polyhedral or combinatorial, t 7 ℓ(t, y) is a piecewise constant, discontinuous map. Its gradient is either zero or undefined at all points. Hence, one cannot easily apply a first-order method like stochastic gradient descent (SGD) to optimize Problem (3). In this paper we propose a new family of surrogate losses to approximate ℓ(t, y) by connecting ℓ(t, y) to the directional derivative of a particular plug-in function and using zeroth order gradients to approximate this derivative. We call this family perturbation gradient (PG) losses. PG losses are Lipschitz continuous, general purpose, and only require a black-box oracle which solves Problem (2). Most importantly, their gradients are informative (c.f. Lemma 2.2); after replacing ℓwith a PG loss, one can apply a first order method to Problem (3) or its empirical counterpart out-of-the-box." Previous authors have also proposed surrogates which satisfy some of these properties (see Section 1.2). What distinguishes our work is that under mild assumptions on the distribution of (X, Y ), the error of our surrogate in approximating ℓ(t, y) vanishes as n with a rate that depends on the complexity of F. More precisely, we prove that, for general Z, optimizing the empirical PGB loss (a particular member of the PG family) induces an excess regret over the best-in-class policy of at most Op( Rn + n 1/2) where Rn is the multivariate Rademacher complexity of F (Theorem 3.8). For linear hypotheses with dim(X) = p, this bound reduces to Op((dp/n)1/4). When Z is polyhedral, optimizing empirical PGB loss induces an excess regret of at most Op(n 1/2p ν log |Z |) , where ν is VC linear subgraph dimension of F and Z are the extreme points of Z (Theorem 3.8). Both bounds vanish as n , implying that optimizing our PGB loss yields a best-in-class policy asymptotically. Critically and this is the most distinctive feature of our work our results hold even when f F (misspecified setting). To our knowledge, these are the first result of their kind for a differentiable surrogate. Existing results on the predict-then-optimize framework [19, 14, 8]) require f F (the well-specified setting) and somewhat restrictive assumptions on the noise Y f (X) (see Section 1.2). These requirements are not simply a weakness in prior analysis. As seen in Fig. 1, existing methods can have very poor performance under misspecification. The key issue is that the justification for many of these losses tacitly relies on the fact that an optimal f should be such that f(X) Y almost surely, but under misspecification, this is generally impossible. Hence, they do not well-approximate the decision loss ℓ. See Fig. 2. This poor performance is especially unfortunate, because misspecified settings are precisely those where decision-aware learning offers the most benefit over decision-blind approaches [8, 3]. This is for at least three reasons: First, because the solution mapping ˆπ( ) is piecewise constant, there may exist f = f such that ˆπ(f(X)) = ˆπ(f (X)) almost surely [7, 34]. 
(Indeed, this appears to occur in Fig. 1.) Hence, one might still achieve (near) zero regret by learning over a low-complexity F in a decision-aware fashion, and, typically low-complexity hypothesis classes F are preferred for tractability, interpretability, and strong generalization properties. Second, when every f must induce some error, decision-aware learning seeks an f( ) such that ˆπ(f(X)) disagrees with ˆπ(f (X)) on regions of the covariate space X that are not too costly in the decision-problem, while decision-blind methods typically seek an f such that f(X) disagrees with f (X) on less probable regions of X 0.0 2.5 5.0 7.5 10.0 Policy Parameter Scaled Loss DL DBB FYL LTR-L PGB SPO+ n = 30 n = 3000 0.0 2.5 5.0 7.5 10.0 Policy Parameter Scaled Loss PGB SPO+ DL Figure 2: (Comparing Surrogates under Misspecification). See Section 4.1 for setup (α = 1, m = 0). Benchmarks are decision-loss (DL) ℓ, our PGB and PGC losses, Fenchel-Young Loss (FYL) [1], SPO+ [7], and the learning-to-rank list loss ([21]. Left-panel: (n = 200) Only our PG losses closely track the DL. Right Panels: As n increases, the DL and PG losses both become smoother. [3]. Finally, [8, 14] suggest traditional decision-blind learning strictly dominates decision-aware techniques in a well-specified setting (see Section 1.2), i.e, decision-aware learning loses most of its advantages if f F. To be fair, the improved approximation quality of our PG losses comes at the cost of computational complexity. Many existing surrogates are convex if F is a linear class. We can optimize these surrogates over F in polynomial time. On the other hand, Elmachtoub and Grigas [7] shows that solving the empirical counterpart of Problem (3) is NP-Hard (reduction from 0/1 classification). Thus, these aforementioned surrogates cannot be expected to reliably find best-in-class policies without additional assumptions on the data distribution unless P = NP. By contrast, our proposed PG losses are non-convex, expressible as the difference of concave functions. Optimizing such functions is well-studied [25, 30, 18], but is, in the worst-case, NP-Hard. This is to be expected; if we seek a method that finds a best-in-class policy, it must contend with this hardness. Importantly, some NP-Hard problems admit algorithms that find high-quality solutions efficiently for most real-world instances. We argue our loss (combined with simple gradient descent type methods) yields such a problem. Previous authors [26, 34] have also proposed non-convex surrogates and shown that first-order methods still recover high-quality solutions. Finally, we offer that convexity is often moot in applications. When using a nonlinear hypothesis class (e.g., a neural network with more than 1 layer), even convex surrogates induce non-convex loss functions. Optimizing these losses is theoretically no easier than optimizing our surrogate. In summary, our PG losses represent a practically implementable and (the first) theoretically justified approach to decision-aware learning in misspecified setting. 1.1 Contributions We propose a new family of surrogate losses called Perturbation Gradient (PG) losses for the predict-then-optimize approach to Problem (1). Our surrogates are Lipschitz continuous and can be expressed as the difference of concave functions. We show that the gradient of a PG loss evaluated at a sample point is an unbiased estimate of the gradient of the expected loss (Lemma 2.2). 
Thus, unlike the decision loss ℓ, we can apply first-order methods to minimize our expected surrogate or its ERM counterpart. We bound the uniform approximation error of our surrogates with respect to decision loss by a term vanishing in n (Theorems 3.4 and 3.7). Thus, with more data, our loss becomes more accurate. We prove that the empirical minimizer of our PGB loss yields a best-in-class policy asymptotically, even if the underlying hypothesis class is misspecified (Theorem 3.8). To our knowledge, ours is the first surrogate for the predict-then-optimize framework with such a performance guarantee. We provide numerical evidence showing that minimizing our surrogate loss performs comparably to other surrogates when the hypothesis class is well-specified, and substantively outperforms them when the hypothesis class is misspecified. 1.2 Related Work Elmachtoub and Grigas [7] first proposed a convex, differentiable surrogate loss for Problem (3) called the SPO+ loss leveraging a duality argument. Subsequent researchers have proposed other approaches to surrogate creation including replacing the plug-in policy Problem (2) with a regularized counterpart [32], creating a response-surface [28, 10], training a neural network to approximate ℓ(t, y) non-parametrically [34], linearizing l(t, y) [26], and combining randomized-smoothing with conjugate duality [1]. A recent computational study [31] compares many of these approaches and found that SPO+ and the Fenchel-Young loss of [1] performed best or near-best on all benchmarks. Despite the empirical strengths of decision-aware methods, their theoretical justification is less clear. Few methods establish regret bounds. Wilder, Dilkina, and Tambe [32] and Berthet et al. [1] prove that gradients of particular surrogates can be evaluated easily, but do not prove a regret guarantee for the minimizer of those surrogates. On the other hand, El Balghiti et al. [6] and Hu, Kallus, and Mao [14] prove generalization guarantees relating E [ℓ(f(X), Y )] to its empirical counterpart; hence, if one finds an f F with small empirical loss, E [l(f(X), Y )] will also be small. But minimizing the empirical counterpart to Problem (3) is computationally challenging. Jeong et al. [16] proposes a symbolic reduction scheme for this task. However, the method only applies to linear f and does not scale to large n. Most importantly, it is not amenable to first-order methods, so cannot be easily integrated into neural architectures. The strongest known regret bounds are for the SPO+ loss in the well-specified setting (f F). When the conditional distribution of Y |X is centrally symmetric around its mean, Elmachtoub and Grigas [7] establish a Fisher-consistency result. Liu and Grigas [19] strengthen this result, establishing (under similar assumptions) that if the multivariate Rademacher complexity of F is O(n 1/2), then the empirical minimizer of the SPO+ loss has regret at most O(n 1/4). That said, such results are perhaps unsatisfying because decision-blind methods typically dominate decision-aware methods in well-specified settings. Hu, Kallus, and Mao [14] show that when f F, the regret of a decision-blind approach that minimizes MSE converges to zero faster than the empirical minimizer of Problem (3). Said differently, decision-aware methods likely offer the most benefit in misspecified settings. Hence, these settings are arguably the most interesting. Most closely related to our work are perturbation-based approaches for estimating out-of-sample performance. 
These works each use Danskin's theorem to debias a naive estimate of out-of-sample performance. Ito, Yabe, and Fujimaki [15] and Guo, Jordan, and Zhou [11] each establish asymptotic convergence of their estimators (without an explicit rate): Ito, Yabe, and Fujimaki [15] treats a non-contextual setting and focuses on the ERM estimator. Guo, Jordan, and Zhou [11] treats a causal inference setting. By contrast, Gupta, Huang, and Rusmevichientong [12, 13] establish a finite-sample regret guarantee, but in a small-data, large-scale regime with nearly-Gaussian corruptions. In this paper, we focus on the traditional large-sample regime ($n \to \infty$) with contexts. Moreover, instead of debiasing, we use perturbations to approximate a directional derivative which exactly represents our out-of-sample loss.

1.3 Notation and Preliminaries

We write $a \lesssim b$ to mean that there exists a universal constant $C$ such that $a \le Cb$. We denote the $\ell_2$ norm by $\|\cdot\|$. To simplify the presentation, we also make the following boundedness assumption:

Assumption 1.1 (Boundedness). There exists $B > 0$ such that $\max_{z \in \mathcal{Z}} \|z\| \le B$, and $\|Y\| \le 1$ almost surely.

2 A New Family of Surrogate Losses

Define the plug-in policy objective: $V(t) = \min_{z \in \mathcal{Z}} t^\top z = t^\top \hat\pi(t)$. Evaluating $V(t)$ only requires a black-box oracle for Problem (2). Since it is a minimum of linear functions, $V(t)$ is concave. Our first key observation is that by Danskin's Theorem [2, Prop B.22],
$$\frac{\partial}{\partial \lambda} V(t + \lambda y)\Big|_{\lambda = 0} = y^\top \hat\pi(t) = \ell(t, y), \tag{4}$$
where the left side is a derivative if $\hat\pi(t)$ is unique and a subgradient otherwise. We can form a family of PG surrogates by considering different zeroth order approximations to the derivative on the left (see [20, 24] for more on zeroth order gradients). We focus on two specific zeroth order gradients:

Backward Differencing (PGB): $\hat\ell^b_h(t, y) \triangleq \frac{1}{h}\left(V(t) - V(t - hy)\right)$
Central Differencing (PGC): $\hat\ell^c_h(t, y) \triangleq \frac{1}{2h}\left(V(t + hy) - V(t - hy)\right)$,

for some user-defined $h > 0$. Intuitively, as $h \to 0$, both $\hat\ell^b_h(t, y)$ and $\hat\ell^c_h(t, y)$ should better approximate $\ell(t, y)$. (We formalize the tradeoff in $h$ below.) For intuition on the shape of PG losses, consider the special case where $\mathcal{Z} = [-1, 1]$ and $Y \in \{-1, 1\}$. Then, $\ell(t, y) = -\operatorname{sgn}(ty)$, a step function. The PGB and PGC losses are both ramp losses in this case, where the width of the ramp is controlled by $h$.

Other zeroth order gradient schemes are possible. For example, forward differencing yields the surrogate from [26], motivated from a different perspective. This alternate perspective sheds light on empirical performance. Indeed, our theoretical analysis suggests $h$ should be small, tending to zero, while [26] advocates for large $h$. Our analysis also shows forward differencing suffers an optimistic bias because it underestimates the derivative of a concave function. These features might explain the poor performance of [26] in the benchmarks of [31]. We explore some of these issues in Appendix B, but fully characterizing how the choice of zeroth order gradient affects surrogate quality is an open problem.

2.1 Properties of PG Losses

Using the structure of Problem (1), we prove some key properties of our surrogates.

Lemma 2.1 (Basic Properties). Suppose Assumption 1.1 holds. For any $t, t' \in \mathbb{R}^d$ and $y \in \mathcal{Y}$, the PG losses are
a) Lipschitz, i.e., $|\hat\ell^b_h(t, y) - \hat\ell^b_h(t', y)| \le \frac{2B}{h}\|t - t'\|$, and $|\hat\ell^c_h(t, y) - \hat\ell^c_h(t', y)| \le \frac{B}{h}\|t - t'\|$.
b) Bounded, i.e., $|\hat\ell^b_h(t, y)| \le B$, and $|\hat\ell^c_h(t, y)| \le B$.
c) Differentiable¹, i.e., $\nabla_t \hat\ell^b_h(t, y) = \frac{1}{h}\left(\hat\pi(t) - \hat\pi(t - hy)\right)$, and $\nabla_t \hat\ell^c_h(t, y) = \frac{1}{2h}\left(\hat\pi(t + hy) - \hat\pi(t - hy)\right)$.
Finally, the backward difference upper bounds the true loss, i.e., $\ell(t, y) \le \hat\ell^b_h(t, y)$.

¹ These expressions are gradients when $\hat\pi(t)$ and $\hat\pi(t \mp hy)$ are unique optimizers, and elements of the Clarke subdifferential otherwise.
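To make these definitions concrete, the following minimal sketch (ours, not the authors' code) implements $V$, the PGB and PGC losses, and the gradient expression from Lemma 2.1(c), assuming a toy box feasible region $\mathcal{Z} = [-1, 1]^d$; `solve_oracle` is a hypothetical placeholder for whatever black-box solver one has for Problem (2).

```python
import numpy as np

def solve_oracle(t):
    """Hypothetical black-box oracle for Problem (2): argmin_{z in Z} t^T z.
    For the toy box Z = [-1, 1]^d the minimizer is -sign(t) coordinatewise;
    any specialized solver (LP, shortest path, ...) could be substituted."""
    return -np.sign(t)

def V(t):
    """Plug-in objective V(t) = min_{z in Z} t^T z = t^T pihat(t)."""
    return t @ solve_oracle(t)

def pgb_loss(t, y, h):
    """Backward-differencing (PGB) loss: (V(t) - V(t - h y)) / h."""
    return (V(t) - V(t - h * y)) / h

def pgc_loss(t, y, h):
    """Central-differencing (PGC) loss: (V(t + h y) - V(t - h y)) / (2 h)."""
    return (V(t + h * y) - V(t - h * y)) / (2 * h)

def pgb_grad(t, y, h):
    """(Sub)gradient in t from Lemma 2.1(c): (pihat(t) - pihat(t - h y)) / h."""
    return (solve_oracle(t) - solve_oracle(t - h * y)) / h

# Quick check: the PGB loss upper bounds the decision loss ell(t, y) = y^T pihat(t),
# illustrating the last claim of Lemma 2.1.
rng = np.random.default_rng(0)
t, y, h = rng.normal(size=3), rng.normal(size=3), 0.1
assert y @ solve_oracle(t) <= pgb_loss(t, y, h) + 1e-12
```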
A primary advantage of our PG losses over the original loss $\ell$ is that their gradients are "informative." More precisely, because $\ell$ is discontinuous, $\nabla_t \mathbb{E}[\ell(t, Y)] \neq \mathbb{E}[\nabla_t \ell(t, Y)]$, and $\nabla_t \ell(t, Y_j)$ is not an unbiased estimate of $\nabla_t \mathbb{E}[\ell(t, Y)]$. Our surrogates do not have this problem.

Lemma 2.2 (Informative Gradients). Suppose Assumption 1.1 holds. For all $t$ and $Y$, $\nabla_t \mathbb{E}[\hat\ell^b_h(t, Y)] = \mathbb{E}[\nabla_t \hat\ell^b_h(t, Y)]$. Thus, $\nabla_t \hat\ell^b_h(t, Y_j)$ is an unbiased estimate of $\nabla_t \mathbb{E}[\hat\ell^b_h(t, Y)]$. The same statements also hold for $\hat\ell^c_h$.

Lemma 2.2 ensures that we can apply first order methods out-of-the-box to optimize our PG losses.

3 Performance Guarantees

For brevity, we focus on the backward PG loss. Analogous results hold for the central PG loss.

Key Intuition. The key challenge is bounding the error between our PGB loss $\hat\ell^b_h$ and the decision loss $\ell$. For intuition, consider the expected error at a fixed $f \in \mathcal{F}$, i.e., $\mathbb{E}[\hat\ell^b_h(T, Y) - \ell(T, Y)]$, where $T = f(X)$. Define the auxiliary function $H(\lambda) = \mathbb{E}[V(T + \lambda Y)]$. When $\hat\pi(T + \lambda Y)$ is unique, Lemma E.1 justifies switching the derivative and expectation, yielding
$$H'(\lambda) = \mathbb{E}\left[\frac{d}{d\lambda} V(T + \lambda Y)\right] = \mathbb{E}\left[Y^\top \hat\pi(T + \lambda Y)\right],$$
where the last equality is Danskin's theorem [2, Prop B.22]. Thus,
$$\mathbb{E}\left[\hat\ell^b_h(T, Y) - \ell(T, Y)\right] = \frac{1}{h}\left(H(0) - H(-h)\right) - H'(0),$$
i.e., the expected approximation error equals the error in estimating the derivative of $H$. If $H$ is not sufficiently well-behaved, this error may not be small. Lemma E.2 proves that if $H$ is $\beta$-smooth, i.e., $H'(\lambda)$ is $\beta$-Lipschitz, then this error is at most $\beta h$. Since $H$ entails an expectation, we intuit that it should be smooth if $(T, Y)$ has a "nice" density, similar to the intuition behind randomized smoothing. To quantify what "nice" might mean, write
$$H'(\lambda) - H'(\lambda') = \mathbb{E}\left[Y^\top \hat\pi(T + \lambda Y)\right] - \mathbb{E}\left[Y^\top \hat\pi(T + \lambda' Y)\right].$$
Since $(t, y) \mapsto y^\top \hat\pi(t + \lambda y)$ is $B$-bounded by Lemma 2.1, the last difference is at most $B \cdot \mathrm{TV}\big((Y, T + \lambda Y), (Y, T + \lambda' Y)\big)$, where $\mathrm{TV}(\cdot, \cdot)$ is the total variation distance between the two random vectors. Hence, a "nice" density is any density such that the distributions of $(Y, T + \lambda Y)$ and $(Y, T + \lambda' Y)$ are close whenever $\lambda$ and $\lambda'$ are close. We expect this generally occurs whenever $(T, Y)$ admit Lipschitz continuous densities, but it can be shown to fail if, e.g., $T$ is concentrated at a single point. We make the above intuition formal in the next section.

3.1 Expected Approximation Error

We make the following assumption:

Assumption 3.1 (Lipschitz Log Conditional Density). Let $g(\cdot\,; f, Y)$ be the conditional density of $f(X) \mid Y$. We assume that there exists a constant $L > 0$ such that $\log g(\cdot\,; f, Y)$ is $L$-Lipschitz for all $f \in \mathcal{F}$ and all $Y$, almost surely.

As discussed above, Assumption 3.1 is sufficient to ensure the requisite TV distance is small, but not necessary. We prefer Assumption 3.1 as it facilitates a short proof. Under this assumption, we have:

Lemma 3.2 (Expected Approximation Error). Suppose Assumptions 1.1 and 3.1 hold and $h < \frac{1}{L}$. Then, for any $f \in \mathcal{F}$, $0 \le \mathbb{E}[\hat\ell^b_h(f(X), Y) - \ell(f(X), Y)] \le (e - 1) B L h$.

3.2 Uniform Error Bounds

Combining Lemma 3.2 and Hoeffding's inequality yields a pointwise bound:

Corollary 3.3 (Pointwise Approximation Error). Fix some $f \in \mathcal{F}$. Suppose Assumptions 1.1 and 3.1 hold and $h < \frac{1}{L}$. Then, for any $0 < \delta < \frac{1}{2}$, with probability at least $1 - \delta$,
$$\left|\frac{1}{n}\sum_{j=1}^n \hat\ell^b_h(f(X_j), Y_j) - \mathbb{E}[\ell(f(X), Y)]\right| \lesssim BLh + B\sqrt{\log(1/\delta)/n}.$$

As seen in Lemma 2.1, the Lipschitz constant of $\hat\ell^b_h$ scales like $1/h$.
Hence, unlike other learning methods, h does not control a bias-variance tradeoff; rather h controls a bias-computational complexity tradeoff. Practically, we suggest taking h as large as the next largest term in the bound, i.e. h = O(n 1/2) above, to maximize the smoothness without compromising the rate. Corollary 3.3 captures the key ideas of our approach, but is insufficient to establish a regret guarantee; we need a uniform error bound. To that end, we prove two results: Our first generalization bound applies to any choice of Z. We leverage the Lipschitzness of ˆℓb h (Lemma 2.1a) to apply a vector contraction inequality from Maurer [22] and bound the Rademacher complexity of our sample surrogate loss. A similar strategy is used in [19]. More specifically, define the multivariate Rademacher complexity Rn (F) = E h ˆRn (F) i = E h supf F 1 n Pn i=1 σ i f(Xi) i , (5) where σi = (σi1, . . . , σid) and σij are i.i.d. Rademacher random variables. Then, we have Theorem 3.4 (Uniform Error Bound for General Z). Suppose Assumptions 1.1 and 3.1 hold. For any 0 < δ < 1 2 and 0 < h < 1 L, with probability at least 1 δ n Pn i=1 ˆℓb h (f(Xi), Yi) E [ℓ(f(X), Y )] BLh + B2 h Rn(F) + B p log(1/δ)/n. If dim(X) = p and F is a linear class, Rn(F) = O( p dp/n) [6]. Choosing h = O((dp/n)1/4) yields an error of size Op((dp/n)1/4). This is same rate as Liu and Grigas [19], but also holds in the misspecified setting where f / F. Theorem 3.4 applies to general Z, but may be loose. We next present a stronger result when Z is polyhedral by leveraging results from Hu, Kallus, and Mao [14] based on VC dimension: Definition 3.5 (VC-Linear-Subgraph Dimension). The VC-linear-subgraph dimension of a class of functions F YX , is the VC dimension of the sets F = (x, β, t) : β f(x) t : f F in X Rd+1, that is, the largest integer ν for which there exist x1, . . . , xν X, β1, . . . , βν Rd, t1 R, ..., tν R such that I β j f (xj) tj : j = 1, . . . , ν : f F = 2ν. We make the following assumption: Assumption 3.6 (Bounded VC Dimension). The VC-linear-subgraph dimension of the class F = f : f(x, y) = f(x) + hy, for f F, h R is at most ν. We obtain the following bound for polyhedral Z, where Z is the set of extreme points of Z. Theorem 3.7 (Uniform Error Bound for Polyhedral Z). Suppose Assumptions 1.1, 3.1 and 3.6 hold. For any 0 < δ < 1 2 and 0 < h < 1 L, with probability at least 1 δ, n Pn i=1 ˆℓb h (f(Xi), Yi) E [ℓ(f(Xi), Yi)] BLh + B q ν log(|Z |+1) log(1/δ) Choosing h = O(n 1/2) yields a bound of size Op(n 1/2) which matches the generalization error of ℓfrom [14, 6]. Thus, for polyhedral Z, our surrogate converges no slower than the empirical loss, but is more computationally tractable. 3.3 Excess Regret Bounds We next transform the uniform bounds of the previous section to bounds on excess regret. Define ERegret(f) E Y ˆπ(f(X)) E Y ˆπ(f OR(X)) , where f OR argminf F Regret(f). Excess regret measures regret relative to the best-in-class policy f OR, not the full-information optimum f . For a fixed h < 1 L, define the empirical minimizer of PGB loss ˆfh argminf F 1 n Pn i=1 ˆℓb h (f(Xi), Yi) . Then, we have the following: Theorem 3.8 (Excess Regret Bounds). i) Suppose the assumptions of Theorem 3.4 hold. Then, ERegret( ˆfh) p B3LRn(F) + B n. ii) Suppose the assumptions from Theorem 3.7 hold. Then, ERegret( ˆfh) B q ν log(|Z |+1) For many hypothesis classes, the multivariate Rademacher complexity is vanishing in n. Hence, both bounds are vanishing in n and ˆfh achieves best-in-class performance asymptotically. 
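As an illustration of how these guarantees translate into a training procedure, here is a minimal sketch (ours, under the same toy box oracle as before, with hypothetical names) of minimizing the empirical PGB loss over a linear class $f(x) = \Theta x$ by plain subgradient descent, using the gradient formula of Lemma 2.1(c) together with the chain rule and setting $h$ on the order of $n^{-1/2}$ as suggested above; the paper's experiments instead use Adam within PyEPO (Section 4).

```python
import numpy as np

def solve_oracle(t):
    # Hypothetical stand-in for a black-box solver of Problem (2); box Z = [-1, 1]^d.
    return -np.sign(t)

def empirical_pgb(Theta, X, Y, h):
    """Empirical PGB loss and its (sub)gradient w.r.t. Theta for f(x) = Theta @ x."""
    loss, grad = 0.0, np.zeros_like(Theta)
    for x, y in zip(X, Y):
        t = Theta @ x
        pi_t = solve_oracle(t)              # pihat(t)
        pi_p = solve_oracle(t - h * y)      # pihat(t - h y)
        loss += (t @ pi_t - (t - h * y) @ pi_p) / h     # (V(t) - V(t - h y)) / h
        grad += np.outer((pi_t - pi_p) / h, x)          # Lemma 2.1(c) + chain rule
    n = len(X)
    return loss / n, grad / n

def fit_pgb(X, Y, d, p, steps=200, lr=0.05):
    """Fit Theta (d x p) by plain subgradient descent on the empirical PGB loss."""
    h = 1.0 / np.sqrt(len(X))   # h = O(n^{-1/2}), in line with the discussion above
    Theta = np.zeros((d, p))
    for _ in range(steps):
        _, grad = empirical_pgb(Theta, X, Y, h)
        Theta -= lr * grad
    return Theta
```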
4 Numerical Experiments

We compare learning a linear hypothesis class with our PG losses (PGB and PGC) to surrogates currently implemented in the PyEPO Python package [31]. Specifically, we benchmark against: SPO+ [7], DBB [26], FYL [1], and the family of LTR losses [21]. Additionally, we also benchmark against a "decision-blind" two-stage policy that first minimizes the least-squares loss and then implements the corresponding plug-in policy (ETO). We optimize each surrogate using ADAM via the PyEPO framework. All methods are trained for a total of 100 epochs, and we select the best model found in those 100 epochs based on a validation set of size 200. For the PG losses, we initialize at the SPO+ solution and choose $h$ from a small grid of values based on validation set performance. Future computational experiments might explore the effect of alternate initializations. We do not add additional regularization or smoothing to any of the surrogates. See Appendix C for other implementation details. Our metric of interest is the normalized excess regret, $\mathbb{E}\left[Y^\top(\hat\pi(X) - \pi^*(X))\right] / \mathbb{E}\left[Y^\top \pi^*(X)\right]$, where we have normalized by the optimal policy (cf. Problem (1)) for interpretability (a sketch of this estimate appears below).

4.1 Simple Misspecification Experiment

In our first experiment, we let $\mathcal{Z} = \{0, 1\}$. We let $X \sim \mathrm{Unif}(0, 2)$ and
$$f^*(x) = \begin{cases} -4x + 2, & \text{for } x \in [0, 0.55) \\ -m(x - 0.55) - 0.2, & \text{for } x \in [0.55, 2]. \end{cases}$$
The function is piecewise linear with one piece that has a slope of $-4$ and another piece with a slope of $-m$, $m \in [0, 4]$ (an elbow). The change point is at $x = 0.55$, where the two pieces meet at $-0.2$ (see Fig. 7 in Appendix D). Intuitively, $m$ controls the degree of misspecification; at $m = 4$, $f^* \in \mathcal{F}$ and the problem is well-specified. At $m = 0$, the problem is maximally misspecified. We generate synthetic data as $Y = f^*(X) + \epsilon_\alpha$. We define $\epsilon_\alpha = \sqrt{\alpha}\,(\zeta - 0.5) + \sqrt{1 - \alpha}\,\gamma$, where $\alpha \in [0, 1]$, $\zeta$ is an exponential random variable with mean 0.5, and $\gamma \sim N(0, 0.25)$. By construction, $\epsilon_\alpha$ is mean-zero noise with variance 0.25. The value of $\alpha$ controls how asymmetric the noise is. Note, when $\alpha \neq 0$, the theoretical performance guarantees on SPO+ from [19] do not apply.

Results. Figure 1 plots the relative regret for $m = 0$ and $\alpha = 1$, that is, the most misspecified setting with the most asymmetric noise $\epsilon_\alpha$. Beyond highlighting the superior performance of the PG losses in misspecified settings, Fig. 1 also shows that the choice of finite difference approximation (backward or central) impacts performance. Intuitively, central differencing likely outperforms backward differencing because in standard, deterministic settings, central finite differencing has error $O(h^2)$ relative to the true derivative, while backward differencing has error $O(h)$ [17]. This intuition can be made formal in our setting by adapting Lemma 3.2, but we omit the details for brevity.

[Plots: Normalized Excess Regret vs. Level of Misspecification (m) (left) and vs. Level of Noise Asymmetry (α) (right).]
Figure 3: (SPO+ Comparisons) The left figure plots the excess regret normalized by the optimal policy's performance as we vary $m$ for $n = 80$ and $\alpha = 1$. The right figure plots the same value as we vary $\alpha$ for $n = 200$. When $\alpha = 0$ the noise is centrally symmetric and when $\alpha = 1$ the noise is the most asymmetric. Error bars are 95% confidence intervals on the mean over 100 trials.

The left panel of Fig. 3 in Appendix D studies the effect of increasing degrees of misspecification. We limit the benchmarks to ETO and SPO+ as other methods are qualitatively similar.
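For concreteness, the following is a small sketch (ours) of how the normalized excess regret defined above can be estimated on a held-out test set; the paper's experiments use 10,000 test samples for this estimate (Appendix C). The callables `solve_oracle`, `f_hat`, and `f_star` are hypothetical placeholders for the Problem (2) solver, the fitted model, and the true conditional mean $f^*$.

```python
import numpy as np

def normalized_excess_regret(solve_oracle, f_hat, f_star, X_test, Y_test):
    """Monte Carlo estimate of E[Y^T (pihat(X) - pi*(X))] / E[Y^T pi*(X)]."""
    num, den = 0.0, 0.0
    for x, y in zip(X_test, Y_test):
        z_hat = solve_oracle(f_hat(x))    # plug-in decision from the learned model
        z_opt = solve_oracle(f_star(x))   # full-information optimal decision pi*(x)
        num += np.dot(y, z_hat - z_opt)
        den += np.dot(y, z_opt)
    return num / den
```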
We find that (as argued in the introduction), in well-specified settings (m = 4), the benefits of decision-aware learning may be small. All methods (including decision-blind ETO) achieve low regret, even for small n. In our experiments, even for n = 20 the relative regret was less than 0.6% across all methods. On the other hand, as the degree of misspecification grows, decision-aware methods have an advantage. However, we see that SPO+ is nearly as susceptible as to misspecification as decision-blind approaches since the relative regret also increases rapidly. By contrast, the relative regret for our PG losses increases more slowly. We stress, this experiment fixes n. As n , our theory shows the regret of the PG losses tends to best-in-class as in Fig. 1. The right panel of Fig. 3 studies how the noise distribution impacts the relative regret since all prior known performance guarantees for SPO+ require strong assumptions on the noise [7, 19]. The plot suggests that requiring a symmetric noise is not simply a weakness in the analysis of SPO+, but fundamental to the method. As the noise becomes less symmetric, the performance of SPO+ degrades. Even when the assumption is satisfied (α = 0), we see SPO+ is still significantly impacted by misspecification. By contrast, the PG losses perform similarly as the shape of the noise varies. 4.2 Shortest Path Experiments Random Arc Costs. We first replicate the shortest path experiment from [7, 31] on a 5 5 grid graph. We let X N(0, I5) and for each edge j, and take f j (x) = 1 3.56 5(B x)j + 3 6 + 1 where B {0, 1}40 5 has i.i.d. Bernoulli(0.5) entries (drawn once and fixed throughout). We consider two different data generation mechanisms: i) Multiplicative noise, i.e., Yj = f j (X)(1 + ϵj) where ϵj are i.i.d Unif[ .3, .3]. This choice closely mirrors the original experiment of [7]. ii) Additive Gaussian noise, i.e., Yj = f j (X) + εj where εj N(0, 0.32). Figure 8 in the Appendix D compares the PG losses to the best two surrogates in our experiments, FYL [1] and SPO+ [7]. Here PGF represents a zeroth order gradient using forward differencing and is equivalent to the method of [26] but with a small h as opposed to a large h. Despite the non-convexity, minimizing our PG losses with first order methods yields comparable performance to FYL and SPO+ (convex methods). In other words, they do not seem to get stuck in local minimima. For small n, we do seem some distinction, which is likely because our losses are less smooth (see the right figure of Fig. 2). Harder Example with Planted Arcs. Because arc costs are completely at random in the previous example, there are likely many paths with near-optimal length. We next consider a harder instance where we hide a unique good path. (a) Safe and Risky Path Gaussian, Noise-Halfwidth = 0.3 Uniform, Noise-Halfwidth = 0.3 200 400 800 1600 200 400 800 1600 0 Training Size (n) Normalized Excess Regret (%) 2-Stage LR PGB PGC SPO+ FYL (b) Performance Figure 4: Harder Shortest Path. a) One of the two planted paths will be optimal depending on value of X6. All other arcs strictly worse. b) Normalized Excess Regret as we vary the training samples. Error bars are 95% confidence intervals on the mean over 100 trials. Specifically, we now take X R6 where X1:5 N(0, I5) and X6 Unif[0, 2]. In Fig. 4a, we have a safe (red) path and a risky (blue) path. For red arcs, f j (x) = 2 for all x. 
For the blue arcs (risky 200 400 800 1600 Training Size (n) Normalized Expected Regret 2-Stage LR FYL SPO+ PGB PGC Figure 5: (Portfolio Optimization) We plot the excess regret normalized by optimal policy s performance as we vary the number of training samples. Error bars are 95% confidence intervals on the mean over 100 trials. path), f j (x) = 4x6 if 0 x6 0.55 and f j (x6) = 2.2 otherwise. For all other arcs, we take f j (x) = 1 3.56 5(B x)j + 3 6 + 1 which is similar to previous experiment but shifted up by 2.2. Consequently, either the red path or the blue path is optimal, depending on the value of X6. The observed Y values are generated as before by adding either multiplicative uniform or additive Gaussian noise. A good method thus must first identify these two paths as the best options (despite the noise) and choose between them (by learning the relationship to X6). In this harder setting, PG losses offer a significant benefit. Figure 9 in Appendix D shows this performance is relatively robust to the choice of h. 4.3 Portfolio Experiment We study the same portfolio optimization problem as [7, 28, 34] but use real data, specifically the 12 Fama French Industry Sector Portfolios from the Fama French Library [9]. These portfolios are indices representing monthly returns of different asset classes and realistically mirror the asset allocation problem faced by wealth managers. We sample a month t at random from the last 10 years, and let Y = rt be the return of the d = 12 indices, and let X = rt 1 + N(0, 0.5Σ) (p = 12) where Σ is the covariance of rt over those 10 years. The additional noise lowers the signal-to-noise ratio while maintaining the correlation matrix of X and makes the problem harder. As one can see in Fig. 5, because of the low signal-to-noise ratio, all methods induce significant regret to the optimum, but both PGB and PGC are notably stronger. 5 Conclusion In this paper we proposed a novel family of surrogate losses for the predict-then-optimize framework that can be optimized using off-the-shelf gradient methods. Most importantly, the approximation error of these surrogates vanishes as n . Hence, optimizing our surrogate yields a best-in-class policy asymptotically, even in misspecified settings. Our PG losses are the first proposed surrogates with this property and substantively outperform other methods in misspecified settings. The family of PG losses arises from different approaches to approximating a derivative. As mentioned, an interesting open question is identifying the best-possible choice of approximation. We also believe that better understanding the role of h in trading off between bias and computational complexity might shed light on improve algorithms and tuning procedures. Acknowledgments and Disclosure of Funding The authors have no competing interests to disclose. The authors would like to thank Hamsa Bastani, Osbert Bastani, Adam Elmachtoub, Paul Grigas, Ziyu He, and Angela Zhou for feedback on an initial draft of this manuscript. VG was partially funded by the Institute for Outlier Research in Business. [1] Quentin Berthet et al. Learning with differentiable pertubed optimizers . In: Advances in Neural Information Processing Systems 33 (2020), pp. 9508 9519. [2] Dimitri P. Bertsekas. Nonlinear Programming. 2nd. Belmont, MA: Athena Scientific, 1999. [3] Tsai-Hsuan Chung et al. Decision-Aware Learning for Optimizing Health Supply Chains . In: ar Xiv preprint ar Xiv:2211.08507 (2022). [4] Juncheng Dong et al. PASTA: pessimistic assortment optimization . 
In: International Conference on Machine Learning. PMLR. 2023, pp. 8276 8295. [5] Priya Donti, Brandon Amos, and J Zico Kolter. Task-based end-to-end model learning in stochastic optimization . In: Advances in Neural Information Processing Systems 30 (2017). [6] Othman El Balghiti et al. Generalization Bounds in the Predict-Then-Optimize Framework . In: Mathematics of Operations Research (2022). [7] Adam N Elmachtoub and Paul Grigas. Smart Predict, then Optimize . In: Management Science 68.1 (2022), pp. 9 26. [8] Adam N. Elmachtoub et al. Estimate-Then-Optimize versus Integrated-Estimation Optimization versus Sample Average Approximation: A Stochastic Dominance Perspective. 2023. ar Xiv: 2304.06833 [stat.ML]. [9] Kenneth French. Kenneth R. French - Data Library. Dataset: 12 Industry Portfolios, accessed on August 4th, 2024. 2024. URL: https://mba.tuck.dartmouth.edu/pages/faculty/ ken.french/data_library.html. [10] Paul Grigas, Meng Qi, et al. Integrated conditional estimation-optimization . In: ar Xiv preprint ar Xiv:2110.12351 (2021). [11] Wenshuo Guo, Michael Jordan, and Angela Zhou. Off-policy evaluation with policydependent optimization response . In: Advances in Neural Information Processing Systems 35 (2022), pp. 37081 37094. [12] Vishal Gupta, Michael Huang, and Paat Rusmevichientong. Debiasing in-sample policy performance for small-data, large-scale optimization . In: Operations Research (2022). Forthcoming. [13] Vishal Gupta, Michael Huang, and Paat Rusmevichientong. Decision-Aware Denoising. https: //ssrn.com/abstract=. Available at SSRN. 2024. [14] Yichun Hu, Nathan Kallus, and Xiaojie Mao. Fast rates for contextual linear optimization . In: Management Science 68.6 (2022), pp. 4236 4245. [15] Shinji Ito, Akihiro Yabe, and Ryohei Fujimaki. Unbiased objective estimation in predictive optimization . In: International Conference on Machine Learning. PMLR. 2018, pp. 2176 2185. [16] Jihwan Jeong et al. An Exact Symbolic Reduction of Linear Smart Predict+Optimize to Mixed Integer Linear Programming . In: Proceedings of the 39th International Conference on Machine Learning. Ed. by Kamalika Chaudhuri et al. Vol. 162. Proceedings of Machine Learning Research. PMLR, 2022, pp. 10053 10067. URL: https://proceedings.mlr. press/v162/jeong22a.html. [17] Randall J Le Veque. Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems. SIAM, 2007. [18] Thomas Lipp and Stephen Boyd. Variations and extension of the convex concave procedure . In: Optimization and Engineering 17 (2016), pp. 263 287. [19] Heyuan Liu and Paul Grigas. Risk bounds and calibration for a smart predict-then-optimize method . In: Advances in Neural Information Processing Systems 34 (2021), pp. 22083 22094. [20] Sijia Liu et al. A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning . In: Co RR abs/2006.06224 (2020). ar Xiv: 2006.06224. URL: https://arxiv. org/abs/2006.06224. [21] Jayanta Mandi et al. Predict and Optimize: Through the Lens of Learning to Rank . In: Co RR abs/2112.03609 (2021). ar Xiv: 2112.03609. URL: https://arxiv.org/abs/2112.03609. [22] Andreas Maurer. A vector-contraction inequality for rademacher complexities . In: Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27. Springer. 2016, pp. 3 17. [23] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT press, 2018. [24] Yurii Nesterov and Vladimir Spokoiny. 
Random gradient-free minimization of convex functions . In: Foundations of Computational Mathematics 17.2 (2017), pp. 527 566. [25] Bilal Piot, Matthieu Geist, and Olivier Pietquin. Difference of Convex Functions Programming for Reinforcement Learning . In: Advances in Neural Information Processing Systems. Ed. by Z. Ghahramani et al. Vol. 27. Curran Associates, Inc., 2014. URL: https : / / proceedings . neurips . cc / paper _ files / paper / 2014 / file / 50905d7b2216bfeccb5b41016357176b-Paper.pdf. [26] Marin Vlastelica Poganˇci c et al. Differentiation of blackbox combinatorial solvers . In: International Conference on Learning Representations. 2019. [27] Utsav Sadana et al. A Survey of Contextual Optimization Methods for Decision Making under Uncertainty. 2023. ar Xiv: 2306.10374 [math.OC]. [28] Sanket Shah et al. Decision-Focused Learning without Differentiable Optimization: Learning Locally Optimized Decision Losses. 2022. ar Xiv: 2203.16067 [cs.LG]. [29] Shai Shalev-Shwartz et al. Learnability, stability and uniform convergence . In: The Journal of Machine Learning Research 11 (2010), pp. 2635 2670. [30] Xinyue Shen et al. Disciplined convex-concave programming . In: 2016 IEEE 55th conference on decision and control (CDC). IEEE. 2016, pp. 1009 1014. [31] Bo Tang and Elias B Khalil. Py EPO: A Py Torch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming . In: ar Xiv preprint ar Xiv:2206.14234 (2022). [32] Bryan Wilder, Bistra Dilkina, and Milind Tambe. Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization . In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019, pp. 1658 1665. [33] Yibo Zeng and Henry Lam. Generalization bounds with minimal dependency on hypothesis class via distributionally robust optimization . In: Advances in Neural Information Processing Systems 35 (2022), pp. 27576 27590. [34] Arman Zharmagambetov et al. Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information . In: ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators. 2023. URL: https://openreview.net/forum?id=q Gzl OO27Cn. -1.0 -0.5 0.0 0.5 1.0 Figure 6: (Comparing Zeroth Order Gradients). PGC, PGB, and PGF all approximate the decision loss (DL), but PGB is a pessimistic bound, while PGF is an optimistic bound. Here the optimism causes PGF to choose the wrong policy. Appendix / Supplemental Material A Reformulating Nonlinear Problems Through an appropriate transfomation of variables, some nonlinear optimization problems can be rewritten in the form Problem (1), and, thus, are amenable to our approach. Consider the problem π (X) argmin z Z f (X) g(z), where f (X) = E [Y | X] and g( ) is a fixed, known, vector-valued function. This problem is equivalent to the problem min z f (X) z s.t. z Z {g(z) : z Z}, which is of the requisite form for our analysis. Moreover, our algorithms only require access to an oracle which can compute π(f(X)) argmin z Z f(X) z for any f. Often, this is accomplished by solving π(f(X)) argminz Z f(X) z and then returning g(π(f(X))). Gupta, Huang, and Rusmevichientong [12] use this reduction to model a personalized pricing problem (see Example 2.3 of their paper). B Comparing Zeroth Order Gradient Schemes In this section we provide a brief comparison of the forward differencing scheme to backwards and central differencing. 
The key distinction is that since V ( ) is concave, forward differencing creates a surrogate that optimistically underestimates the true loss (forward differences underestimate the derivative of concave functions) whereas backward differencing pessimistically overestimates the true loss. Some authors [4, 33] have shown that pessimism can improve learning, and we observe a similar phenomenon. Figure 6 provides an illustration. We consider the same misspecified data setup as Section 4.1 (α = 1, m = 0) and take n = 200. We plot the decision loss (DL) ℓ, and our PGB, PGF, and PGC losses, for the plug-in class F = { 0.1x + B0 : B0 [ 1, 1]}. Because PGF optimisticaly underestimates loss, it suggests the policy β0 = .1, which actually induces significant regret. By contrast, backwards differencing is pessimistic and suggests the policy β0 = .98 which is essentially optimal. Central differencing is neither optimistic nor pessimistic, but still suggests a good policy β0 = .99. C Implementation Details For our numerical experiments we leverage the Py EPO framework which was developed using Py Torch. For our experiments, we utilize Adam with learning rate 0.01 to optimize the training losses. We run Adam over 100 epochs with a batch size of 32 for each surrogate loss. For non-PG loss surrogates we use the recommended parameters provided by Py EPO. For our PG losses, we tune h by validating with a hold out set of training 200 samples. We note that similar results were obtained by validating against the training decision loss. Additionally, we initialize the PG losses at the SPO+ solution. To compute the expected regret, we generated a test set of 10000 samples and use it to estimate the relative regret described in Section 4. Some of our experiments were run on a high performance computing cluster administred by the University of Southern California s Center for Advanced Research Computing (CARC). The cluster facilitated multiple simulation runs of the experiments. However, a significant portion of the experiments in the paper (that did not require multiple Monte Carlo runs) were run on a Macbook Pro with an Apple M3 Max Chip with 96 GB Memory. D Additional Figures Well-Specified (m = 4) Misspecified m = 0 0.0 0.5 1.0 1.5 2.0 0.0 0.5 1.0 1.5 2.0 Figure 7: (Synthetic Data Generation from Section 4.1) Observations of (Xi, Yi) for m = 4 (left) and m = 0 (right). Red line is f (X) for each setting. Gaussian, Noise-Halfwidth = 0.3 Uniform, Noise-Halfwidth = 0.3 200 400 800 1600 200 400 800 1600 Training Size (n) Normalized Excess Regret (%) SPO+ PGF PGB FYL PGC Figure 8: (Shortest Path, Random Arc Costs) Excess regret normalized by optimal policy s performance as we vary the number of training samples. Error bars are 95% confidence intervals on the mean over 100 trials. E Omitted Proofs E.1 Proof for Lemma 2.1 Proof. We first prove (a), the Lipschitz property. 
We first claim V ( ) is B Lipschitz, since V (t) V (s) = t ˆπ(t) s ˆπ(s) = t (ˆπ(t) ˆπ(s)) | {z } 0, by optimality of ˆπ(t) + (t s) ˆπ(s) t s ˆπ(s) B t s , n ep_type h avg std 800 normal 0.001 0.050 0.008 800 normal 0.035 0.049 0.008 800 normal 0.188 0.048 0.008 800 normal 0.434 0.049 0.013 800 unif 0.001 0.074 0.009 800 unif 0.035 0.073 0.009 800 unif 0.188 0.072 0.010 800 unif 0.434 0.076 0.017 1600 normal 0.001 0.048 0.005 1600 normal 0.025 0.047 0.004 1600 normal 0.158 0.046 0.004 1600 normal 0.398 0.048 0.012 1600 unif 0.001 0.070 0.006 1600 unif 0.025 0.070 0.006 1600 unif 0.158 0.068 0.006 1600 unif 0.398 0.071 0.015 (a) Validation Performance n ep_type h avg std 800 normal 0.001 0.006 0.006 800 normal 0.035 0.005 0.006 800 normal 0.188 0.004 0.006 800 normal 0.434 0.005 0.011 800 unif 0.001 0.007 0.007 800 unif 0.035 0.006 0.007 800 unif 0.188 0.006 0.008 800 unif 0.434 0.009 0.015 1600 normal 0.001 0.003 0.002 1600 normal 0.025 0.003 0.002 1600 normal 0.158 0.002 0.002 1600 normal 0.398 0.003 0.010 1600 unif 0.001 0.004 0.003 1600 unif 0.025 0.004 0.003 1600 unif 0.158 0.002 0.003 1600 unif 0.398 0.005 0.011 (b) Normalized Excess Regret Figure 9: (Dependence on h, Planted Shortest Path Experiment.) We compare the performance of the policy learned by the PGB loss for different values of h across the 100 runs. a) Shows performance on Valiation set. For ease of comparison, we have scaled the validation performance and presented Pnval i=1 Y i (ˆπ(f(Xi)) ˆπ(f (Xi))) / Pnval i=1 Y i ˆπ(f (Xi)) . b) Shows performance out of sample. This out-of-sample performance is relatively flat in h, suggesting the precise choice of h does not matter much in this example. where the last inequality follows from Assumption 1.1. A symmetric argument holds for V (s) V (t) proving V is B Lipschitz. Returning to ˆℓb(t, y), write ˆℓb h(t, y) ˆℓb h(t , y) = V (t) V (t hy) h V (t ) V (t hy) |V (t) V (t )| h + |V (t hy) V (t hy)| An entirely analogous argument holds for ˆℓc h(t, y). We next prove (b), the boundededness property. Write ˆℓb h (t, y) = |V (t) V (t hy)| Again, an analogous argument holds for ˆℓc h(t, y). This completes the proof for (b) The proof of (c) follows directly from applying Danskin s Theorem [2, Prop B.22]. To prove (d), we see ˆℓb h (t, y) ℓ(t, y) = V (t + hy) V (t) = (t + hy) ˆπ(t + hy) (t + hy) ˆπ(t) where the last inequality holds by optimality of ˆπ(t + hy). Rearranging proves the result for (d). E.2 Proof of Lemma 2.2. Proof. We apply the dominated convergence theorem. Let ei Rd be the ith coordinate vector. Then, ti E h ˆℓb h(t, Y ) i = lim δ 0 E 1 δ (ˆℓb h(t + δ, Y ) ˆℓb h(t, Y )) (6) δ (ˆℓb h(t + δ, Y ) ˆℓb h(t, Y )). Then, by the Lipschitz property of Lemma 2.1, |Wδ| 2B h , and limδ 0 Wδ = ti ˆℓb h(t, Y ) almost surely. The result then holds for the ith partial derivative of ˆℓb h from the dominated convergence theorem. Since i was arbitrary, it holds for all i = 1, . . . , d, and thus holds for the gradient. An analogous proof holds for ˆℓc h. E.3 Auxiliary Lemmas from Section 3 Lemma E.1 (Interchange Derivative for H). Suppose Assumption 1.1 holds and that the optimizer ˆπ(T + λY ) is unique almost surely. Then H (λ) = E d dλV (T + λY ) . Proof. We use the bounded convergence theorem. Write H (λ) = lim δ 0 H(λ + δ) H(λ) = lim δ 0 E V (T + (λ + δ)Y ) V (T + λY ) Because V is B-Lipschitz, V (T +(λ+δ)Y ) V (T +λY ) δ Y 1. 
By the bounded convergence theorem we can interchange the limit and expectation yielding, H (λ) = E lim δ 0 V (T + (λ + δ)Y ) V (T + λY ) Since ˆπ(T + λY ) is unique, Danskin s theorem [2, Prop B.22] confirms V (T + λY ) is differentiable, and the above inner limit converges to the derivative d dλV (T + λY ). Lemma E.2 (Error of Backward Finite Difference). Suppose H is differentiable on [λ h, λ], and β-smooth. Then, H (λ) 1 h(H(λ) H(λ h)) βh. Proof. By the mean-value theorem, 1 h(H(λ) H(λ h)) = H (λ h) for some 0 h h. Thus, H (λ) 1 h(H(λ) H(λ h)) = H (λ) H (λ h) β h, by β-smoothness. Upper bounding h by h completes the proof. E.4 Proof for Lemma 3.2 Our first observation is that the error in our surrogate is bounded by the solution stability of the policy. A similar bound is used in Gupta, Huang, and Rusmevichientong [13] in a different context: Lemma E.3 (Solution Stability Bounds Error). For any t, y, h, 0 ˆℓb h(t, y) ℓ(t, y) y (ˆπ(t hy) ˆπ(t)) | {z } Solution Stability In words, solution stability measures how much the policy changes given small perturbation hy. Notions of stability appear throughout the machine learning literature and are fundamental to learnability [29]. Lemma E.3 relates the error of our surrogate to this fundamental quantity. We stress the relation holds for any t, h, y. Proof. The first inequality was proven in Lemma 2.1. For the second, note that V (t) = t ˆπ(t). Hence by rearranging, ˆℓb h(t, y) ℓ(t, y) = 1 h(V (t) V (t hy)) y ˆπ(t) h t (ˆπ(t) ˆπ(t hy) + y (ˆπ(t hy) ˆπ(t)) y (ˆπ(t hy) π(t)), by the optimality of ˆπ(t). To bound the expected approximation error in Lemma 3.2, we require the following elementary result: Lemma E.4 (Density Ratio Bound). Suppose Assumption 3.1 holds. Then, for any t, t such that t t 1/L, we have g(t ; f, Y ) g(t; f, Y ) 1 (e 1)L t t . Proof. Let g(t) g(t; f, Y ). By the convexity of the exponential, exp(x) 1 + (e 1)x 0 x 1, and exp(x) 1 + x x. (7) Let s(t) = log g(t). Then, = s(t ) s(t) L t t Taking the exponential of both sides and subtracting 1, we have g(t) 1 exp (L t t ) 1 (e 1)L t t , where the last inequality follows from Eq. (7) and our assumption that t t 1/L. Similarly, we have, g(t) 1 exp ( L t t ) 1 Hence, g(t ) g(t) 1 (e 1)L t t . This completes the proof. Proof of Lemma 3.2. Let T = f(X). Condition on Y and let g(t) g(t; f, Y ). Then, by Lemma E.3, we have 0 E h ˆℓb h(T, Y ) ℓ(T, Y ) Y i E Y (ˆπ(T h Y ) ˆπ(T)) Y . We bound this last quantity as follows: E Y (ˆπ(T h Y ) ˆπ(T)) Y (8) = Z g(t)Y ˆπ(t h Y )dt Z g(t)Y ˆπ(t)dt = Z Y ˆπ(t) (g(t + h Y ) g(t)) dt Z Y ˆπ(t) |g(t + h Y ) g(t)| dt B Z g(t) g(t + h Y ) (e 1)BL h Y Z g(t; f, Y )dt Taking the expectation over Y completes the proof. E.5 Proof for Theorem 3.4 Proof. We bound the uniform error as follows: i=1 ˆℓb h (f(Xi), Yi) E [ℓ(f(Xi), Yi)] i=1 ˆℓb h (f(Xi), Yi) E h ˆℓb h (f(Xi), Yi) i | {z } (i) i=1 E h ˆℓb h (f(Xi), Yi) ℓ(f(Xi), Yi) i | {z } (ii) We first bound (i). Let Rn SL(F) = E h ˆRn SL(F) i = E i=1 σiˆℓb h (f(Xi), Yi) By Lemma 2.1b, 0 ˆℓb h(f(Xi),Yi)+B 2B 1. Hence, we can apply a standard Rademacher complexity result [23, Theorem 3.3] to show for any 0 < δ < 1 2, with probability at least 1 δ, the following holds for all f F simultaneously: " ˆℓb h(f(Xi), Yi) + B ˆℓb h(f(Xi), Yi) + B 2B + 2Rn SL(F) + We can apply an identical argument to ˆℓb h(f(Xi),Yi)+B 2B to obtain a similar lower bound. 
Combining the two inequalities and taking the union bound, we have that with probability at least 1 2δ, the following holds for all f F simultaneously: 1 n i=1 ˆℓb h (f(Xi), Yi) E h ˆℓb h (f(Xi), Yi) i 4BRn SL(F) + 2B We next bound Rn SL(F) by applying Corollary 4 of Maurer [22] to show Rn SL(F) = E i=1 σiˆℓb h (f(Xi), Yi) i=1 σ i f(Xi) Here we have used the Lipschitz constant from Lemma 2.1a. Substituting this bound above and collecting constants shows that with probability at least 1 δ, h Rn(F) + B Finally, we use Lemma 3.2 to bound (ii). Combining proves the result. E.6 Proof for Theorem 3.7 Proof. We develop an alternative decomposition of the uniform error. Write 1 n i=1 ˆℓb h (f(Xi), Yi) E [ℓ(f(Xi), Yi)] i=1 ˆℓb h (f(Xi), Yi) ℓ(f(Xi), Yi) i=1 ℓ(f(Xi), Yi) E [ℓ(f(Xi), Yi)] Consider (i). We can write ˆℓb h (f(Xi), Yi) ℓ(f(Xi), Yi) i=1 Y i (ˆπ(f(Xi) h Yi) ˆπ(f(Xi))) i=1 Y i ˆπ(f(Xi) h Yi) E Y i ˆπ(f(Xi) h Yi) i=1 Y i ˆπ(f(Xi)) E Y i ˆπ(f(Xi)) i=1 E Y i (ˆπ(f(Xi) h Yi) ˆπ(f(Xi))) i=1 Y i ˆπ(f(Xi) h Yi) E Y i ˆπ(f(Xi) h Yi) i=1 E Y i (ˆπ(f(Xi) h Yi) ˆπ(f(Xi))) where the first inequality applies the triangle inequality, the second inequality applies Lemma E.3, and the last inequality combines similar terms by taking the supremum over h. Applying this bound in Eq. (9) shows i=1 ˆℓb h (f(Xi), Yi) E [ℓ(f(Xi), Yi)] i=1 Y i ˆπ( f(Xi, Yi)) E Y i ˆπ( f(Xi, Yi)) | {z } (a) i=1 E Y i (ˆπ(f(Xi) h Yi) ˆπ(f(Xi))) where we recall that F = f : f(x, y) = f(x) + hy, for f F, h R . Component (a) is bounded using Theorem 1 and Theorem 2 of Hu, Kallus, and Mao [14] showing that with probability at least 1 δ, ν log (|Z | + 1) log(5/δ) Component (b) is bounded by Eq. (8) in the proof of Lemma 3.2. Combining (a) and (b) components proves the result. E.7 Proof of Theorem 3.8 Proof. Both proofs follow the same general strategy. We start with the first statement. Let i=1 ˆℓb h (f(Xi), Yi) and L(f) = E [ℓ(f(X), Y )] Since the ˆfb minimizes Ln(f) over F and f OR minimizes L(f) over F, we see, L( ˆfb) L(f OR) = L( ˆfb) Ln( ˆf) + Ln( ˆfb) Ln(f OR) + Ln(f OR) L(f OR) Ln( ˆf) Ln(f OR) | {z } 0, by optimality of ˆ f +2 sup f F |Ln(f) L(f)| 2 sup f F |Ln(f) L(f)| where the first inequality holds by taking the supremum of the first two and last two pairs, and the second inequality holds by optimality of ˆf. Taking the expectation of both sides, we see ERegret( ˆfb) 2E sup f F |Ln(f) L(f)| To compute the expectation, we see by Theorem 3.4 and choosing h = q B L Rn(F) that sup f F |Ln(f) L(f)| p B3LRn(F) + B with probability at least 1 δ. Rearranging, we have sup f F |Ln(f) L(f)| p By tail integration over t and adding back p B3LRn(F), it follows that ERegret( ˆfb) 2E sup f F |Ln(f) L(f)| B3LRn(F) + B n, completing the proof of the first statement. We now proceed to the second statement. We follow the same line of argument until Eq. (10). Then, we instead use Theorem 3.7 with h = 1 L n 1 L to obtain sup f F |Ln(f) L(f)| C B ν log (|Z | + 1) log(1/δ) n for some universal constant C0 with probability at least 1 δ. Rearranging we have sup f F |Ln(f) L(f)| t C2 0B2ν log (|Z | + 1) Applying the tail integral gives us ERegret( ˆfb) 2E sup f F |Ln(f) L(f)| ν log (|Z | + 1) completing the proof. Neur IPS Paper Checklist Question: Do the main claims made in the abstract and introduction accurately reflect the paper s contributions and scope? 
Answer: [Yes] Justification: The introduction includes a bullet pointed Contributions" subsection, and each bullet point contains a forward reference to the relevant theorem or section of the paper. Guidelines: The answer NA means that the abstract and introduction do not include the claims made in the paper. The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discuss how our method applies to some but not all non-linear problems (pg. 1 and Appendix A), the computational challenges of a non-convex surrogate (pg. 3), and the need for more work on what is the best" zeroth order gradient (pg. 10). Guidelines: The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. The authors are encouraged to create a separate "Limitations" section in their paper. The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [Yes] Justification: Most proofs appear in the supplemental material. The main body provides the key intuition behind each result and all theorems have formal statements with explicit assumptions. 
Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: Our supplemental materials provide Python code that leverages the (public) package PyEPO (https://github.com/khalil-research/PyEPO). Together, these allow one to generate the data used in our experiments and to run our algorithm and each of the benchmarks. All experiments are also described in detail in the main body (Section 4), with some implementation-specific details relegated to the appendix.
Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
  (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
  (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
  (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
  (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility.
      In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: All code necessary to reproduce our experiments is included in the supplemental materials. Our experiments only use synthetic data, and code for generating that data is also included in the supplemental materials.
Guidelines:
- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: Please see Section 4 and Appendix C.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [Yes]
Justification: Each plot contains this information in the caption. For example, see Fig. 8.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed-form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: See Appendix C.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code Of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?
Answer: [Yes]
Justification: We have read and discussed the Code of Ethics collectively as authors and are confident we have not violated it.
Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: Our work is primarily foundational research in the field of decision-aware learning, providing the first theoretically grounded approach in misspecified settings. As foundational work, it has no direct societal impacts.
Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments.
  However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: Again, as foundational work, our results do not pose such risks.
Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: The only existing asset used in our paper is the PyEPO package, which is available under the MIT License.
Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: We do not provide any assets with the paper, only code to assist reproducibility.
Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: Our paper does not contain human subjects or crowdsourcing research.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: Our paper does not contain any human subjects or crowdsourcing experiments.
Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.