# Differentiable Convex Optimization Layers

Akshay Agrawal (Stanford University, akshayka@cs.stanford.edu) · Brandon Amos (Facebook AI, bda@fb.com) · Shane Barratt (Stanford University, sbarratt@stanford.edu) · Stephen Boyd (Stanford University, boyd@stanford.edu) · Steven Diamond (Stanford University, diamond@cs.stanford.edu) · J. Zico Kolter (Carnegie Mellon University and Bosch Center for AI, zkolter@cs.cmu.edu)

*Authors listed in alphabetical order. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.*

## Abstract

Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program. We implement our methodology in version 1.1 of CVXPY, a popular Python-embedded DSL for convex optimization, and additionally implement differentiable layers for disciplined convex programs in PyTorch and TensorFlow 2.0. Our implementation significantly lowers the barrier to using convex optimization problems in differentiable programs. We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) with specialized differentiable solvers from past work.

## 1 Introduction

Recent work has shown how to differentiate through specific subclasses of convex optimization problems, which can be viewed as functions mapping problem data to solutions [6, 31, 10, 1, 4]. These layers have found several applications [40, 6, 35, 27, 5, 53, 75, 52, 12, 11], but many applications remain relatively unexplored (see, e.g., [4, 8]).

While convex optimization layers can provide useful inductive bias in end-to-end models, their adoption has been slowed by how difficult they are to use. Existing layers (e.g., [6, 1]) require users to transform their problems into rigid canonical forms by hand. This process is tedious, error-prone, and time-consuming, and often requires familiarity with convex analysis. Domain-specific languages (DSLs) for convex optimization abstract away the process of converting problems to canonical forms, letting users specify problems in a natural syntax; programs are then lowered to canonical forms and supplied to numerical solvers behind the scenes [3]. DSLs enable rapid prototyping and make convex optimization accessible to scientists and engineers who are not necessarily experts in optimization. The point of this paper is to do what DSLs have done for convex optimization, but for differentiable convex optimization layers.
In this work, we show how to efficiently differentiate through disciplined convex programs [45], a large class of convex optimization problems that can be parsed and solved by most DSLs for convex optimization, including CVX [44], CVXPY [29, 3], Convex.jl [72], and CVXR [39]. Concretely, we introduce disciplined parametrized programming (DPP), a grammar for producing parametrized disciplined convex programs. Given a program produced by DPP, we show how to obtain an affine map from parameters to problem data, and an affine map from a solution of the canonicalized problem to a solution of the original problem. We refer to this representation of a problem (i.e., the composition of an affine map from parameters to problem data, a solver, and an affine map to retrieve a solution) as affine-solver-affine (ASA) form.

Our contributions are three-fold:

1. We introduce DPP, a new grammar for parametrized convex optimization problems, and ASA form, which ensures that the mapping from problem parameters to problem data is affine. DPP and ASA form make it possible to differentiate through DSLs for convex optimization, without explicitly backpropagating through the operations of the canonicalizer. We present DPP and ASA form in §4.
2. We implement the DPP grammar and a reduction from parametrized programs to ASA form in CVXPY 1.1. We also implement differentiable convex optimization layers in PyTorch [66] and TensorFlow 2.0 [2]. Our software substantially lowers the barrier to using convex optimization layers in differentiable programs and neural networks (§5).
3. We present applications to sensitivity analysis for linear machine learning models, and to learning control-Lyapunov policies for stochastic control (§6). We also show that for quadratic programs (QPs), our layer's runtime is competitive with OptNet's specialized solver qpth [6] (§7).

## 2 Related work

**DSLs for convex optimization.** DSLs for convex optimization allow users to specify convex optimization problems in a natural way that follows the math. At the foundation of these languages is a ruleset from convex analysis known as disciplined convex programming (DCP) [45]. A mathematical program written using DCP is called a disciplined convex program, and all such programs are convex. Disciplined convex programs can be canonicalized to cone programs by expanding each nonlinear function into its graph implementation [43]. DPP can be seen as a subset of DCP that mildly restricts the way parameters (symbolic constants) can be used; a similar grammar is described in [26]. The techniques used in this paper to canonicalize parametrized programs are similar to the methods used by code generators for optimization problems, such as CVXGEN [60], which targets QPs, and QCML, which targets second-order cone programs (SOCPs) [26, 25].

**Differentiation of optimization problems.** Convex optimization problems do not in general admit closed-form solutions. It is nonetheless possible to differentiate through convex optimization problems by implicitly differentiating their optimality conditions (when certain regularity conditions are satisfied) [36, 68, 6]. Recently, methods were developed to differentiate through convex cone programs in [24, 1] and [4, §7.3]. Because every convex program can be cast as a cone program, these methods are general. The software released alongside [1], however, requires users to express their problems in conic form, which requires a working knowledge of convex analysis.
Our work abstracts away conic form, letting the user differentiate through high-level descriptions of convex optimization problems; we canonicalize these descriptions to cone programs on the user's behalf. This makes it possible to rapidly experiment with new families of differentiable programs, induced by different kinds of convex optimization problems. Because we differentiate through a cone program by implicitly differentiating its solution map, our method can be paired with any algorithm for solving convex cone programs. In contrast, methods that differentiate through every step of an optimization procedure must be customized for each algorithm (e.g., [33, 30, 56]). Moreover, such methods only approximate the derivative, whereas we compute it analytically (when it exists).

## 3 Background

**Convex optimization problems.** A parametrized convex optimization problem can be represented as

$$
\begin{array}{ll}
\text{minimize} & f_0(x; \theta) \\
\text{subject to} & f_i(x; \theta) \le 0, \quad i = 1, \ldots, m_1, \\
& g_i(x; \theta) = 0, \quad i = 1, \ldots, m_2,
\end{array}
\qquad (1)
$$

where $x \in \mathbf{R}^n$ is the optimization variable and $\theta \in \mathbf{R}^p$ is the parameter vector [22, §4.2]. The functions $f_i : \mathbf{R}^n \to \mathbf{R}$ are convex, and the functions $g_i : \mathbf{R}^n \to \mathbf{R}$ are affine. A solution to (1) is any vector $x^\star \in \mathbf{R}^n$ that minimizes the objective function, among all choices that satisfy the constraints. The problem (1) can be viewed as a (possibly multi-valued) function that maps a parameter to solutions. In this paper, we consider the case when this solution map is single-valued, and we denote it by $S : \mathbf{R}^p \to \mathbf{R}^n$. The function $S$ maps a parameter $\theta$ to a solution $x^\star$. From the perspective of end-to-end learning, $\theta$ (or the parameters it depends on) is learned in order to minimize some scalar function of $x^\star$. In this paper, we show how to obtain the derivative of $S$ with respect to $\theta$, when (1) is a DPP-compliant program (and when the derivative exists).

We focus on convex optimization because it is a powerful modeling tool, with applications in control [20, 16, 71], finance [57, 19], energy management [63], supply chain [17, 15], physics [51, 8], computational geometry [73], aeronautics [48], and circuit design [47, 21], among other fields.

**Disciplined convex programming.** DCP is a grammar for constructing convex optimization problems [45, 43]. It consists of functions, or atoms, and a single rule for composing them. An atom is a function with known curvature (affine, convex, or concave) and per-argument monotonicities. The composition rule is based on the following theorem from convex analysis. Suppose $h : \mathbf{R}^k \to \mathbf{R}$ is convex, nondecreasing in the arguments indexed by a set $I_1 \subseteq \{1, 2, \ldots, k\}$, and nonincreasing in the arguments indexed by $I_2$. Suppose also that $g_i : \mathbf{R}^n \to \mathbf{R}$ are convex for $i \in I_1$, concave for $i \in I_2$, and affine for $i \in (I_1 \cup I_2)^c$. Then the composition $f(x) = h(g_1(x), g_2(x), \ldots, g_k(x))$ is convex. DCP allows atoms to be composed so long as the composition satisfies this theorem. Every disciplined convex program is a convex optimization problem, but the converse is not true. This is not a limitation in practice, because atom libraries are extensible (i.e., the problem class corresponding to DCP is parametrized by which atoms are implemented). In this paper, we consider problems of the form (1) in which the functions $f_i$ and $g_i$ are constructed using DPP, a version of DCP that performs parameter-dependent curvature analysis (see §4.1).
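As a concrete illustration of the composition rule, DCP-based DSLs classify the curvature of every subexpression automatically. A minimal sketch using CVXPY's public curvature interface (the specific expressions are ours, not from the paper):

```python
import cvxpy as cp

x = cp.Variable(3)
g = cp.norm(x, 2)        # atom: convex
f = cp.maximum(g, 1)     # maximum is convex and nondecreasing in each argument,
                         # so its composition with the convex g is convex
print(f.curvature)       # CONVEX
print(f.is_dcp())        # True

h = cp.sqrt(g)           # sqrt is concave and nondecreasing, but its argument
                         # is convex, so the composition theorem does not apply
print(h.is_dcp())        # False: rejected by DCP
```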
**Cone programs.** A (convex) cone program is an optimization problem of the form

$$
\begin{array}{ll}
\text{minimize} & c^T x \\
\text{subject to} & b - Ax \in \mathcal{K},
\end{array}
\qquad (2)
$$

where $x \in \mathbf{R}^n$ is the variable (there are several other equivalent forms for cone programs). The set $\mathcal{K} \subseteq \mathbf{R}^m$ is a nonempty, closed, convex cone, and the problem data are $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$, and $c \in \mathbf{R}^n$. In this paper we assume that (2) has a unique solution.

Our method for differentiating through disciplined convex programs requires calling a solver (an algorithm for solving an optimization problem) in the forward pass. We focus on the special case in which the solver is a conic solver. A conic solver targets convex cone programs, implementing a function $s : \mathbf{R}^{m \times n} \times \mathbf{R}^m \times \mathbf{R}^n \to \mathbf{R}^n$ that maps the problem data $(A, b, c)$ to a solution $\tilde{x}^\star$. DCP-based DSLs for convex optimization can canonicalize disciplined convex programs to equivalent cone programs, producing the problem data $A$, $b$, $c$, and the cone $\mathcal{K}$ [3]; $(A, b, c)$ depend on the parameter $\theta$ and the canonicalization procedure. These data are supplied to a conic solver to obtain a solution; there are many high-quality implementations of conic solvers (e.g., [64, 9, 32]).

## 4 Differentiating through disciplined convex programs

We consider a disciplined convex program with variable $x \in \mathbf{R}^n$, parametrized by $\theta \in \mathbf{R}^p$; its solution map can be viewed as a function $S : \mathbf{R}^p \to \mathbf{R}^n$ that maps parameters to the solution (see §3). In this section we describe the form of $S$ and how to evaluate $\mathsf{D}^T S$, allowing us to backpropagate through parametrized disciplined convex programs. (We use the notation $\mathsf{D}f(x)$ to denote the derivative of a function $f$ evaluated at $x$, and $\mathsf{D}^T f(x)$ to denote the adjoint of the derivative at $x$.)

We consider the special case of canonicalizing a disciplined convex program to a cone program; with little extra effort, our method can be extended to other targets. We express $S$ as the composition $R \circ s \circ C$: the canonicalizer $C$ maps parameters to cone problem data $(A, b, c)$, the cone solver $s$ solves the cone problem, furnishing a solution $\tilde{x}^\star$, and the retriever $R$ maps $\tilde{x}^\star$ to a solution $x^\star$ of the original problem. A problem is in ASA form if $C$ and $R$ are affine. By the chain rule, the adjoint of the derivative of a disciplined convex program is

$$
\mathsf{D}^T S(\theta) = \mathsf{D}^T C(\theta) \, \mathsf{D}^T s(A, b, c) \, \mathsf{D}^T R(\tilde{x}^\star).
$$

The remainder of this section proceeds as follows. In §4.1, we present DPP, a ruleset for constructing disciplined convex programs reducible to ASA form. In §4.2, we describe the canonicalization procedure and show how to represent $C$ as a sparse matrix. In §4.3, we review how to differentiate through cone programs, and in §4.4, we describe the form of $R$.

### 4.1 Disciplined parametrized programming

DPP is a grammar for producing parametrized disciplined convex programs from a set of functions, or atoms, with known curvature (constant, affine, convex, or concave) and per-argument monotonicities. A program produced using DPP is called a disciplined parametrized program. Like DCP, DPP is based on the well-known composition theorem for convex functions, and it guarantees that every function appearing in a disciplined parametrized program is affine, convex, or concave. Unlike DCP, DPP also guarantees that the produced program can be reduced to ASA form.

A disciplined parametrized program is an optimization problem of the form

$$
\begin{array}{ll}
\text{minimize} & f_0(x, \theta) \\
\text{subject to} & f_i(x, \theta) \le \tilde{f}_i(x, \theta), \quad i = 1, \ldots, m_1, \\
& g_i(x, \theta) = \tilde{g}_i(x, \theta), \quad i = 1, \ldots, m_2,
\end{array}
\qquad (3)
$$

where $x \in \mathbf{R}^n$ is a variable, $\theta \in \mathbf{R}^p$ is a parameter, the $f_i$ are convex, the $\tilde{f}_i$ are concave, the $g_i$ and $\tilde{g}_i$ are affine, and all expressions are constructed using DPP. An expression can be thought of as a tree, where the nodes are atoms and the leaves are variables, constants, or parameters. A parameter is a symbolic constant with known properties, such as sign, but unknown numeric value.
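In CVXPY, for instance, these properties are declared when a parameter is constructed; a minimal sketch (ours, not from the paper):

```python
import cvxpy as cp

lambd = cp.Parameter(nonneg=True)   # scalar parameter, known to be nonnegative
F = cp.Parameter((20, 10))          # matrix parameter with no sign attribute
print(lambd.is_nonneg())            # True: DPP uses the sign to determine the
                                    # monotonicity of products involving lambd
print(lambd.value)                  # None until a numeric value is assigned
```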
An expression is said to be parameter-affine if it has no variables among its leaves and is affine in its parameters; an expression is parameter-free if it contains no parameters, and variable-free if it contains no variables. Every disciplined parametrized program is a disciplined convex program, but the converse is not true. DPP generates programs reducible to ASA form by introducing two restrictions on expressions involving parameters:

1. In DCP, the curvature of each subexpression appearing in the problem description is classified as convex, concave, affine, or constant, and all parameters are classified as constant. In DPP, parameters are classified as affine, just like variables.
2. In DCP, the product atom $\phi_{\text{prod}}(x, y) = xy$ is affine if $x$ or $y$ is a constant (i.e., variable-free). Under DPP, the product is affine when at least one of the following holds: $x$ or $y$ is constant (i.e., both parameter-free and variable-free); or one of the expressions is parameter-affine and the other is parameter-free.

The DPP specification can (and may in the future) be extended to handle several other combinations of expressions and parameters.

**Example.** Consider the program

$$
\begin{array}{ll}
\text{minimize} & \|Fx - g\|_2 + \lambda \|x\|_2 \\
\text{subject to} & x \ge 0,
\end{array}
\qquad (4)
$$

with variable $x \in \mathbf{R}^n$ and parameters $F \in \mathbf{R}^{m \times n}$, $g \in \mathbf{R}^m$, and $\lambda > 0$. If $\|\cdot\|_2$, the product, negation, and the sum are atoms, then this problem is DPP-compliant: $\phi_{\text{prod}}(F, x) = Fx$ is affine because the atom is affine ($F$ is parameter-affine and $x$ is parameter-free) and $F$ and $x$ are affine; $Fx - g$ is affine because $Fx$ and $-g$ are affine and the sum of affine expressions is affine; $\|Fx - g\|_2$ is convex because $\|\cdot\|_2$ is convex and a convex function composed with an affine function is convex; $\phi_{\text{prod}}(\lambda, \|x\|_2)$ is convex because the product atom is affine ($\lambda$ is parameter-affine and $\|x\|_2$ is parameter-free), it is increasing in $\|x\|_2$ (because $\lambda$ is nonnegative), and $\|x\|_2$ is convex; and the objective is convex because the sum of convex expressions is convex.

**Non-DPP transformations of parameters.** It is often possible to re-express non-DPP expressions in DPP-compliant ways; a code sketch follows this list. Consider the following examples, in which the $p_i$ are parameters:

- The expression $\phi_{\text{prod}}(p_1, p_2)$ is not DPP because both of its arguments are parametrized. It can be rewritten in a DPP-compliant way by introducing a variable $s$, replacing $p_1 p_2$ with the expression $p_1 s$, and adding the constraint $s = p_2$.
- Let $e$ be an expression. The quotient $e / p_1$ is not DPP, but it can be rewritten as $e p_2$, where $p_2$ is a new parameter representing $1 / p_1$.
- The expression $\log |p_1|$ is not DPP because $\log$ is concave and increasing while $|\cdot|$ is convex. It can be rewritten as $\log p_2$, where $p_2$ is a new parameter representing $|p_1|$.
- If $P_1 \in \mathbf{R}^{n \times n}$ is a parameter representing a (symmetric) positive semidefinite matrix and $x \in \mathbf{R}^n$ is a variable, the expression $\phi_{\text{quadform}}(x, P_1) = x^T P_1 x$ is not DPP. It can be rewritten as $\|P_2 x\|_2^2$, where $P_2$ is a new parameter representing $P_1^{1/2}$.
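A hedged sketch of these rewrites in CVXPY 1.1 (the parameter names and dimensions are ours; `is_dpp()` is the library's DPP check on expressions and problems):

```python
import cvxpy as cp

n = 3
x = cp.Variable(n)
p1 = cp.Parameter(nonneg=True)
p2 = cp.Parameter(nonneg=True)

# Product of two parameters: not DPP.
print((p1 * p2).is_dpp())            # False

# DPP-compliant rewrite: replace p2 with a variable s constrained to equal it.
s = cp.Variable()
product = p1 * s                     # parameter-affine times parameter-free
constraints = [s == p2]
print(product.is_dpp())              # True

# Quotient by a parameter: supply 1/p1 as its own parameter instead.
p1_inv = cp.Parameter(nonneg=True)   # caller sets p1_inv.value = 1 / p1.value
quotient = p1_inv * cp.sum(x)
print(quotient.is_dpp())             # True

# Quadratic form with a parametrized PSD matrix: pass a square root of P.
P_sqrt = cp.Parameter((n, n))        # caller sets P_sqrt.value = sqrtm(P)
quad = cp.sum_squares(P_sqrt @ x)    # equals x^T P x when P = P_sqrt^T P_sqrt
print(quad.is_dpp())                 # True
```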
### 4.2 Canonicalization

The canonicalization of a disciplined parametrized program to ASA form is similar to the canonicalization of a disciplined convex program to a cone program. All nonlinear atoms are expanded into their graph implementations [43], generating affine expressions of variables. The resulting expressions are also affine in the problem parameters because of the DPP rules. Because these expressions represent the problem data for the cone program, the function $C$ from parameters to problem data is affine.

As an example, the DPP program (4) can be canonicalized to the cone program

$$
\begin{array}{ll}
\text{minimize} & t_1 + \lambda t_2 \\
\text{subject to} & (t_1, Fx - g) \in \mathcal{Q}^{m+1}, \\
& (t_2, x) \in \mathcal{Q}^{n+1}, \quad x \in \mathbf{R}^n_+,
\end{array}
$$

where $(t_1, t_2, x)$ is the variable, $\mathcal{Q}^n$ is the $n$-dimensional second-order cone, and $\mathbf{R}^n_+$ is the nonnegative orthant. When rewritten in the standard form (2), with the variable ordered as $(t_1, t_2, x)$, this problem has data

$$
A = \begin{bmatrix}
-1 & & \\
& & -F \\
& -1 & \\
& & -I \\
& & -I
\end{bmatrix}, \qquad
b = \begin{bmatrix} 0 \\ -g \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad
c = \begin{bmatrix} 1 \\ \lambda \\ 0 \end{bmatrix}, \qquad
\mathcal{K} = \mathcal{Q}^{m+1} \times \mathcal{Q}^{n+1} \times \mathbf{R}^n_+,
$$

with blank spaces representing zeros. In this case, the parameters $F$, $g$, and $\lambda$ are just negated and copied into the problem data.

**The canonicalization map.** The full canonicalization procedure (which includes expanding graph implementations) runs only the first time the problem is canonicalized. When the same problem is canonicalized again (e.g., with new parameter values), the problem data $(A, b, c)$ can be obtained by multiplying a sparse matrix representing $C$ by the parameter vector (and reshaping); the adjoint of the derivative can be computed by simply transposing the matrix. The naïve alternative, which expands graph implementations and extracts new problem data every time the parameters are updated (and differentiates through this algorithm in the backward pass), is much slower (see §7). The following lemma tells us that $C$ can be represented as a sparse matrix.

**Lemma 1.** The canonicalizer map $C$ for a disciplined parametrized program can be represented with a sparse matrix $Q \in \mathbf{R}^{n \times (p+1)}$ and a sparse tensor $R \in \mathbf{R}^{m \times (n+1) \times (p+1)}$, where $m$ is the dimension of the constraints. Letting $\tilde{\theta} \in \mathbf{R}^{p+1}$ denote the concatenation of $\theta$ and the scalar offset $1$, the problem data can be obtained as

$$
c = Q \tilde{\theta}, \qquad [A \;\; b] = \sum_{i=1}^{p+1} R[:, :, i] \, \tilde{\theta}_i.
$$

The proof is given in Appendix A.
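In CVXPY 1.1 the cached canonicalization can be observed through the standard problem-data interface; a minimal sketch for problem (4) (the solver choice, dimensions, and data are ours):

```python
import numpy as np
import cvxpy as cp

m, n = 20, 10
x = cp.Variable(n)
F = cp.Parameter((m, n))
g = cp.Parameter(m)
lambd = cp.Parameter(nonneg=True)
problem = cp.Problem(
    cp.Minimize(cp.norm(F @ x - g) + lambd * cp.norm(x)), [x >= 0])

F.value = np.random.randn(m, n)
g.value = np.random.randn(m)
lambd.value = 0.5

# The first call canonicalizes the problem; later calls with new parameter
# values only multiply the cached sparse matrix representing C.
data, chain, inverse_data = problem.get_problem_data(cp.SCS)
print(data["A"].shape, data["b"].shape, data["c"].shape)
```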
### 4.3 Derivative of a conic solver

By applying the implicit function theorem [36, 34] to the optimality conditions of a cone program, it is possible to compute its derivative $\mathsf{D}s(A, b, c)$. To compute $\mathsf{D}^T s(A, b, c)$, we follow the methods presented in [1] and [4, §7.3]; our calculations are given in Appendix B. If the cone program is not differentiable at a solution, we compute a heuristic quantity, as is common practice in automatic differentiation [46, 14]. In particular, at non-differentiable points, a linear system that arises in the computation of the derivative might fail to be invertible. When this happens, we compute a least-squares solution to the system instead. See Appendix B for details.

### 4.4 Solution retrieval

The cone program obtained by canonicalizing a DPP-compliant problem uses the variable $\tilde{x} = (x, s) \in \mathbf{R}^n \times \mathbf{R}^k$, where $s \in \mathbf{R}^k$ is a slack variable. If $\tilde{x}^\star = (x^\star, s^\star)$ is optimal for the cone program, then $x^\star$ is optimal for the original problem (up to reshaping and scaling by a constant). As such, a solution to the original problem can be obtained by slicing, i.e., $R(\tilde{x}^\star) = x^\star$. This map is evidently linear.

## 5 Implementation

We have implemented DPP and the reduction to ASA form in version 1.1 of CVXPY, a Python-embedded DSL for convex optimization [29, 3]; our implementation extends CVXCanon, an open-source library that reduces affine expression trees to matrices [62]. We have also implemented differentiable convex optimization layers in PyTorch and TensorFlow 2.0. These layers implement the forward and backward maps described in §4; they also efficiently support batched inputs (see §7). We use the diffcp package [1] to obtain derivatives of cone programs. We modified this package for performance: we ported much of it from Python to C++, added an option to compute the derivative using a dense direct solve, and made the forward and backward passes amenable to parallelization.

Our implementation of DPP and ASA form, coupled with our PyTorch and TensorFlow layers, makes our software the first DSL for differentiable convex optimization layers. Our software is open-source: CVXPY and our layers are available at https://www.cvxpy.org and https://www.github.com/cvxgrp/cvxpylayers.

**Example.** Below is an example of how to specify the problem (4) using CVXPY 1.1.

```python
import cvxpy as cp

m, n = 20, 10
x = cp.Variable((n, 1))
F = cp.Parameter((m, n))
g = cp.Parameter((m, 1))
lambd = cp.Parameter((1, 1), nonneg=True)
objective_fn = cp.norm(F @ x - g) + lambd * cp.norm(x)
constraints = [x >= 0]
problem = cp.Problem(cp.Minimize(objective_fn), constraints)
assert problem.is_dpp()
```

The code below shows how to use our PyTorch layer to solve and backpropagate through problem (the code for our TensorFlow layer is almost identical; see Appendix D).

```python
1  import torch
2  from cvxpylayers.torch import CvxpyLayer
3
4  F_t = torch.randn(m, n, requires_grad=True)
5  g_t = torch.randn(m, 1, requires_grad=True)
6  lambd_t = torch.rand(1, 1, requires_grad=True)
7  layer = CvxpyLayer(
8      problem, parameters=[F, g, lambd], variables=[x])
9  x_star, = layer(F_t, g_t, lambd_t)
10 x_star.sum().backward()
```

Constructing layer in lines 7-8 canonicalizes problem to extract $C$ and $R$, as described in §4.2. Calling layer in line 9 applies the map $R \circ s \circ C$ from §4, returning a solution to the problem. Line 10 computes the gradient of the sum of x_star with respect to F_t, g_t, and lambd_t.

## 6 Examples

In this section, we present two applications of differentiable convex optimization, meant to be suggestive of possible use cases for our layer. We give more examples in Appendix E.

### 6.1 Data poisoning attack

We are given training data $(x_i, y_i)_{i=1}^N$, where $x_i \in \mathbf{R}^n$ are feature vectors and $y_i \in \{0, 1\}$ are labels. Suppose we fit a model for this classification problem by solving

$$
\text{minimize} \quad \frac{1}{N} \sum_{i=1}^N \ell(\theta; x_i, y_i) + r(\theta),
\qquad (6)
$$

where the loss function $\ell(\theta; x_i, y_i)$ is convex in $\theta \in \mathbf{R}^n$ and $r(\theta)$ is a convex regularizer. We hope that the test loss $\mathcal{L}^{\text{test}}(\theta) = \frac{1}{M} \sum_{i=1}^M \ell(\theta; \tilde{x}_i, \tilde{y}_i)$ is small, where $(\tilde{x}_i, \tilde{y}_i)_{i=1}^M$ is our test set.

Assume that our training data is subject to a data poisoning attack [18, 49] before it is supplied to us. The adversary has full knowledge of our modeling choice, meaning that they know the form of (6), and seeks to perturb the data to maximally increase our loss on the test set, to which they also have access. The adversary is permitted to apply an additive perturbation $\delta_i \in \mathbf{R}^n$ to each of the training points $x_i$, with the perturbations satisfying $\|\delta_i\|_\infty \le 0.01$.

Let $\theta^\star$ be optimal for (6). The gradient of the test loss with respect to a training point, $\nabla_{x_i} \mathcal{L}^{\text{test}}(\theta^\star)$, gives the direction in which the point should be moved to achieve the greatest increase in test loss. Hence, one reasonable adversarial policy is to set $x_i := x_i + 0.01 \, \mathrm{sign}(\nabla_{x_i} \mathcal{L}^{\text{test}}(\theta^\star))$. The quantity $0.01 \sum_{i=1}^N \|\nabla_{x_i} \mathcal{L}^{\text{test}}(\theta^\star)\|_1$ is the predicted increase in our test loss due to the poisoning.

**Numerical example.** We consider 30 training points and 30 test points in $\mathbf{R}^2$, and we fit a logistic model with elastic-net regularization. This problem can be written using DPP, with the $x_i$ as parameters (see Appendix C for the code; a rough sketch follows below).
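A rough, hypothetical sketch of this experiment with our PyTorch layer (the synthetic data, regularization weights, and names are ours; the paper's actual code is in Appendix C):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

torch.manual_seed(0)
N, M, n = 30, 30, 2  # train size, test size, feature dimension

# Synthetic train/test data with labels in {0, 1}.
y_train = (torch.rand(N, 1) > 0.5).double()
X_test = torch.randn(M, n, dtype=torch.float64)
y_test = (torch.rand(M, 1) > 0.5).double()

# Logistic regression with elastic-net regularization, as in (6); the
# training features are a parameter so we can differentiate w.r.t. them.
theta = cp.Variable((n, 1))
X = cp.Parameter((N, n))
y = y_train.numpy()
log_likelihood = cp.sum(cp.multiply(y, X @ theta) - cp.logistic(X @ theta))
r = 0.1 * cp.norm(theta, 1) + 0.1 * cp.sum_squares(theta)
problem = cp.Problem(cp.Minimize(-log_likelihood / N + r))
assert problem.is_dpp()
fit = CvxpyLayer(problem, parameters=[X], variables=[theta])

# Solve for theta at the current training data, then backpropagate the
# test loss to the training points: X_train.grad holds the gradients.
X_train = torch.randn(N, n, dtype=torch.float64, requires_grad=True)
theta_star, = fit(X_train)
logits = X_test @ theta_star
test_loss = (torch.log1p(torch.exp(logits)) - y_test * logits).mean()
test_loss.backward()
delta = 0.01 * X_train.grad.sign()  # the adversary's perturbation
```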
*Figure 1: Gradients (black lines) of the logistic test loss with respect to the training data.*

We used our convex optimization layer to fit this model and obtain the gradient of the test loss with respect to the training data. Figure 1 visualizes the results. The orange and blue points are training data, belonging to different classes. The red line (dashed) is the hyperplane learned by fitting the model, while the blue line (solid) is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set.

### 6.2 Convex approximate dynamic programming

We consider a stochastic control problem of the form

$$
\begin{array}{ll}
\text{minimize} & \lim_{T \to \infty} \mathbf{E} \left[ \dfrac{1}{T} \displaystyle\sum_{t=0}^{T-1} \|x_t\|_2^2 + \|\phi(x_t)\|_2^2 \right] \\
\text{subject to} & x_{t+1} = A x_t + B \phi(x_t) + \omega_t, \quad t = 0, 1, \ldots,
\end{array}
\qquad (7)
$$

where $x_t \in \mathbf{R}^n$ is the state, $\phi : \mathbf{R}^n \to \mathcal{U} \subseteq \mathbf{R}^m$ is the policy, $\mathcal{U}$ is a convex set representing the allowed set of controls, and $\omega_t$ is a (random, i.i.d.) disturbance. Here the variable is the policy $\phi$, and the expectation is taken over disturbances and the initial state $x_0$. If $\mathcal{U}$ is not an affine set, then this problem is in general very difficult to solve [50, 13].

**ADP policy.** A common heuristic for solving (7) is approximate dynamic programming (ADP), which parametrizes $\phi$ and replaces the minimization over functions with a minimization over parameters. In this example, we take $\mathcal{U}$ to be the unit ball and we represent $\phi$ as a quadratic control-Lyapunov policy [74]. Evaluating $\phi$ corresponds to solving the SOCP

$$
\begin{array}{ll}
\text{minimize} & u^T P u + x_t^T Q u + q^T u \\
\text{subject to} & \|u\|_2 \le 1,
\end{array}
\qquad (8)
$$

with variable $u$ and parameters $P$, $Q$, $q$, and $x_t$. We can run stochastic gradient descent (SGD) on $P$, $Q$, and $q$ to approximately solve (7), which requires differentiating through (8). Note that if $u$ were unconstrained, (7) could be solved exactly via linear quadratic regulator (LQR) theory [50]. The policy (8) can be written using DPP (see Appendix C for the code).

*Figure 2: Per-iteration cost while learning an ADP policy for stochastic control.*

**Numerical example.** Figure 2 plots the estimated average cost for each iteration of gradient descent for a numerical example, with $x_t \in \mathbf{R}^2$ and $u \in \mathbf{R}^3$, a time horizon of $T = 25$, and a batch size of 8. We initialize our policy's parameters with the LQR solution obtained by ignoring the constraint on $u$. This method decreased the average cost by roughly 40%.
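A rough, hypothetical sketch of the policy layer (8) and the SGD loop (the dynamics, initialization, and names are ours; the paper's actual code, which initializes from the LQR solution, is in Appendix C). For DPP, we pass $P$ through a square-root parameter and fold $Q^T x_t + q$ into a single linear-cost parameter computed in PyTorch, so gradients reach $P$, $Q$, and $q$:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

torch.manual_seed(0)
n, m, T, batch = 2, 3, 25, 8

# One evaluation of the policy: minimize u'Pu + (Q'x + q)'u s.t. ||u|| <= 1.
u = cp.Variable((m, 1))
P_sqrt = cp.Parameter((m, m))    # P = P_sqrt' P_sqrt
lin = cp.Parameter((m, 1))       # represents Q^T x_t + q
objective = cp.sum_squares(P_sqrt @ u) + lin.T @ u
policy = CvxpyLayer(cp.Problem(cp.Minimize(objective), [cp.norm(u) <= 1]),
                    parameters=[P_sqrt, lin], variables=[u])

A = 0.5 * torch.randn(n, n, dtype=torch.float64)
B = torch.randn(n, m, dtype=torch.float64)
P_sqrt_t = torch.eye(m, dtype=torch.float64).requires_grad_()
Q_t = torch.zeros(n, m, dtype=torch.float64, requires_grad=True)
q_t = torch.zeros(m, 1, dtype=torch.float64, requires_grad=True)
opt = torch.optim.SGD([P_sqrt_t, Q_t, q_t], lr=0.1)

for iteration in range(50):
    opt.zero_grad()
    x = torch.randn(batch, n, 1, dtype=torch.float64)  # initial states
    cost = 0.0
    for t in range(T):
        lin_t = Q_t.T @ x + q_t               # batched linear cost, (batch, m, 1)
        u_t, = policy(P_sqrt_t.expand(batch, m, m), lin_t)
        cost = cost + (x.square().sum() + u_t.square().sum()) / (T * batch)
        w = 0.1 * torch.randn(batch, n, 1, dtype=torch.float64)
        x = A @ x + B @ u_t + w               # simulate the dynamics
    cost.backward()
    opt.step()
```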
## 7 Evaluation

Our implementation substantially lowers the barrier to using convex optimization layers. Here, we show that it also substantially reduces canonicalization time. Additionally, for dense problems, our implementation is competitive (in execution time) with a specialized solver for QPs; for sparse problems, it is much faster.

**Canonicalization.** Table 1 reports the time it takes to canonicalize the logistic regression and stochastic control problems from §6, comparing CVXPY version 1.0.23 with CVXPY 1.1. Each canonicalization was performed on a single core of an unloaded Intel i7-8700K processor. We report the average time and standard deviation across 10 runs, excluding a warm-up run. For cvxpylayers, we separate out the canonicalization and solution retrieval times, to allow for a fair comparison. Our extension achieves an order-of-magnitude speed-up on average, since computing $C$ via a sparse matrix multiplication is much more efficient than going through the DSL.

*Table 1: Time (ms) to canonicalize examples, across 10 runs.*

| | Logistic regression | Stochastic control |
| --- | --- | --- |
| CVXPY 1.0.23 | 18.9 ± 1.75 | 12.5 ± 0.72 |
| CVXPY 1.1 | 1.49 ± 0.02 | 1.39 ± 0.02 |

**Comparison to specialized layers.** We have implemented a batched solver and backward pass for our differentiable CVXPY layer that makes it competitive with the batched QP layer qpth from [6]. Figure 3 compares the runtimes of our PyTorch CvxpyLayer and qpth on a dense QP and a sparse QP; the sparse problem is too large for qpth to run in GPU mode. The QPs have the form

$$
\begin{array}{ll}
\text{minimize} & \frac{1}{2} x^T Q x + p^T x \\
\text{subject to} & Ax = b, \quad Gx \le h,
\end{array}
$$

with variable $x \in \mathbf{R}^n$ and problem data $Q \in \mathbf{R}^{n \times n}$, $p \in \mathbf{R}^n$, $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$, $G \in \mathbf{R}^{p \times n}$, and $h \in \mathbf{R}^p$. The dense QP has $n = 128$, $m = 0$, and $p = 128$. The sparse QP has $n = 1024$, $m = 1024$, and $p = 1024$, and $Q$, $A$, and $G$ each have 1% nonzeros (see Appendix E for the code; a rough sketch of the dense variant follows below).

*Figure 3: Comparison of our PyTorch CvxpyLayer to qpth, over 10 trials. (a) Dense QP, batch size of 128. (b) Sparse QP, batch size of 32.*

We ran this experiment on a machine with a 6-core Intel i7-8700K CPU, 32 GB of memory, and an NVIDIA GeForce 1080 Ti GPU with 11 GB of memory. Our implementation is competitive with qpth for the dense QP, even when qpth runs on the GPU, and roughly 5 times faster for the sparse QP. Our backward pass for the dense QP uses our extension to diffcp; we explicitly materialize the derivatives of the cone projections and use a direct solve. Our backward pass for the sparse QP uses sparse operations and LSQR [65], significantly outperforming qpth (which cannot exploit sparsity). Our layer runs on the CPU and implements batching via Python multi-threading, with a parallel for loop over the examples in the batch for both the forward and backward passes. We used 12 threads for our experiments.
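A rough, hypothetical sketch of the dense QP layer (the parametrization and names are ours; following the rewriting rules of §4.1, $Q$ enters through a square-root parameter so the quadratic form is DPP, and we write the linear term as `q` to avoid clashing with the inequality dimension $p$):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, p, batch = 128, 128, 128   # dense QP: no equality constraints (m = 0)

x = cp.Variable(n)
Q_sqrt = cp.Parameter((n, n))  # Q = Q_sqrt' Q_sqrt
q = cp.Parameter(n)
G = cp.Parameter((p, n))
h = cp.Parameter(p)
objective = 0.5 * cp.sum_squares(Q_sqrt @ x) + q @ x
layer = CvxpyLayer(cp.Problem(cp.Minimize(objective), [G @ x <= h]),
                   parameters=[Q_sqrt, q, G, h], variables=[x])

# Batched forward and backward passes: each parameter takes a leading
# batch dimension, and the batch is solved with a parallel for loop.
Q_sqrt_t = torch.randn(batch, n, n, dtype=torch.float64, requires_grad=True)
q_t = torch.randn(batch, n, dtype=torch.float64, requires_grad=True)
G_t = torch.randn(batch, p, n, dtype=torch.float64, requires_grad=True)
h_t = torch.rand(batch, p, dtype=torch.float64, requires_grad=True)  # nonneg, so x = 0 is feasible
x_star, = layer(Q_sqrt_t, q_t, G_t, h_t)
x_star.sum().backward()
```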
## 8 Discussion

**Other solvers.** Solvers that are specialized to subclasses of convex programs are often faster than more general conic solvers. For example, one might use OSQP [69] to solve QPs, or gradient-based methods like L-BFGS [54] or SAGA [28] for empirical risk minimization. Because CVXPY lets developers add specialized solvers as additional back-ends, our implementation of DPP and ASA form can be easily extended to other problem classes. We plan to interface QP solvers in future work.

**Nonconvex problems.** It is possible to differentiate through nonconvex problems, either analytically [37, 67, 5] or by unrolling SGD [33, 14, 61, 41, 70, 23, 38]. Because convex programs can typically be solved efficiently and to high accuracy, it is preferable to use convex optimization layers over nonconvex optimization layers when possible. This is especially true in the setting of low-latency inference. The use of differentiable nonconvex programs in end-to-end learning pipelines, discussed in [42], is an interesting direction for future research.

## Acknowledgments

We gratefully acknowledge discussions with Eric Chu, who designed and implemented a code generator for SOCPs [26, 25], Nicholas Moehle, who designed and implemented a basic version of a code generator for convex optimization in unpublished work, and Brendan O'Donoghue. We also thank the anonymous reviewers, who provided us with useful suggestions that improved the paper. S. Barratt is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518.

## References

[1] A. Agrawal, S. Barratt, S. Boyd, E. Busseti, and W. Moursi. Differentiating through a cone program. Journal of Applied and Numerical Optimization 1.2 (2019), pp. 107–115.
[2] A. Agrawal, A. N. Modi, A. Passos, A. Lavoie, A. Agarwal, A. Shankar, I. Ganichev, J. Levenberg, M. Hong, R. Monga, and S. Cai. TensorFlow Eager: A multi-stage, Python-embedded DSL for machine learning. Proc. Systems for Machine Learning, 2019.
[3] A. Agrawal, R. Verschueren, S. Diamond, and S. Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision 5.1 (2018), pp. 42–60.
[4] B. Amos. Differentiable optimization-based modeling for machine learning. PhD thesis. Carnegie Mellon University, 2019.
[5] B. Amos, I. Jimenez, J. Sacks, B. Boots, and J. Z. Kolter. Differentiable MPC for end-to-end planning and control. Advances in Neural Information Processing Systems, 2018, pp. 8299–8310.
[6] B. Amos and J. Z. Kolter. OptNet: Differentiable optimization as a layer in neural networks. Intl. Conf. Machine Learning, 2017.
[7] B. Amos, V. Koltun, and J. Z. Kolter. The limited multi-label projection layer. 2019. arXiv:1906.08707.
[8] G. Angeris, J. Vučković, and S. Boyd. Computational bounds for photonic design. ACS Photonics 6.5 (2019), pp. 1232–1239.
[9] MOSEK ApS. MOSEK optimization suite. http://docs.mosek.com/9.0/intro.pdf. 2019.
[10] S. Barratt. On the differentiability of the solution to convex optimization problems. 2018. arXiv:1804.05098.
[11] S. Barratt and S. Boyd. Fitting a Kalman smoother to data. 2019. arXiv:1910.08615.
[12] S. Barratt and S. Boyd. Least squares auto-tuning. 2019. arXiv:1904.05460.
[13] S. Barratt and S. Boyd. Stochastic control with affine dynamics and extended quadratic costs. 2018. arXiv:1811.00168.
[14] D. Belanger, B. Yang, and A. McCallum. End-to-end learning for structured prediction energy networks. Intl. Conf. Machine Learning, 2017.
[15] A. Ben-Tal, B. Golany, A. Nemirovski, and J.-P. Vial. Retailer-supplier flexible commitments contracts: A robust optimization approach. Manufacturing & Service Operations Management 7.3 (2005), pp. 248–271.
[16] D. P. Bertsekas. Dynamic programming and optimal control. 3rd ed. Vol. 1. Athena Scientific, Belmont, 2005.
[17] D. Bertsimas and A. Thiele. A robust optimization approach to supply chain management. Proc. Intl. Conf. on Integer Programming and Combinatorial Optimization. Springer, 2004, pp. 86–100.
[18] B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition 84 (2018), pp. 317–331.
[19] S. Boyd, E. Busseti, S. Diamond, R. Kahn, K. Koh, P. Nystrup, and J. Speth. Multi-period trading via convex optimization. Foundations and Trends in Optimization 3.1 (2017), pp. 1–76.
[20] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear matrix inequalities in system and control theory. SIAM, 1994.
[21] S. Boyd, S.-J. Kim, D. Patil, and M. Horowitz. Digital circuit optimization via geometric programming. Operations Research 53.6 (2005).
[22] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[23] P. Brakel, D. Stroobandt, and B. Schrauwen. Training energy-based models for time-series imputation. Journal of Machine Learning Research 14.1 (2013), pp. 2771–2797.
[24] E. Busseti, W. Moursi, and S. Boyd. Solution refinement at regular points of conic problems. 2018. arXiv:1811.02157.
[25] E. Chu and S. Boyd. QCML: Quadratic Cone Modeling Language. https://github.com/cvxgrp/qcml. 2017.
[26] E. Chu, N. Parikh, A. Domahidi, and S. Boyd. Code generation for embedded second-order cone programming. 2013 European Control Conference (ECC). IEEE, 2013, pp. 1547–1552.
[27] F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter. End-to-end differentiable physics for learning and control. Advances in Neural Information Processing Systems, 2018, pp. 7178–7189.
[28] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in Neural Information Processing Systems, 2014, pp. 1646–1654.
[29] S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research 17.1 (2016), pp. 2909–2913.
[30] S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein. Unrolled optimization with deep priors. 2017. arXiv:1705.08041.
[31] J. Djolonga and A. Krause. Differentiable learning of submodular models. Advances in Neural Information Processing Systems, 2017, pp. 1013–1023.
[32] A. Domahidi, E. Chu, and S. Boyd. ECOS: An SOCP solver for embedded systems. 2013 European Control Conference (ECC). IEEE, 2013, pp. 3071–3076.
[33] J. Domke. Generic methods for optimization-based modeling. AISTATS. Vol. 22. 2012, pp. 318–326.
[34] A. L. Dontchev and R. T. Rockafellar. Implicit functions and solution mappings. Springer Monographs in Mathematics. Springer, 2009.
[35] P. Donti, B. Amos, and J. Z. Kolter. Task-based end-to-end model learning in stochastic optimization. Advances in Neural Information Processing Systems, 2017, pp. 5484–5494.
[36] A. Fiacco and G. McCormick. Nonlinear programming: Sequential unconstrained minimization techniques. John Wiley and Sons, Inc., New York-London-Sydney, 1968.
[37] A. V. Fiacco. Introduction to sensitivity and stability analysis in nonlinear programming. Vol. 165. Mathematics in Science and Engineering. Academic Press, Inc., Orlando, FL, 1983.
[38] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. 34th Intl. Conf. Machine Learning, 2017, pp. 1126–1135.
[39] A. Fu, B. Narasimhan, and S. Boyd. CVXR: An R package for disciplined convex optimization. 2017. arXiv:1711.07582.
[40] Z. Geng, D. Johnson, and R. Fedkiw. Coercing machine learning to output physically accurate results. 2019. arXiv:1910.09671.
[41] I. Goodfellow, M. Mirza, A. Courville, and Y. Bengio. Multi-prediction deep Boltzmann machines. Advances in Neural Information Processing Systems, 2013, pp. 548–556.
[42] S. Gould, R. Hartley, and D. Campbell. Deep declarative networks: A new hope. 2019. arXiv:1909.04866.
[43] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. Recent Advances in Learning and Control. Ed. by V. Blondel, S. Boyd, and H. Kimura. Lecture Notes in Control and Information Sciences. Springer, 2008, pp. 95–110.
[44] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx. 2014.
[45] M. Grant, S. Boyd, and Y. Ye. Disciplined convex programming. Global Optimization. Springer, 2006, pp. 155–210.
[46] A. Griewank and A. Walther. Evaluating derivatives: Principles and techniques of algorithmic differentiation. SIAM, 2008.
[47] M. Hershenson, S. Boyd, and T. Lee. Optimal design of a CMOS op-amp via geometric programming. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20.1 (2001), pp. 1–21.
[48] W. Hoburg and P. Abbeel. Geometric programming for aircraft design optimization. AIAA Journal 52.11 (2014), pp. 2414–2426.
[49] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. IEEE Symposium on Security and Privacy. IEEE, 2018, pp. 19–35.
[50] R. Kalman. When is a linear control system optimal? Journal of Basic Engineering 86.1 (1964), pp. 51–60.
[51] Y. Kanno. Nonsmooth mechanics and convex optimization. CRC Press, Boca Raton, FL, 2011.
[52] K. Lee, S. Maji, A. Ravichandran, and S. Soatto. Meta-learning with differentiable convex optimization. 2019. arXiv:1904.03758.
[53] C. K. Ling, F. Fang, and J. Z. Kolter. What game are we playing? End-to-end learning in normal and extensive form games. 2018. arXiv:1805.02777.
[54] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming 45.1-3 (1989), pp. 503–528.
[55] C. Malaviya, P. Ferreira, and A. F. Martins. Sparse and constrained attention for neural machine translation. 2018. arXiv:1805.08241.
[56] M. Mardani, Q. Sun, S. Vasawanala, V. Papyan, H. Monajemi, J. Pauly, and D. Donoho. Neural proximal gradient descent for compressive imaging. 2018. arXiv:1806.03963.
[57] H. Markowitz. Portfolio selection. Journal of Finance 7.1 (1952), pp. 77–91.
[58] A. Martins and R. Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. Intl. Conf. Machine Learning, 2016, pp. 1614–1623.
[59] A. F. Martins and J. Kreutzer. Learning what's easy: Fully differentiable neural easy-first taggers. 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 349–362.
[60] J. Mattingley and S. Boyd. CVXGEN: A code generator for embedded convex optimization. Optimization and Engineering 13.1 (2012), pp. 1–27.
[61] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. 2016. arXiv:1611.02163.
[62] J. Miller, J. Zhu, and P. Quigley. CVXCanon. https://github.com/cvxgrp/CVXcanon/. 2015.
[63] N. Moehle, E. Busseti, S. Boyd, and M. Wytock. Dynamic energy management. 2019. arXiv:1903.06230.
[64] B. O'Donoghue, E. Chu, N. Parikh, and S. Boyd. SCS: Splitting conic solver, version 2.1.0. https://github.com/cvxgrp/scs. 2017.
[65] C. C. Paige and M. A. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Transactions on Mathematical Software (TOMS) 8.1 (1982), pp. 43–71.
[66] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. NIPS Autodiff Workshop, 2017.
[67] H. Pirnay, R. López-Negrete, and L. T. Biegler. Optimal sensitivity based on IPOPT. Mathematical Programming Computation 4.4 (2012), pp. 307–331.
[68] S. Robinson. Strongly regular generalized equations. Mathematics of Operations Research 5.1 (1980), pp. 43–62.
[69] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd. OSQP: An operator splitting solver for quadratic programs. 2017. arXiv:1711.08013.
[70] V. Stoyanov, A. Ropson, and J. Eisner. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. AISTATS, 2011, pp. 725–733.
[71] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5026–5033.
[72] M. Udell, K. Mohan, D. Zeng, J. Hong, S. Diamond, and S. Boyd. Convex optimization in Julia. SC14 Workshop on High Performance Technical Computing in Dynamic Languages, 2014. arXiv:1410.4821.
[73] M. van Kreveld, O. Schwarzkopf, M. de Berg, and M. Overmars. Computational geometry: Algorithms and applications. Springer, 2000.
[74] Y. Wang and S. Boyd. Fast evaluation of quadratic control-Lyapunov policy. IEEE Transactions on Control Systems Technology 19.4 (2010), pp. 939–946.
[75] B. Wilder, B. Dilkina, and M. Tambe. Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization. 2018. arXiv:1809.05504.
[76] Y. Ye, M. J. Todd, and S. Mizuno. An O(√n L)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research 19.1 (1994), pp. 53–67.