# Online Linear Quadratic Control

Alon Cohen (1,2), Avinatan Hassidim (1,3), Tomer Koren (4), Nevena Lazic (4), Yishay Mansour (1,5), Kunal Talwar (4)

(1) Google Research, Tel Aviv. (2) Technion, Israel Institute of Technology. (3) Bar-Ilan University. (4) Google Brain, Mountain View. (5) Tel Aviv University. Correspondence to: Alon Cohen, Tomer Koren.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

## Abstract

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses. We present the first efficient online learning algorithms in this setting that guarantee $O(\sqrt{T})$ regret under mild assumptions, where $T$ is the time horizon. Our algorithms rely on a novel SDP relaxation for the steady-state distribution of the system. Crucially, and in contrast to previously proposed relaxations, the feasible solutions of our SDP all correspond to strongly stable policies that mix exponentially fast to a steady state.

## 1. Introduction

Linear-quadratic (LQ) control is one of the most widely studied problems in control theory (Anderson et al., 1972; Bertsekas, 1995; Zhou et al., 1996). It has been applied successfully to problems in statistics, econometrics, robotics, social science and physics. In recent years it has also received much attention from the machine learning community, as increasingly difficult control problems have led to demand for data-driven control systems (Abbeel et al., 2007; Levine et al., 2016; Sheckells et al., 2017).

In LQ control, both the state and the action are real-valued vectors. The dynamics of the environment are linear in the state and action, and are perturbed by Gaussian noise. The cost is quadratic in the state and control (action) vectors. The optimal control policy, which minimizes the cost, selects the control vector as a linear function of the state vector, and can be derived by solving the algebraic Riccati equations.

The main focus of this work is the control of linear systems whose quadratic costs vary in an unpredictable way. This problem may arise in settings such as building climate control in the presence of time-varying energy costs, due to energy auctions or unexpected demand fluctuations. To measure how well a control system adapts to time-varying costs, it is common to consider the notion of regret: the difference between the total cost of the controller, which is only aware of previously observed costs, and that of the best fixed control policy in hindsight. This notion has been thoroughly studied in the context of online learning, and particularly in that of online convex optimization (Cesa-Bianchi & Lugosi, 2006; Hazan, 2016; Shalev-Shwartz, 2012). LQ control was considered in the context of regret by Abbasi-Yadkori et al. (2014), who give a learning algorithm for the problem of tracking an adversarially changing target in a system with noiseless linear dynamics.

In this paper we consider online learning with fixed, known, linear dynamics and adversarially chosen quadratic cost matrices. Our main results are two online algorithms that achieve $O(\sqrt{T})$ regret when compared to any fast-mixing linear policy.[1] One of our algorithms is based on Online Gradient Descent (Zinkevich, 2003). The other is based on Follow the Lazy Leader (Kalai & Vempala, 2005), a variant of Follow the Perturbed Leader that makes only $O(\sqrt{T})$ policy switches in expectation. Overall, our approach follows Even-Dar et al. (2009).
We first show how to perform online learning in an idealized setting: a hypothetical setting in which the learner can immediately observe the steady-state cost of any chosen control policy. We then bound the gap between the idealized costs and the actual costs. Our technique is conceptually different from that of most learning problems: instead of predicting a policy and observing its steady-state cost, the learner predicts a steady-state distribution and derives a corresponding policy from it. Importantly, this view allows us to cast the idealized problem as a semidefinite program that minimizes the expected cost as a function of a steady-state distribution (over both states and controls). As the problem is now convex, we apply OGD and FLL to the SDP and argue about fast-mixing properties of its feasible solutions.

[1] Technically, we define the class of strongly stable policies that guarantees the desired fast-mixing property. Conceptually, slowly mixing policies are less attractive for implementation, given the inherent gap between their long- and short-term costs.

For Online Gradient Descent, we define a sequential strong stability property that couples consecutive control matrices, and show that it guarantees that the observed state distributions closely track those generated in the idealized setting. We then show that the sequence of policies generated by the online gradient descent algorithm satisfies this property. For Follow the Lazy Leader, following each policy switch our algorithm resets the system, a process that takes a constant number of rounds, after which the cost of playing the new policy is less than its steady-state cost.

The holy grail of reinforcement learning is controlling a dynamical stochastic system under uncertainty, and clearly both MDPs and LQ control fall well within this mission statement. There are obvious differences between the two models: MDPs model discrete state and action dynamics, while LQ control addresses continuous linear dynamics with a quadratic cost. In this work we are inspired by methodologies from online MDPs and regret minimization to derive new results for LQ control. We believe that exploring the interface between the two will be fruitful for both sides, and holds significant potential for the future RL research agenda.

### 1.1. Related Work

LQ control can be seen as a continuous analogue of the discrete Markov Decision Process (MDP) model. As such, our results are conceptually similar to those of Even-Dar et al. (2009), who derive regret bounds for MDPs with known dynamics and changing rewards. However, our technical approach and the derivation of our algorithms are very different from those applicable in the context of MDPs. Among the many follow-up works to Even-Dar et al. (2009), let us note Yu et al. (2009) and Abbasi et al. (2013), which propose lazy algorithms similar to our second algorithm.

We remark that, compared to our $O(\sqrt{T})$ regret bounds, Abbasi-Yadkori et al. (2014) give an $O(\log^2 T)$ regret bound under much stronger assumptions.[2] Similar bounds are established by Neu & Gómez (2017) for online learning in linearly solvable MDPs, which were shown to capture appropriately discretized versions of LQ control systems (Todorov, 2009). In light of these results, it is interesting to investigate whether our bounds are tight or can actually be improved. We leave this investigation for future work.
An orthogonal line of research that has gained popularity in recent years is the control of linear quadratic systems with unknown fixed dynamics. The majority of recent papers deal with off-policy learning: either by policy gradient (Fazel et al., 2018); by estimating the transition matrices (Dean et al., 2017); or by improper learning (Hazan et al., 2017; Arora et al., 2018). In contrast, Abbasi-Yadkori & Szepesvári (2011) and Ibrahimi et al. (2012) present on-policy learning algorithms with $O(\sqrt{T})$ regret.

[2] Not only does their setting assume that $Q_t = Q$ and $R_t = I$ for all $t$, for a fixed and known matrix $Q \succ 0$; they also make non-trivial norm assumptions on the corresponding optimal control matrix $K^\star$.

Semidefinite programming for LQ control has been used in the past (Balakrishnan & Vandenberghe, 2003; Dvijotham et al., 2013; Lee & Hu, 2016), mostly in the context of infinite-horizon constrained LQRs (Lee & Khargonekar, 2007; Schildbach et al., 2015). In many of these formulations, one has to solve the SDP exactly to obtain a stabilizing solution; in other words, only the optimal policy is known to be stable, and suboptimal policies need not be stabilizing. This is not the case in our SDP formulation, where any feasible solution is not only stable but, in fact, strongly stable (see the formal definition in Section 3).

## 2. Background

### 2.1. Linear Quadratic Control

The standard linear quadratic (Gaussian) control problem is as follows. Let $x_t \in \mathbb{R}^d$ be the system state at time $t$ and let $u_t \in \mathbb{R}^k$ be the control (action) taken at time $t$. The system transitions to the next state according to linear time-invariant dynamics
$$x_{t+1} = A x_t + B u_t + w_t,$$
where the $w_t$ are i.i.d. Gaussian noise vectors with zero mean and covariance $W \succ 0$. The cost incurred at each time step is a quadratic function of the state and control, $x_t^\top Q x_t + u_t^\top R u_t$, for positive definite matrices $Q$ and $R$.

A policy is a mapping $\pi : \mathbb{R}^d \to \mathbb{R}^k$ from the current state $x_t$ to a control (i.e., an action) $u_t$. The cost of a policy after $T$ time steps is
$$J_T(\pi) = \mathbb{E}\Big[\sum_{t=1}^T x_t^\top Q x_t + u_t^\top R u_t\Big],$$
where $u_1, \ldots, u_T$ are chosen according to $\pi$; the expectation is with respect to the randomness in the state transitions and (possibly) the policy. In the infinite-horizon version of the problem, the goal is to minimize the steady-state cost $J(\pi) = \lim_{T \to \infty} (1/T) J_T(\pi)$.

In the infinite-horizon setting, and when the system is controllable,[3] it is well known that the optimal policy is given by constant linear feedback $u_t = K x_t$. For the optimal $K$, the dynamics are given by $x_{t+1} = (A + BK) x_t + w_t$, and $K$ is guaranteed to be stable; a policy $K$ is called stable if $\rho(A + BK) < 1$, where for a matrix $M$, $\rho(M)$ denotes its spectral radius. In this case, $x_t$ converges to a steady-state (stationary) distribution; that is, $x_t$ has the same distribution as $(A + BK) x_t + w_t$. This implies that $\mathbb{E}[x_t] = 0$, and the covariance matrix $X = \mathbb{E}[x_t x_t^\top]$ satisfies
$$X = (A + BK)\, X \,(A + BK)^\top + W.$$
The steady-state cost of a stable policy $K$ with steady-state covariance $X$ is given by $J(K) = (Q + K^\top R K) \bullet X$. Here $\bullet$ denotes the element-wise inner product, i.e., $A \bullet B = \mathrm{Tr}(A^\top B)$.

[3] The system is controllable if the matrix $(B \;\; AB \;\; \cdots \;\; A^{d-1}B)$ has rank $d$. Under the controllability assumption, any state can be reached in at most $d$ steps (ignoring noise).
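As a concrete numerical companion to the definitions above, the following minimal sketch (ours, not part of the paper) computes a classical LQR gain via the discrete algebraic Riccati equation and evaluates the steady-state covariance $X$ and the steady-state cost $J(K) = (Q + K^\top R K) \bullet X$. The helper names, the toy dimensions and the use of SciPy are assumptions made purely for illustration.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def lqr_gain(A, B, Q, R):
    """Classical infinite-horizon LQR gain from the discrete algebraic Riccati
    equation; returns K such that the (optimal) policy is u = K x."""
    P = solve_discrete_are(A, B, Q, R)
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def steady_state_cost(A, B, K, Q, R, W):
    """Steady-state covariance X solving X = (A+BK) X (A+BK)^T + W, and the
    infinite-horizon average cost J(K) = Tr((Q + K^T R K) X)."""
    M = A + B @ K
    assert np.max(np.abs(np.linalg.eigvals(M))) < 1, "policy must be stable"
    X = solve_discrete_lyapunov(M, W)
    return np.trace((Q + K.T @ R @ K) @ X), X

# Toy instance with illustrative dimensions (not taken from the paper).
rng = np.random.default_rng(0)
d, k = 3, 2
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, k))
Q, R, W = np.eye(d), np.eye(k), 0.1 * np.eye(d)
K = lqr_gain(A, B, Q, R)
J, X = steady_state_cost(A, B, K, Q, R, W)
print("steady-state cost J(K) =", J)
```

In the online problem studied in this paper, $Q$ and $R$ change adversarially between rounds, which is exactly what the relaxation of Section 4 and the algorithms of Sections 5 and 6 address.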
### 2.2. Problem Setting

We consider an online setting in which a sequence of positive definite cost matrices $Q_1, \ldots, Q_T, R_1, \ldots, R_T$ is chosen by the environment ahead of time and is unknown to the learner. We assume throughout that $\mathrm{Tr}(Q_t), \mathrm{Tr}(R_t) \le C$ for all $t$, for some constant $C > 0$. We assume that the dynamics $(A, B)$ are time-invariant and known, and that the system is initialized at $x_0 = 0$. At each time step $t$, the learner observes the state $x_t$, chooses an action $u_t$, and suffers cost $x_t^\top Q_t x_t + u_t^\top R_t u_t$. Thereafter, the system transitions to the next state.

A (randomized) learning algorithm $\mathcal{A}$ is a mapping from $x_t$ and the previously observed cost matrices $Q_1, \ldots, Q_{t-1}$ and $R_1, \ldots, R_{t-1}$ to a distribution over controls $u_t$. We define the cost of an algorithm as $J_T(\mathcal{A}) = \mathbb{E}\big[\sum_{t=1}^T x_t^\top Q_t x_t + u_t^\top R_t u_t\big]$, where $u_1, \ldots, u_T$ are chosen at random according to $\mathcal{A}$. The goal of the learner is to minimize the regret, defined as
$$R_T(\mathcal{A}) = J_T(\mathcal{A}) - \min_{\pi \in \Pi} J_T(\pi),$$
where $\Pi$ is a set of benchmark policies. In the sequel, we fix $\Pi$ to be the set of all strongly stable policies; we defer the formal definition of this class of policies to Section 3 below.

## 3. Strong Stability

In this section we formalize the notion of a strongly stable policy and discuss some of its properties. Intuitively, a strongly stable policy is a policy that exhibits fast mixing and converges quickly to a steady-state distribution. Note that, while stable policies $K$ (for which $\rho(A + BK) < 1$) necessarily converge to a steady state, nothing is guaranteed about their rate of convergence. The following definition helps remedy that.

Definition 3.1 (Strong Stability). A policy $K$ is $(\kappa, \gamma)$-strongly stable (for $\kappa > 0$ and $0 < \gamma \le 1$) if $\|K\| \le \kappa$, and there exist matrices $L$ and $H$ such that $A + BK = H L H^{-1}$, with $\|L\| \le 1 - \gamma$ and $\|H\|\,\|H^{-1}\| \le \kappa$.

Strong stability is a quantitative version of stability, in the sense that any stable policy is strongly stable for some $\kappa$ and $\gamma$ (see Lemma B.1 in the supplementary material). Conversely, strong stability implies stability: if $K$ is strongly stable then $A + BK$ is similar to a matrix $L$ with $\|L\| < 1$, and so $\rho(A + BK) = \rho(L) \le \|L\| < 1$, i.e., $K$ is stable. Notice that for a strongly stable $K$ it may not be the case that $\|A + BK\| < 1$, and a non-trivial transformation $H \ne I$ may be required to make the norm smaller than one (this is indeed the case for feasible solutions of our SDP relaxation).

Strong stability ensures exponentially fast convergence to the steady state, as made precise in the next lemma.

Lemma 3.2. For all $t = 1, 2, \ldots$ let $\hat X_t$ be the state covariance matrix on round $t$, starting from some $\hat X_0 \succeq 0$ and following a $(\kappa, \gamma)$-strongly stable policy $\pi(x) = Kx$. Then $\hat X_1, \hat X_2, \ldots$ approaches a steady-state covariance matrix $X$, and further, for all $t$ it holds that
$$\|\hat X_t - X\| \le \kappa^2 e^{-2\gamma t}\, \|\hat X_0 - X\|.$$
This exponential convergence holds even if the policy is randomized and follows $K$ only in expectation; that is, if $\mathbb{E}[\pi(x) \mid x] = Kx$, provided that $\mathrm{Cov}[\pi(x) \mid x]$ is finite.

Proof. Let us first analyze deterministic policies. As noted above, $K$ is stable and as a result the state covariances $\hat X_t$ approach a steady-state covariance $X$. By definition, we have
$$\hat X_{t+1} = (A + BK)\hat X_t(A + BK)^\top + W \quad \forall\, t \ge 0; \qquad X = (A + BK)X(A + BK)^\top + W.$$
Subtracting the two equations and recursing, we have $\hat X_t - X = (A + BK)^t (\hat X_0 - X)\big((A + BK)^t\big)^\top$, which gives $\|\hat X_t - X\| \le \|(A + BK)^t\|^2 \|\hat X_0 - X\|$. To further bound the right-hand side, observe that $(A + BK)^t = H L^t H^{-1}$, thus
$$\|(A + BK)^t\| \le \|H\|\,\|H^{-1}\|\,\|L\|^t \le \kappa (1 - \gamma)^t \le \kappa e^{-\gamma t}.$$
Combining the inequalities gives the result for deterministic policies. For randomized policies with $\mathbb{E}[u \mid x] = Kx$ and finite $V = \mathrm{Cov}[u \mid x]$, the dynamics of the state covariance take the form
$$\hat X_{t+1} = (A + BK)\hat X_t(A + BK)^\top + BVB^\top + W \quad \forall\, t \ge 0; \qquad X = (A + BK)X(A + BK)^\top + BVB^\top + W.$$
Since the analysis above depends only on the difference between the two equations, the added $BVB^\top$ term has no effect on the convergence of $\hat X_t$. Note, however, that the steady state $X$ itself will in general be a function of $V$.
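To make Definition 3.1 and the rate in Lemma 3.2 concrete, here is a small numerical sketch (ours, for illustration only). When $A + BK$ is diagonalizable, its eigendecomposition yields one admissible witness $(H, L)$, from which a valid pair $(\kappa, \gamma)$ can be read off and checked against the covariance recursion; the toy system and helper names are assumptions of the example.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def strong_stability_certificate(A, B, K):
    """One admissible (kappa, gamma) witness for Definition 3.1 when A + BK is
    diagonalizable: take H = eigenvector matrix and L = diag(eigenvalues), so
    that A + BK = H L H^{-1}, ||L|| = max |eigenvalue|, and
    kappa = max(||K||, ||H|| * ||H^{-1}||). Not necessarily the tightest choice."""
    M = A + B @ K
    lam, H = np.linalg.eig(M)
    gamma = 1.0 - np.max(np.abs(lam))
    kappa = max(np.linalg.norm(K, 2),
                np.linalg.norm(H, 2) * np.linalg.norm(np.linalg.inv(H), 2))
    return kappa, gamma

# Check the Lemma 3.2 rate on a toy stable system (illustrative numbers only).
d = 3
A, B, K, W = 0.9 * np.eye(d), np.eye(d), -0.5 * np.eye(d), 0.1 * np.eye(d)
kappa, gamma = strong_stability_certificate(A, B, K)
M = A + B @ K
X = solve_discrete_lyapunov(M, W)       # steady state: X = M X M^T + W
gap0 = np.linalg.norm(X, 2)             # ||X_hat_0 - X|| with X_hat_0 = 0
Xt = np.zeros((d, d))
for t in range(1, 30):
    Xt = M @ Xt @ M.T + W
    assert np.linalg.norm(Xt - X, 2) <= kappa**2 * np.exp(-2 * gamma * t) * gap0 + 1e-9
```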
Let us state one more property of strongly stable policies that will be useful in our analysis.

Lemma 3.3. Assume that $K$ is $(\kappa, \gamma)$-strongly stable, and let $X$ and $U$ be the covariances of $x$ and $u$ at steady state when following $K$. Then $\mathrm{Tr}(X) \le (\kappa^2/\gamma)\,\mathrm{Tr}(W)$ and $\mathrm{Tr}(U) \le (\kappa^4/\gamma)\,\mathrm{Tr}(W)$.

### 3.1. Sequential Strong Stability

We next present a stronger notion of strong stability that plays a central role in our analysis. Roughly speaking, the goal is to argue about fast mixing when following a sequence of different policies $K_1, K_2, \ldots$ (rather than a fixed policy $K$ throughout). In this case, for any kind of mixing to take place, one has to require not only that each policy is strongly stable, but also that the sequence changes slowly. This motivates the following definition.

Definition 3.4 (Sequential Strong Stability). A sequence of policies $K_1, \ldots, K_T$ is $(\kappa, \gamma)$-strongly stable (for $\kappa > 0$ and $0 < \gamma \le 1$) if there exist matrices $H_1, \ldots, H_T$ and $L_1, \ldots, L_T$ such that $A + BK_t = H_t L_t H_t^{-1}$ for all $t$, with the following properties: (i) $\|L_t\| \le 1 - \gamma$ and $\|K_t\| \le \kappa$; (ii) $\|H_t\| \le \beta$ and $\|H_t^{-1}\| \le 1/\alpha$ with $\kappa = \beta/\alpha$; (iii) $\|H_{t+1}^{-1} H_t\| \le 1 + \gamma/2$.

Strongly stable sequences mix quickly, in the following sense (the proof is deferred to the full version of the paper).

Lemma 3.5. Let $\pi_t(x) = K_t x$ ($t = 1, 2, \ldots$) be a sequence of policies with respective steady-state covariance matrices $X_1, X_2, \ldots$, such that $K_1, K_2, \ldots$ is a $(\kappa, \gamma)$-strongly stable sequence and $\|X_t - X_{t-1}\| \le \eta$ for all $t$, for some $\eta > 0$. Let $\hat X_t$ be the state covariance matrix on round $t$, starting from some $\hat X_1 \succeq 0$ and following this sequence. Then, for all $t$,
$$\|\hat X_{t+1} - X_{t+1}\| \le \kappa^2 e^{-\gamma t}\,\|\hat X_1 - X_1\| + \frac{2\eta\kappa^2}{\gamma}.$$
The same is true even if the policies are randomized, as long as $\mathbb{E}[\pi_t(x) \mid x] = K_t x$ and $\mathrm{Cov}[\pi_t(x) \mid x]$ exists and is finite.

## 4. SDP Relaxation for LQ Control

We now present our SDP relaxation for the infinite-horizon LQ control problem. Our presentation requires the following definitions. Consider an LQ control problem parameterized by matrices $A$, $B$, $Q$, $R$ and $W$. For any stable policy $\pi$ (for which a steady-state distribution exists), define
$$E(\pi) = \mathbb{E}\begin{pmatrix} x x^\top & x u^\top \\ u x^\top & u u^\top \end{pmatrix}, \tag{1}$$
where $x$ is distributed according to the steady-state distribution of $\pi$, and $u = \pi(x)$. Then the infinite-horizon cost of $\pi$ is given by
$$J(\pi) = \begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix} \bullet E(\pi). \tag{2}$$
For a policy $\pi_K(x) = Kx$ defined by a stable control matrix $K$ (i.e., one for which $\rho(A + BK) < 1$), this matrix takes the form
$$E(K) = \begin{pmatrix} X & X K^\top \\ K X & K X K^\top \end{pmatrix},$$
where $X$ is the state covariance at steady state. (We slightly abuse notation and write $E(K)$ instead of $E(\pi_K)$.) In this case, one also has $J(K) = J(E(K)) = (Q + K^\top R K) \bullet X$.

### 4.1. The Relaxation

We can now present our SDP relaxation for the LQ control problem given by $(A, B, Q, R, W)$, which takes the form:
$$\begin{aligned} \text{minimize} \quad & J(\Sigma) = \begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix} \bullet \Sigma \\ \text{subject to} \quad & \Sigma_{xx} = \begin{pmatrix} A & B \end{pmatrix} \Sigma \begin{pmatrix} A & B \end{pmatrix}^\top + W, \\ & \Sigma \succeq 0, \quad \mathrm{Tr}(\Sigma) \le \nu. \end{aligned} \tag{3}$$
Here, $\nu > 0$ is a parameter whose value will be determined later, and $\Sigma$ is a $(d + k) \times (d + k)$ symmetric matrix that decomposes into blocks as
$$\Sigma = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xu} \\ \Sigma_{xu}^\top & \Sigma_{uu} \end{pmatrix},$$
where $\Sigma_{xx}$ is a $d \times d$ block, $\Sigma_{uu}$ is $k \times k$, and $\Sigma_{xu}$ is $d \times k$.
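The relaxation (3) can be written down directly in an off-the-shelf SDP solver. The sketch below (ours, not the authors' code) uses cvxpy with the SCS solver, and also applies the policy-extraction rule $K(\Sigma) = \Sigma_{xu}^\top \Sigma_{xx}^{-1}$ of Section 4.2 below; the function names and the choice of solver are assumptions of the example.

```python
import numpy as np
import cvxpy as cp

def solve_lq_sdp(A, B, Q, R, W, nu):
    """The SDP relaxation (3): minimize blkdiag(Q, R) . Sigma subject to
    Sigma >= 0, Tr(Sigma) <= nu, and Sigma_xx = [A B] Sigma [A B]^T + W."""
    d, k = B.shape
    n = d + k
    Sigma = cp.Variable((n, n), symmetric=True)
    M = np.hstack([A, B])                                  # [A  B]
    cost = np.block([[Q, np.zeros((d, k))],                # blkdiag(Q, R)
                     [np.zeros((k, d)), R]])
    constraints = [Sigma >> 0,
                   cp.trace(Sigma) <= nu,
                   Sigma[:d, :d] == M @ Sigma @ M.T + W]
    problem = cp.Problem(cp.Minimize(cp.trace(cost @ Sigma)), constraints)
    problem.solve(solver=cp.SCS)
    return Sigma.value, problem.value

def extract_policy(Sigma, d):
    """Policy extraction of Section 4.2: K(Sigma) = Sigma_xu^T Sigma_xx^{-1}."""
    Sxx, Sxu = Sigma[:d, :d], Sigma[:d, d:]
    return np.linalg.solve(Sxx.T, Sxu).T                   # = Sxu^T Sxx^{-1}
```

By Theorem 4.2 below, the policy returned by `extract_policy` is stable and its steady-state cost is at most the optimal SDP value.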
The program in Eq. (3) is a relaxation in the following sense.

Lemma 4.1. For any stable policy $\pi$ such that at steady state $\mathbb{E}\|x\|^2 + \mathbb{E}\|u\|^2 \le \nu$, the matrix $\Sigma = E(\pi)$ is feasible for (3).

Proof. Let $\pi$ be any stable policy and consider the matrix $\Sigma = E(\pi)$. Then $\Sigma \succeq 0$ (by definition; recall Eq. (1)), and it satisfies the equality constraint of (3): if $x$ is at steady state and $u = \pi(x)$, then $Ax + Bu + w$ has the same distribution as $x$ for $w \sim \mathcal{N}(0, W)$ independent of $x$ and $u$, thus $\mathbb{E}[x x^\top] = \mathbb{E}[(Ax + Bu + w)(Ax + Bu + w)^\top]$; the latter is equivalent to $\Sigma_{xx} = (A \;\, B)\,\Sigma\,(A \;\, B)^\top + W$. Finally, observe that
$$\mathrm{Tr}(\Sigma) = \mathbb{E}\,\mathrm{Tr}(x x^\top) + \mathbb{E}\,\mathrm{Tr}(u u^\top) = \mathbb{E}\|x\|^2 + \mathbb{E}\|u\|^2,$$
where $x, u$ are distributed according to the steady-state distribution of $\pi$; hence $\Sigma$ satisfies the trace constraint.

### 4.2. Extracting a Policy

We next show that from any feasible solution to the SDP one can extract a stable policy with the same (if not better) cost, provided that $W \succ 0$. For any feasible solution $\Sigma$ of the SDP, define a control matrix as follows:
$$K(\Sigma) = \Sigma_{xu}^\top \Sigma_{xx}^{-1}. \tag{4}$$
Note that, due to the equality constraint of the SDP, our assumption $W \succ 0$ ensures that $\Sigma_{xx} \succ 0$; thus $\Sigma_{xx}$ is nonsingular and $K(\Sigma)$ is well defined.

Theorem 4.2. Let $\Sigma$ be any feasible solution to the SDP, and let $K = K(\Sigma)$. Then the policy $\pi(x) = Kx$ is stable, and it holds that $E(K) \preceq \Sigma$. In particular, $E(K)$ is also feasible for the SDP and its cost is at most that of $\Sigma$.

Without the trace constraint, the theorem in particular implies that for the optimal solution $\Sigma^\star$ of the SDP, the corresponding control matrix $K^\star = K(\Sigma^\star)$ is an optimal policy for the original problem, recovering a classic result in control theory.

Proof of Theorem 4.2. Our first step is to show that
$$\Sigma \succeq \Sigma' = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xx} K^\top \\ K \Sigma_{xx} & K \Sigma_{xx} K^\top \end{pmatrix}. \tag{5}$$
To see this, observe that by the definition of $K = K(\Sigma)$ we have
$$\Sigma = \Sigma' + \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_{uu} - \Sigma_{xu}^\top \Sigma_{xx}^{-1} \Sigma_{xu} \end{pmatrix}.$$
Thus, it suffices to show that $\Sigma_{uu} - \Sigma_{xu}^\top \Sigma_{xx}^{-1} \Sigma_{xu}$ is PSD. The latter matrix is the Schur complement of $\Sigma_{xx}$ in $\Sigma$, and it is PSD because $\Sigma$ is PSD.

Next, we show that the control matrix $K$ gives rise to a stable policy. Let us develop Eq. (3). First, since $W \succ 0$ we also have $\Sigma_{xx} \succ 0$. Moreover, by Eq. (5),
$$\Sigma_{xx} = (A \;\, B)\,\Sigma\,(A \;\, B)^\top + W \succeq (A + BK)\Sigma_{xx}(A + BK)^\top + W \succ (A + BK)\Sigma_{xx}(A + BK)^\top.$$
Let $\lambda$ and $v$ be a (possibly complex) eigenvalue and left-eigenvector associated with $A + BK$. Then
$$v^* \Sigma_{xx} v > v^* (A + BK)\Sigma_{xx}(A + BK)^\top v = |\lambda|^2\, v^* \Sigma_{xx} v,$$
which, by $v^* \Sigma_{xx} v > 0$, implies $|\lambda| < 1$. This is true for all eigenvalues $\lambda$, and shows that $\rho(A + BK) < 1$, that is, $K$ is stable.

Finally, let us show that $E(K) \preceq \Sigma'$, which together with Eq. (5) implies our claim $E(K) \preceq \Sigma$. Denote by $X$ the state covariance at steady state when following $K$; then
$$E(K) = \begin{pmatrix} X & X K^\top \\ K X & K X K^\top \end{pmatrix}.$$
To establish that $E(K) \preceq \Sigma'$ it is enough to show $X \preceq \Sigma_{xx}$. To this end, let $\Delta = \Sigma_{xx} - X$ and write
$$X + \Delta \succeq (A + BK)X(A + BK)^\top + W + (A + BK)\Delta(A + BK)^\top = X + (A + BK)\Delta(A + BK)^\top,$$
from which we get $\Delta \succeq (A + BK)\Delta(A + BK)^\top$. Applying the latter inequality recursively, we obtain $\Delta \succeq (A + BK)^n \Delta \big((A + BK)^\top\big)^n$. Recall that $\rho(A + BK) < 1$; thus, taking the limit as $n \to \infty$, we get $(A + BK)^n \Delta \big((A + BK)^\top\big)^n \to 0$, which implies $\Delta \succeq 0$. This shows that $X \preceq \Sigma_{xx}$, as required.

To complete the proof, observe that $E(K)$ is feasible for the SDP since $E(K) \preceq \Sigma$ and $\Sigma$ is feasible. Furthermore, since $\big(\begin{smallmatrix} Q & 0 \\ 0 & R \end{smallmatrix}\big)$ is PSD, we have
$$J(E(K)) = \begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix} \bullet E(K) \;\le\; \begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix} \bullet \Sigma = J(\Sigma).$$

### 4.3. Strong Stability of Solutions

Let us now show that from a solution to the SDP one can extract a strongly stable policy.

Lemma 4.3. Assume that $W \succeq \sigma^2 I$ and let $\kappa = \sqrt{\nu}/\sigma$. Then for any feasible solution $\Sigma$ of the SDP, the policy $K = K(\Sigma)$ is $(\kappa, 1/(2\kappa^2))$-strongly stable.

Proof. According to Theorem 4.2, the policy $K$ is (weakly) stable and the matrix $\hat\Sigma = E(K)$ is feasible for the SDP. Let $X = \hat\Sigma_{xx}$ be the state covariance of $K$ at steady state. Since $\hat\Sigma$ is feasible and $W \succeq \sigma^2 I$, we have
$$X \succeq (A + BK)X(A + BK)^\top + \sigma^2 I. \tag{6}$$
In particular, this means that $X \succeq \sigma^2 I$.
On the other hand, we have $\mathrm{Tr}(X) \le \mathrm{Tr}(\hat\Sigma) \le \nu$, thus $X \preceq \nu I$. Overall,
$$\sigma^2 I \preceq X \preceq \nu I. \tag{7}$$
Given that $X$ is nonsingular, we can define $L = X^{-1/2}(A + BK)X^{1/2}$. Multiplying Eq. (6) by $X^{-1/2}$ from both sides, we obtain
$$I \succeq L L^\top + \sigma^2 X^{-1} \succeq L L^\top + \kappa^{-2} I.$$
Thus $L L^\top \preceq (1 - \kappa^{-2}) I$, so $\|L\| \le \sqrt{1 - \kappa^{-2}} \le 1 - \kappa^{-2}/2$. Also, Eq. (7) shows that $\|X^{1/2}\|\,\|X^{-1/2}\| \le \kappa$. It remains to establish the bound on the norm $\|K\|_F$. To this end, use the fact that $X \bullet K^\top K = \mathrm{Tr}(K X K^\top) = \mathrm{Tr}(\hat\Sigma_{uu}) \le \nu$, together with $X \succeq \sigma^2 I$ (recall Eq. (7)), to obtain $\sigma^2 \|K\|_F^2 \le \nu$, that is, $\|K\|_F \le \kappa$.

We can also prove an analogous statement for sequences of feasible solutions, provided that they change slowly enough (we defer the proof to the full version of the paper).

Lemma 4.4. Assume that $W \succeq \sigma^2 I$ and let $\kappa = \sqrt{\nu}/\sigma$. Let $\Sigma_1, \Sigma_2, \ldots$ be a sequence of feasible solutions of (3), and suppose that $\|\Sigma_{t+1} - \Sigma_t\| \le \eta$ for all $t$, for some $\eta \le \sigma^2/\kappa^2$. Then the sequence $K_1, K_2, \ldots$, where $K_t = K(\Sigma_t)$ for all $t$, is $(\kappa, 1/(2\kappa^2))$-strongly stable.

## 5. Online LQ Control

In this section we describe our gradient-based algorithm for online LQ control, presented in Algorithm 1. The algorithm maintains an ideal steady-state covariance matrix $\Sigma_t$ by performing online gradient steps directly on the SDP formulated in Section 4 (with the linear cost functions changing from round to round). A control matrix $K_t$ is then extracted from the covariance $\Sigma_t$ and used to generate a prediction.

Notice that the predictions made by the algorithm are drawn at random from the Gaussian $\mathcal{N}(K_t x_t, V_t)$, and only follow the extracted policies $K_1, K_2, \ldots$ in expectation. This randomization step is crucial for the algorithm to exhibit fast mixing: sampling the prediction from a distribution with the right covariance ensures that the observed covariance matrices converge to those generated by the algorithm, and consequently this sequence mixes more quickly.

Algorithm 1 (Online LQ Controller).
Parameters: $\eta, \nu > 0$.
Initialize $\Sigma_1 = I_{n \times n}$ with $n = d + k$.
For $t = 1, 2, \ldots$:
- Receive state $x_t$.
- Compute $K_t = (\Sigma_t)_{ux} (\Sigma_t)_{xx}^{-1}$ and $V_t = (\Sigma_t)_{uu} - K_t (\Sigma_t)_{xx} K_t^\top$.
- Predict $u_t \sim \mathcal{N}(K_t x_t, V_t)$; receive $Q_t, R_t$.
- Update
$$\Sigma_{t+1} = \Pi_{\mathcal{S}}\left(\Sigma_t - \eta \begin{pmatrix} Q_t & 0 \\ 0 & R_t \end{pmatrix}\right),$$
where $\Pi_{\mathcal{S}}$ is the Frobenius-norm projection onto
$$\mathcal{S} = \Big\{\Sigma \in \mathbb{R}^{n \times n} : \Sigma \succeq 0,\ \mathrm{Tr}(\Sigma) \le \nu,\ \Sigma_{xx} = (A \;\, B)\,\Sigma\,(A \;\, B)^\top + W \Big\}.$$
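The following sketch (ours, not the authors' implementation) spells out one round of Algorithm 1. The projection $\Pi_{\mathcal{S}}$ is itself a small convex program, which we solve here with cvxpy; all helper names and the choice of solver are assumptions of the example.

```python
import numpy as np
import cvxpy as cp

def project_onto_S(Y, A, B, W, nu):
    """Frobenius-norm projection onto
    S = {Sigma >= 0 : Tr(Sigma) <= nu, Sigma_xx = [A B] Sigma [A B]^T + W},
    written as a small SDP (one possible implementation of Pi_S)."""
    d, k = B.shape
    n = d + k
    Sigma = cp.Variable((n, n), symmetric=True)
    M = np.hstack([A, B])
    constraints = [Sigma >> 0, cp.trace(Sigma) <= nu,
                   Sigma[:d, :d] == M @ Sigma @ M.T + W]
    cp.Problem(cp.Minimize(cp.sum_squares(Sigma - Y)), constraints).solve(solver=cp.SCS)
    return Sigma.value

def predict(Sigma, x, d, rng):
    """Extract (K_t, V_t) from Sigma_t and sample u_t ~ N(K_t x_t, V_t)."""
    Sxx, Sxu, Suu = Sigma[:d, :d], Sigma[:d, d:], Sigma[d:, d:]
    K = np.linalg.solve(Sxx.T, Sxu).T        # K_t = (Sigma_t)_ux (Sigma_t)_xx^{-1}
    V = Suu - K @ Sxx @ K.T                  # V_t = (Sigma_t)_uu - K_t (Sigma_t)_xx K_t^T
    return rng.multivariate_normal(K @ x, V)

def update(Sigma, Q, R, A, B, W, nu, eta):
    """Projected online gradient step: Pi_S(Sigma_t - eta * blkdiag(Q_t, R_t))."""
    d, k = B.shape
    L = np.block([[Q, np.zeros((d, k))], [np.zeros((k, d)), R]])
    return project_onto_S(Sigma - eta * L, A, B, W, nu)
```

Each round then consists of `predict` (before seeing $Q_t, R_t$) followed by `update` (after seeing them), mirroring the protocol of Section 2.2.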
For Algorithm 1 we prove the following guarantee.

Theorem 5.1. Assume that $\mathrm{Tr}(W) \le \lambda^2$ and $W \succeq \sigma^2 I$. Given $\kappa > 0$ and $0 < \gamma \le 1$, set $\nu = 2\kappa^4\lambda^2/\gamma$ and $\eta = \sigma^3/(2C\sqrt{\nu T})$. The expected regret of Algorithm 1 compared to any $(\kappa, \gamma)$-strongly stable control matrix $K^\star$ is at most
$$J_T(\mathcal{A}) - J_T(K^\star) = O\!\left(\frac{C\,\kappa^{10}\lambda^5}{\gamma^{2.5}\,\sigma^3}\,\sqrt{T}\right),$$
provided that $T \ge 8\kappa^4\lambda^2/(\gamma\sigma^2)$.

We remark that the theorem (in fact, Algorithm 1 itself) tacitly assumes that the SDP defined by $\mathcal{S}$ is feasible; otherwise, the set of strongly stable policies is empty and the statement of Theorem 5.1 is vacuous.

Proof. Fix an arbitrary $(\kappa, \gamma)$-strongly stable control matrix $K^\star$, let $\Sigma^\star = E(K^\star)$ be its steady-state matrix, and denote by $\hat\Sigma^\star_1, \ldots, \hat\Sigma^\star_T$ the covariances induced by using $K^\star$ throughout. Also, let $\hat\Sigma_1, \ldots, \hat\Sigma_T$ be the actual observed covariance matrices induced by the algorithm. Denoting $L_t = \big(\begin{smallmatrix} Q_t & 0 \\ 0 & R_t \end{smallmatrix}\big)$, the expected regret of the algorithm can then be written as
$$\sum_{t=1}^T L_t \bullet (\hat\Sigma_t - \hat\Sigma^\star_t) = \sum_{t=1}^T L_t \bullet (\hat\Sigma_t - \Sigma_t) + \sum_{t=1}^T L_t \bullet (\Sigma_t - \Sigma^\star) + \sum_{t=1}^T L_t \bullet (\Sigma^\star - \hat\Sigma^\star_t). \tag{8}$$
Observe that the sequence $\Sigma_1, \ldots, \Sigma_T$ generated by the algorithm is feasible for the (feasibility) SDP described by the set $\mathcal{S}$. By Lemma 4.3, for any feasible $\Sigma \in \mathcal{S}$ the corresponding control matrix $K(\Sigma)$ is $(\tilde\kappa, \tilde\gamma)$-strongly stable, for $\tilde\kappa = \sqrt{\nu}/\sigma$ and $\tilde\gamma = \sigma^2/2\nu$; in particular, this applies to each of the matrices $\Sigma_t$. We proceed by bounding each of the sums on the right-hand side of Eq. (8).

We start with the second term, using a well-known regret bound for the Online Gradient Descent algorithm due to Zinkevich (2003).

Lemma 5.2. We have
$$\sum_{t=1}^T L_t \bullet (\Sigma_t - \Sigma^\star) \le \frac{4\nu^2}{\eta} + 4C^2\eta T.$$
Additionally, the $\Sigma_t$ are slowly changing in the sense that, for all $t$,
$$\|\Sigma_{t+1} - \Sigma_t\|_F \le 4C\eta. \tag{9}$$

We next bound the first term, relying on Eq. (9) and on the fact that the sequence of (randomized) policies chosen by Algorithm 1 is strongly stable.

Lemma 5.3. If $\eta \le \sigma^2/(4C\tilde\kappa^2)$, it holds that
$$\sum_{t=1}^T L_t \bullet (\hat\Sigma_t - \Sigma_t) \le \frac{16 C^2 \tilde\kappa^4}{\tilde\gamma}\,\eta T + \frac{4 C \tilde\kappa^4 \nu}{\tilde\gamma}.$$

Finally, the last term in Eq. (8) can be bounded using the strong stability of $K^\star$.

Lemma 5.4. For any $(\kappa, \gamma)$-strongly stable $K^\star$,
$$\sum_{t=1}^T L_t \bullet (\Sigma^\star - \hat\Sigma^\star_t) \le \frac{2 C \kappa^4 \nu}{\gamma}.$$

The theorem now follows by plugging the bounds established in Lemmas 5.2 to 5.4 into Eq. (8) and substituting our choices of $\eta$ and $\nu$. (See the full version of the paper for details.)

## 6. Oracle-based Algorithm

In this section we present a different approach, based on Follow the Lazy Leader (Kalai & Vempala, 2005). In contrast to Algorithm 1, this approach does not require a lower bound on the noise; instead it relies on occasionally performing resets, and it needs a bound on the cost of such a reset (established in the full version of the paper under reasonable assumptions).

We assume access to an Oracle procedure that receives cost matrices $Q$, $R$ and a parameter $\nu > 0$. It returns a control matrix $K$ that minimizes the steady-state cost, subject to $\mathrm{Tr}(X) + \mathrm{Tr}(K X K^\top) \le \nu$, where $X$ is the steady-state covariance matrix associated with $K$.[4]

[4] Oracle can be implemented by solving the SDP of Section 4.

Algorithm 2 (Follow the Lazy Leader).
Parameters: $\eta, \nu > 0$, transition matrices $A, B$, distribution $\mu$.
Sample $Q^p_1 \in \mathbb{R}^{d \times d}$, $R^p_1 \in \mathbb{R}^{k \times k}$ from $d\mu$. Set $\hat Q_1 = 0$, $\hat R_1 = 0$.
For $t = 1, 2, \ldots$:
- Receive state $x_t$.
- Compute $K_t \leftarrow \mathrm{Oracle}(\hat Q_t + Q^p_t,\, \hat R_t + R^p_t,\, \nu)$.
- Predict $u_t = K_t x_t$; receive $Q_t, R_t$.
- Update $\hat Q_{t+1} = \hat Q_t + Q_t$, $\hat R_{t+1} = \hat R_t + R_t$.
- With probability $\min\Big\{1, \frac{d\mu(Q^p_t - Q_t,\, R^p_t - R_t)}{d\mu(Q^p_t,\, R^p_t)}\Big\}$ set $Q^p_{t+1} \leftarrow Q^p_t - Q_t$ and $R^p_{t+1} \leftarrow R^p_t - R_t$; otherwise, perform a reset and set $Q^p_{t+1} \leftarrow Q^p_t$, $R^p_{t+1} \leftarrow R^p_t$.

Algorithm 2 is similar to Follow the Perturbed Leader, and in fact behaves the same in expectation. At every round $t$, Oracle is called with the sum of previously seen $Q$s and $R$s plus additional random noise matrices $Q^p_t$ and $R^p_t$. Oracle returns a matrix $K_t$ that is used to choose $u_t = K_t x_t$. For the measure $d\mu$, we use the joint measure over symmetric matrices $Q$ and $R$ whose upper triangles are sampled coordinate-wise i.i.d. from $\mathrm{Laplace}(1/\eta)$. The "laziness" of the algorithm stems from $Q^p_1, \ldots, Q^p_T$ and $R^p_1, \ldots, R^p_T$ being sampled dependently over time, so that the cumulative perturbed loss only changes with small probability between rounds. Consequently, the expected number of switches of $K_t$, as well as the expected number of resets, is only $O(\eta T)$.

The reset step in the algorithm informally drives the system to zero, at some cost. Here we assume that $B$ has full row rank, in which case we can reset in a single step. In the full version of the paper we show how resetting can be done over a sequence of steps under much weaker assumptions.

Observation 6.1. Suppose that $B$ has full row rank. Resetting the system in round $t$ can be done by setting $u_t = -B^\dagger A x_t$, so that at the next round $x_{t+1} = w_{t+1}$. Moreover, the expected cost of the reset is at most $C\nu\big(1 + \|B^\dagger A\|^2\big)$.
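The lazy perturbation update and the one-step reset admit a compact sketch (ours, illustrative only); the Oracle call that maps the perturbed cumulative costs to a control matrix is omitted here and could be implemented with the SDP of Section 4 (cf. footnote 4). The helper names and the explicit Laplace density ratio are assumptions of the example.

```python
import numpy as np

def sample_laplace_sym(dim, eta, rng):
    """Symmetric matrix whose upper triangle (incl. the diagonal) is i.i.d.
    Laplace(1/eta), matching the perturbation measure mu of Algorithm 2."""
    U = np.triu(rng.laplace(scale=1.0 / eta, size=(dim, dim)))
    return U + np.triu(U, 1).T

def lazy_update(Qp, Rp, Q, R, eta, rng):
    """With probability min{1, dmu(Qp - Q, Rp - R) / dmu(Qp, Rp)} absorb the new
    costs into the perturbation, so the perturbed cumulative loss (and hence the
    policy) is unchanged; otherwise keep the perturbation and request a reset."""
    def upper_l1(M):                          # sum of |upper-triangular entries|
        return np.abs(np.triu(M)).sum()
    log_ratio = eta * (upper_l1(Qp) + upper_l1(Rp)
                       - upper_l1(Qp - Q) - upper_l1(Rp - R))
    accept = log_ratio >= 0 or rng.random() < np.exp(log_ratio)
    if accept:
        return Qp - Q, Rp - R, False          # same lazy leader: no reset
    return Qp, Rp, True                       # leader may change: perform a reset

def reset_control(A, B, x):
    """Observation 6.1: u_t = -B^+ A x_t drives the expected next state to zero
    (so x_{t+1} = w_{t+1}) whenever B has full row rank."""
    return -np.linalg.pinv(B) @ A @ x
```

With Laplace scale $1/\eta$, the rejection (and hence reset) probability at round $t$ scales with $\eta$ times the magnitude of $(Q_t, R_t)$, which is the source of the $O(\eta T)$ bound on the expected number of switches and resets.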
For Algorithm 2 we show the following regret bound.

Theorem 6.2. Assume that $\mathrm{Tr}(W) \le \lambda^2$, and suppose that the cost of a reset is at most $C_r$. Then for $\nu = 2\kappa^4\lambda^2/\gamma$, the expected regret of Algorithm 2 against any $(\kappa, \gamma)$-strongly stable control matrix $K^\star$ satisfies
$$\mathbb{E}\big[J_T(\mathcal{A})\big] - J_T(K^\star) = O\Big((d + k)^{3/4}\sqrt{C\nu(C_r + C\nu)\,T}\Big).$$

Remark 6.3. Oracle requires the matrices $Q$ and $R$ to be PSD. Nonetheless, we invoke Oracle with the perturbed cumulative loss $(\hat Q_t + Q^p_t, \hat R_t + R^p_t)$, which might not be PSD, as the perturbations $Q^p_t$ and $R^p_t$ themselves are typically not PSD. To resolve this issue, we first notice that with high probability (Vershynin, 2010) we have $\|Q^p_t\| \le O(d/\eta)$ and $\|R^p_t\| \le O(k/\eta)$. Therefore, to guarantee that the perturbed cumulative loss is PSD, we can add a large initial "pretend" loss by setting $\hat Q_1 = (d/\eta) I$ and $\hat R_1 = (k/\eta) I$. This contributes an $O(C\nu(d + k)/\eta)$ term to the regret, which ensures that, by our choice of $\eta$, Theorem 6.2 still holds.

Proof of Theorem 6.2. Let $\hat X_1, \ldots, \hat X_T$ be the actual observed covariance matrices induced by Algorithm 2, and let $\hat X^\star_1, \ldots, \hat X^\star_T$ be the covariances induced by using a fixed control matrix $K^\star$ throughout. Similarly, define $X_1, \ldots, X_T$ to be the covariance matrices of the steady-state distributions induced by $K_1, \ldots, K_T$ respectively, and $X^\star$ that of $K^\star$. As in the analysis of OGD, the expected regret can be decomposed as follows:
$$\begin{aligned} \sum_{t=1}^T &\Big[(Q_t + K_t^\top R_t K_t) \bullet \hat X_t - (Q_t + (K^\star)^\top R_t K^\star) \bullet \hat X^\star_t\Big] \\ &= \sum_{t=1}^T (Q_t + K_t^\top R_t K_t) \bullet (\hat X_t - X_t) \\ &\quad + \sum_{t=1}^T \Big[(Q_t + K_t^\top R_t K_t) \bullet X_t - (Q_t + (K^\star)^\top R_t K^\star) \bullet X^\star\Big] \\ &\quad + \sum_{t=1}^T (Q_t + (K^\star)^\top R_t K^\star) \bullet (X^\star - \hat X^\star_t). \end{aligned} \tag{10}$$

The second term in Eq. (10), the regret in the idealized setting, is bounded via Kalai & Vempala (2005). It requires the additional observation that, by Lemma 3.3, we have $\mathrm{Tr}(X^\star) + \mathrm{Tr}(K^\star X^\star (K^\star)^\top) \le \nu$.

Lemma 6.4. Assume $\mathrm{Tr}(Q_t), \mathrm{Tr}(R_t) \le C$ for all $t$. Then
$$\sum_{t=1}^T \Big[\mathrm{Tr}\big(X_t(Q_t + K_t^\top R_t K_t)\big) - \mathrm{Tr}\big(X^\star(Q_t + (K^\star)^\top R_t K^\star)\big)\Big] = O\!\Big(\frac{d + k}{\eta} + \nu(d + k)\,\eta C T\Big).$$
Moreover, the probability that the algorithm changes $K_t$ and performs a reset at any step $t$ is at most $\eta C$.

The third term of Eq. (10) is bounded by $2C\kappa^4\nu/\gamma$ due to Lemma 5.4. It remains to bound the first term. To that end, we next show that after the system is reset, the cost of the learner on each round is at most the steady-state cost induced by the current policy.

Lemma 6.5. Suppose the learner starts playing $K$ at state $x_{t_0} = w_{t_0}$. Then the expected cost of the learner is always at most the steady-state cost induced by $K$.

Proof. Let $x_{t_0} = w_{t_0}$, and recall that $x_{t+1} = (A + BK)x_t + w_t$. Let $\hat X_t$ be the covariance of $x_t$, and let $X$ be the covariance of $x$ at the steady state induced by $K$, so that
$$X = (A + BK)X(A + BK)^\top + W.$$
We show that $\hat X_t \preceq X$ for all $t \ge t_0$ by induction. For the base case, $\hat X_{t_0} = W \preceq (A + BK)X(A + BK)^\top + W = X$. Now assume that $\hat X_t \preceq X$; this implies
$$\hat X_{t+1} = (A + BK)\hat X_t(A + BK)^\top + W \preceq (A + BK)X(A + BK)^\top + W = X.$$
Since $Q_t + K^\top R_t K$ is PSD, the expected cost of the learner at time $t$ is $(Q_t + K^\top R_t K) \bullet \hat X_t \le (Q_t + K^\top R_t K) \bullet X$.

Combining Lemmas 6.4 and 6.5 yields the theorem (see the full version of the paper for more details).

## 7. Experiments

We demonstrate our approach on the problem of regulating conditions inside a data center (DC) server floor in the presence of time-varying power costs. We learn the system dynamics from a real data center, but vary the costs and run the algorithms in simulation.

Figure 1 shows a schematic of the cooling loop of a typical data center. [Figure 1: Data center cooling loop.] Water is cooled to sub-ambient temperatures in the chiller and evaporative cooling towers, and then sent to multiple air handling units (AHUs) on the server floor.
Server racks are arranged in rows with alternating hot and cold aisles, such that all hot-air exhausts face the hot aisle. The AHUs circulate air through the building: hot air is cooled through air-water heat exchange and blown into the cold aisle, and the resulting warm water is sent back to the chiller and cooling towers. The primary goal of floor-level cooling is to control the cold-aisle temperatures (CATs) and differential air pressures (DPs). The control vector includes the blower speed and water-valve command for each of $n = 30$ AHUs, set every 30s. The state vector includes $2n$ temperature measurements and $n$ pressure measurements, as well as the sensor measurements and controls of the preceding time step. System noise is due in part to variability in server loads and in the temperature of the chilled water.

We learn a linear approximation $(A, B)$ of the dynamics in the operating range of interest from 4 hours of exploratory data with controls following a random walk. We estimate the system noise covariance $W$ as the empirical covariance of the training-data residuals. For the purpose of the experiment, we amplify the noise by a factor of 5. We set the diagonal coefficients of $Q_t$ corresponding to the most recent (normalized) sensor measurements to 1 and the remaining coefficients to 0, and keep $Q_t = Q$ constant throughout the experiment. We set the diagonal coefficients of $R_t$ corresponding to water usage (valve command) to 1 throughout, and all coefficients corresponding to power usage (fan speed) to $r_t$. We generate $r_t$ by (a) i.i.d. sampling from the uniform distribution on $[0.1, 1]$, and (b) using a random walk restricted to $[0.1, 1]$ taking steps of size $-0.1$, $+0.1$, $0$ with probabilities $0.1$, $0.1$, $0.8$ respectively.

We run the FLL algorithm on this problem with the following modifications: we set $Q^p_1 = Q$ and $R^p_1 = I_k$, an upper bound on $R_t$. Rather than executing hard resets to 0, we perform a soft reset by running a policy $K_{\mathrm{reset}}$ for $n$ steps. Here $K_{\mathrm{reset}}$ is similar to the next FLL policy, but is based on 1.1 times the corresponding state cost $Q$. We compare the cost of FLL to that of a fixed linear controller based on the average of the $R_t$ matrices, and to a Recent strategy that selects one of ten controllers corresponding to power costs $r \in \{0.1, 0.2, \ldots, 1\}$ based on the most recently observed $R_t$.

The normalized regret $R_T/T$ relative to the two strategies is shown in Figure 2. [Figure 2: Normalized regret $R_T/T$ for the FLL and Recent strategies, with power costs generated uniformly (top) and by a random walk (bottom); resets occur at the time steps indicated by dashed lines.] FLL performance quickly approaches that of the fixed linear policy in both cases, and is better than the Recent strategy on uniform random costs. The Recent strategy has an advantage when the costs vary slowly, and the empirical performance of FLL could likely be improved in this case by forgetting old costs.

## References

Abbasi, Yasin, Bartlett, Peter L., Kanade, Varun, Seldin, Yevgeny, and Szepesvári, Csaba. Online learning in Markov decision processes with adversarially chosen transition probability distributions. In Advances in Neural Information Processing Systems, pp. 2508–2516, 2013.

Abbasi-Yadkori, Yasin and Szepesvári, Csaba. Regret bounds for the adaptive control of linear quadratic systems. In Proceedings of the 24th Annual Conference on Learning Theory, pp. 1–26, 2011.

Abbasi-Yadkori, Yasin, Bartlett, Peter, and Kanade, Varun. Tracking adversarial targets. In International Conference on Machine Learning, pp. 369–377, 2014.
Abbeel, Pieter, Coates, Adam, Quigley, Morgan, and Ng, Andrew Y. An application of reinforcement learning to aerobatic helicopter flight. In Advances in Neural Information Processing Systems, pp. 1–8, 2007.

Abeille, Marc and Lazaric, Alessandro. Thompson sampling for linear-quadratic control problems. In AISTATS, 2017.

Anderson, B. D. O., Moore, J. B., and Molinari, B. P. Linear optimal control. IEEE Transactions on Systems, Man, and Cybernetics, (4):559–559, 1972.

Arora, Sanjeev, Hazan, Elad, Lee, Holden, Singh, Karan, Zhang, Cyril, and Zhang, Yi. Towards provable control for unknown linear dynamical systems. International Conference on Learning Representations, Workshop Track, 2018. URL https://openreview.net/forum?id=BygpQlbA-.

Åström, Karl Johan and Wittenmark, Björn. On self tuning regulators. Automatica, 9(2):185–199, 1973.

Auer, Peter and Ortner, Ronald. Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems, pp. 49–56, 2007.

Balakrishnan, Venkataramanan and Vandenberghe, Lieven. Semidefinite programming duality and linear time-invariant systems. IEEE Transactions on Automatic Control, 48(1):30–41, 2003.

Bertsekas, Dimitri P. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, Belmont, MA, 1995.

Bittanti, Sergio and Campi, Marco C. Adaptive control of linear time invariant systems: the bet on the best principle. Communications in Information & Systems, 6(4):299–320, 2006.

Bradtke, Steven J. Reinforcement learning applied to linear quadratic regulation. In Advances in Neural Information Processing Systems, pp. 295–302, 1993.

Campi, Marco C. and Kumar, P. R. Adaptive linear quadratic Gaussian control: the cost-biased approach revisited. SIAM Journal on Control and Optimization, 36(6):1890–1907, 1998.

Cesa-Bianchi, Nicolo and Lugosi, Gábor. Prediction, Learning, and Games. Cambridge University Press, 2006.

Dean, Sarah, Mania, Horia, Matni, Nikolai, Recht, Benjamin, and Tu, Stephen. On the sample complexity of the linear quadratic regulator. arXiv preprint arXiv:1710.01688, 2017.

Dvijotham, Krishnamurthy, Todorov, Emanuel, and Fazel, Maryam. Convex control design via covariance minimization. In Communication, Control, and Computing (Allerton), 51st Annual Allerton Conference on, pp. 93–99. IEEE, 2013.

Even-Dar, Eyal, Kakade, Sham M., and Mansour, Yishay. Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736, 2009.

Fazel, Maryam, Ge, Rong, Kakade, Sham M., and Mesbahi, Mehran. Global convergence of policy gradient methods for linearized control problems. arXiv preprint arXiv:1801.05039, 2018.

Gao, Jim and Jamidar, Ratnesh. Machine learning applications for data center optimization. Google White Paper, 2014.

Hazan, Elad. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.

Hazan, Elad, Singh, Karan, and Zhang, Cyril. Learning linear dynamical systems via spectral filtering. In Advances in Neural Information Processing Systems, pp. 6705–6715, 2017.

Ibrahimi, Morteza, Javanmard, Adel, and Van Roy, Benjamin. Efficient reinforcement learning for high dimensional linear quadratic systems. In Advances in Neural Information Processing Systems 25, pp. 2636–2644. Curran Associates, Inc., 2012.

Kalai, Adam and Vempala, Santosh. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.

Lee, Dong-Hwan and Hu, Jianghai. A semidefinite programming formulation of the LQR problem and its dual. 2016.
Lee, Ji-Woong and Khargonekar, Pramod P. Constrained infinite-horizon linear quadratic regulation of discrete-time systems. IEEE Transactions on Automatic Control, 52(10):1951–1958, 2007.

Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.

Lewis, Frank L. and Vrabie, Draguna. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits and Systems Magazine, 9(3), 2009.

Neu, Gergely and Gómez, Vicenç. Fast rates for online learning in linearly solvable Markov decision processes. Proceedings of Machine Learning Research, 65:1–22, 2017.

Schildbach, Georg, Goulart, Paul, and Morari, Manfred. Linear controller design for chance constrained systems. Automatica, 51:278–284, 2015.

Shalev-Shwartz, Shai. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.

Sheckells, Matthew, Garimella, Gowtham, and Kobilarov, Marin. Robust policy search with applications to safe vehicle navigation. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 2343–2349. IEEE, 2017.

Todorov, Emanuel. Efficient computation of optimal actions. Proceedings of the National Academy of Sciences, 106(28):11478–11483, 2009.

Vershynin, Roman. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.

Yu, Jia Yuan, Mannor, Shie, and Shimkin, Nahum. Markov decision processes with arbitrary reward processes. Mathematics of Operations Research, 34(3):737–757, 2009.

Zhou, Kemin, Doyle, John Comstock, Glover, Keith, et al. Robust and Optimal Control, volume 40. Prentice Hall, New Jersey, 1996.

Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 928–936, 2003.