# Gradient Descent Learns Linear Dynamical Systems

Journal of Machine Learning Research 19 (2018) 1-44. Submitted 9/16; Revised 7/18; Published 8/18.

Moritz Hardt (m@mrtz.org), Department of Electrical Engineering and Computer Science, University of California, Berkeley

Tengyu Ma (tengyuma@stanford.edu), Facebook AI Research

Benjamin Recht (brecht@berkeley.edu), Department of Electrical Engineering and Computer Science, University of California, Berkeley

Editor: Sujay Sanghavi

We prove that stochastic gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system. Even though the objective function is non-convex, we provide polynomial running time and sample complexity bounds under strong but natural assumptions. Linear systems identification has been studied for many decades, yet, to the best of our knowledge, these are the first polynomial guarantees for the problem we consider.

Keywords: non-convex optimization, linear dynamical system, stochastic gradient descent, generalization bounds, time series, over-parameterization

## 1. Introduction

Many learning problems are by their nature sequence problems where the goal is to fit a model that maps a sequence of inputs $x_1, \dots, x_T$ to a corresponding sequence of observations $y_1, \dots, y_T$. Text translation, speech recognition, time series prediction, video captioning, and question answering systems, to name a few, are all sequence-to-sequence learning problems. For a sequence model to be both expressive and parsimonious in its parameterization, it is crucial to equip the model with memory, thus allowing its prediction at time $t$ to depend on previously seen inputs. Recurrent neural networks form an expressive class of non-linear sequence models.
Through their many variants, such as long short-term memory networks (Hochreiter and Schmidhuber, 1997), recurrent neural networks have seen remarkable empirical success in a broad range of domains. At the core, neural networks are typically trained using some form of (stochastic) gradient descent. Even though the training objective is non-convex, it is widely observed in practice that gradient descent quickly approaches a good set of model parameters. Understanding the effectiveness of gradient descent for non-convex objectives on theoretical grounds is a major open problem in this area.

©2018 Moritz Hardt, Tengyu Ma, and Benjamin Recht. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v19/16-465.html.

If we remove all non-linear state transitions from a recurrent neural network, we are left with the state transition representation of a linear dynamical system. Notwithstanding, the natural training objective for linear systems remains non-convex due to the composition of multiple linear operators in the system. If there is any hope of eventually understanding recurrent neural networks, it will be inevitable to develop a solid understanding of this special case first.

To be sure, linear dynamical systems are important in their own right and have been studied for many decades independently of machine learning within the control theory community. Control theory provides a rich set of techniques for identifying and manipulating linear systems. The learning problem in this context corresponds to linear dynamical system identification. Maximum likelihood estimation with gradient descent is a popular heuristic for dynamical system identification (Ljung, 1998). In the context of machine learning, linear systems play an important role in numerous tasks.
For example, their estimation arises as a subroutine of reinforcement learning in robotics (Levine and Koltun, 2013), location and mapping estimation in robotic systems (Durrant-Whyte and Bailey, 2006), and estimation of pose from video (Rahimi et al., 2005).

In this work, we show that gradient descent efficiently minimizes the maximum likelihood objective of an unknown linear system given noisy observations generated by the system. More formally, we receive noisy observations generated by the following time-invariant linear system:

$$h_{t+1} = A h_t + B x_t\,, \qquad y_t = C h_t + D x_t + \xi_t \qquad (1.1)$$

Here, $A, B, C, D$ are linear transformations with compatible dimensions and we denote by $\Theta = (A, B, C, D)$ the parameters of the system. The vector $h_t$ represents the hidden state of the model at time $t$. Its dimension $n$ is called the order of the system. The stochastic noise variables $\{\xi_t\}$ perturb the output of the system, which is why the model is called an output error model in control theory. We assume the variables are drawn i.i.d. from a distribution with mean $0$ and variance $\sigma^2$.

Throughout the paper we focus on controllable and externally stable systems. A linear system is externally stable (or, equivalently, bounded-input bounded-output stable) if and only if the spectral radius of $A$, denoted $\rho(A)$, is strictly bounded by $1$. Controllability is a mild non-degeneracy assumption that we formally define later. Without loss of generality, we further assume that the transformations $B$, $C$ and $D$ have bounded Frobenius norm. This can be achieved by a rescaling of the output variables.

We assume we have $N$ pairs of sequences $(x, y)$ as training examples, $S = \{(x^{(1)}, y^{(1)}), \dots, (x^{(N)}, y^{(N)})\}$. Each input sequence $x \in \mathbb{R}^T$ of length $T$ is drawn from a distribution and $y$ is the corresponding output of the system above generated from an unknown initial state $h$. We allow the unknown initial state to vary from one input sequence to the next. This only makes the learning problem more challenging.
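Concretely, a system of the form (1.1) is easy to simulate. The sketch below is our own illustration (the function name and argument shapes are our choices, not from the paper), specialized to scalar inputs and outputs as in the SISO setting discussed later.

```python
import numpy as np

def simulate(A, B, C, D, xs, h0=None, xi=None):
    """Generate y_t = C h_t + D x_t + xi_t with h_{t+1} = A h_t + B x_t, as in (1.1).

    A: (n, n) transition matrix; B, C: length-n vectors; D: scalar (SISO case).
    xs: scalar inputs x_1..x_T; xi: optional noise sequence of the same length.
    """
    n = A.shape[0]
    h = np.zeros(n) if h0 is None else np.asarray(h0, dtype=float)
    xi = np.zeros(len(xs)) if xi is None else xi
    ys = []
    for t, x in enumerate(xs):
        ys.append(float(C @ h) + D * x + xi[t])  # observe, then update the state
        h = A @ h + B * x
    return np.array(ys)
```

For instance, with $A = 0$ the hidden state carries information for exactly one step, so the output at time $t$ depends only on $x_t$ and $x_{t-1}$.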
Our goal is to fit a linear system to the observations. We parameterize our model in exactly the same way as (1.1). That is, for linear mappings $(\hat A, \hat B, \hat C, \hat D)$, the trained model is defined as:

$$\hat h_{t+1} = \hat A \hat h_t + \hat B x_t\,, \qquad \hat y_t = \hat C \hat h_t + \hat D x_t \qquad (1.2)$$

The (population) risk of the model is obtained by feeding the learned system with the correct initial states and comparing its predictions with the ground truth in expectation over inputs and errors. Denoting by $\hat y_t$ the $t$-th prediction of the trained model starting from the correct initial state that generated $y_t$, and using $\hat\Theta$ as a shorthand for $(\hat A, \hat B, \hat C, \hat D)$, we formally define population risk as:

$$f(\hat\Theta) = \mathbb{E}_{\{x_t\},\{\xi_t\}}\left[\frac{1}{T}\sum_{t=1}^{T} \|\hat y_t - y_t\|^2\right]$$

Note that even though the prediction $\hat y_t$ is generated from the correct initial state, the learning algorithm does not have access to the correct initial state for its training sequences.

While the squared loss objective turns out to be non-convex, it has many appealing properties. Assuming the inputs $x_t$ and errors $\xi_t$ are drawn independently from a Gaussian distribution, the corresponding population objective corresponds to maximum likelihood estimation. In this work, we make the weaker assumption that the inputs and errors are drawn independently from possibly different distributions. The independence assumption is certainly idealized for some learning applications. However, in control applications the inputs can often be chosen by the controller rather than by nature. Moreover, the outputs of the system at various time steps are correlated through the unknown hidden state and therefore not independent even if the inputs are.

### 1.1 Our results

We show that we can efficiently minimize the population risk using projected stochastic gradient descent. The bulk of our work applies to single-input single-output (SISO) systems, meaning that inputs and outputs are scalars $x_t, y_t \in \mathbb{R}$. However, the hidden state can have arbitrary dimension $n$.
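Returning to the population risk defined above, a quick Monte Carlo sanity check (our own construction, with made-up parameter values) confirms that the true parameters achieve risk $\sigma^2$: with $\hat\Theta = \Theta$ and the correct initial state, $\hat y_t - y_t = -\xi_t$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, trials, sigma = 20, 2000, 0.3
A = np.array([[0.0, 1.0], [-0.06, -0.5]])   # stable: eigenvalues -0.2 and -0.3
B, C, D = np.array([0.0, 1.0]), np.array([1.0, 0.3]), 0.7

def outputs(xs, xi):
    """Run the system (1.1) from h_0 = 0 on inputs xs with noise xi."""
    h, ys = np.zeros(2), []
    for t in range(T):
        ys.append(C @ h + D * xs[t] + xi[t])
        h = A @ h + B * xs[t]
    return np.array(ys)

total = 0.0
for _ in range(trials):
    xs = rng.standard_normal(T)
    xi = sigma * rng.standard_normal(T)
    y_true = outputs(xs, xi)            # observations from the system
    y_pred = outputs(xs, np.zeros(T))   # predictions of the model with correct params
    total += np.mean((y_pred - y_true) ** 2)

risk = total / trials                   # concentrates around sigma^2
```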
Every controllable SISO system admits a convenient canonical form called controllable canonical form, which we formally introduce later in Section 1.7. In this canonical form, the transition matrix $A$ is governed by $n$ parameters $a_1, \dots, a_n$ which coincide with the coefficients of the characteristic polynomial of $A$.

The minimal assumption under which we might hope to learn the system is that the spectral radius of $A$ is smaller than $1$. However, the set of such matrices is non-convex and does not have enough structure for our analysis.¹ We will therefore make additional assumptions. The assumptions we need differ between the case where we are trying to learn a system with $n$ parameters, and the case where we allow ourselves to over-specify the trained model with $n' > n$ parameters. The former is sometimes called proper learning, while the latter is called improper learning. In the improper case, we are essentially able to learn any system with spectral radius less than $1$ under a mild separation condition on the roots of the characteristic polynomial. Our assumption in the proper case is stronger, and we introduce it next.

1. In both the controllable canonical form and the standard parameterization of the matrix $A$, the set of matrices with spectral radius less than $1$ is not convex.

### 1.2 Proper learning

Suppose the state transition matrix $A$ has characteristic polynomial $\det(zI - A) = z^n + a_1 z^{n-1} + \dots + a_n$, which, by and large, decides the difficulty of learning according to our analysis. (In fact, we will parameterize $A$ in a way so that the coefficients of the characteristic polynomial are the parameters of the learning problem. See Section 1.7 for the detailed setup.) Consider the corresponding polynomial $q(z) = 1 + a_1 z + a_2 z^2 + \dots + a_n z^n$ over the complex numbers $\mathbb{C}$.

Figure 1: An example of a polynomial $q$ that satisfies our assumption.
The unit circle is the collection of inputs to $q$, and the other curve shows the corresponding outputs (with the corresponding colors). We see that the image of the polynomial stays in the wedge containing all complex numbers $z$ satisfying $\Re(q(z)) > |\Im(q(z))|$.

We will require the state transition matrix to satisfy that the image of the unit circle on the complex plane under the polynomial $q$ is contained in the cone of complex numbers whose real part is larger than their absolute imaginary part. Formally, for all $z \in \mathbb{C}$ such that $|z| = 1$, we require that $\Re(q(z)) > |\Im(q(z))|$. Here, $\Re(z)$ and $\Im(z)$ denote the real and imaginary part of $z$, respectively. We illustrate this condition in Figure 1 for a degree-4 system.

Our assumption has three important implications. First, it implies (via Rouché's theorem) that the spectral radius of $A$ is smaller than $1$ and therefore ensures the stability of the system. Second, the vectors satisfying our assumption form a convex set in $\mathbb{R}^n$. Finally, it ensures that the objective function is weakly quasi-convex, a condition we introduce later when we show that it enables stochastic gradient descent to make sufficient progress.

We note in passing that our assumption can be satisfied via the $\ell_1$-norm constraint $\|a\|_1 \le \sqrt{2}/2$. Moreover, if we pick random Gaussian coefficients with expected norm bounded by $o(1/\log n)$, then the resulting vector will satisfy our assumption with probability $1 - o(1)$. Roughly speaking, the assumption requires that the roots of the characteristic polynomial $p(z) = z^n + a_1 z^{n-1} + \dots + a_n$ are relatively dispersed inside the unit circle. (For comparison, on the other end of the spectrum, the polynomial $p(z) = (z - 0.99)^n$ has all its roots colliding at the same point and doesn't satisfy the assumption.)
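The assumption is easy to check numerically by sampling the unit circle. A small sketch (function name and grid size are our choices); it also confirms the two examples just mentioned: a coefficient vector with small $\ell_1$-norm passes, while the coefficients of $(z - 0.99)^n$ do not.

```python
import numpy as np

def satisfies_assumption(a, grid=4000):
    """Check (on a finite grid) that q(z) = 1 + a_1 z + ... + a_n z^n maps the
    unit circle into the cone Re(q(z)) > |Im(q(z))|."""
    a = np.asarray(a, dtype=float)
    z = np.exp(2j * np.pi * np.arange(grid) / grid)
    q = 1.0 + sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    return bool(np.all(q.real > np.abs(q.imag)))

ok = satisfies_assumption([0.3, 0.2, 0.1])            # ||a||_1 = 0.6 <= sqrt(2)/2
bad = satisfies_assumption(np.poly([0.99] * 4)[1:])   # coefficients of (z - 0.99)^4
```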
**Theorem 1.1 (Informal)** Under our assumption, projected stochastic gradient descent, when given $N$ sample sequences of length $T$, returns parameters $\hat\Theta$ with population risk

$$\mathbb{E}\, f(\hat\Theta) \le f(\Theta) + \epsilon\,,$$

for an error term $\epsilon$ that decreases polynomially in $N$ and $T$. Here the expectation on the left-hand side is with respect to the randomness of the algorithm. Recall that $f(\Theta)$ is the population risk of the optimal system, and $\sigma^2$ refers to the variance of the noise variables. We also assume that the inputs $x_t$ are drawn from a pairwise independent distribution with mean $0$ and variance $1$. Note, however, that this does not imply independence of the outputs, as these are correlated by a common hidden state.

The stated version of our result glosses over the fact that we need our assumption to hold with a small amount of slack; a precise version follows in Section 4. Our theorem establishes a polynomial convergence rate for stochastic gradient descent. Since each iteration of the algorithm only requires a sequence of matrix operations and an efficient projection step, the running time is polynomial, as well. Likewise, the sample requirements are polynomial, since each iteration requires only a single fresh example. An important feature of this result is that the error decreases with both the length $T$ and the number of samples $N$. The dependency on the dimension $n$, on the other hand, is likely to be quite loose, and tighter bounds are left for future work. The algorithm requires a (polynomial-time) projection step to a convex set at every iteration (formally defined in Section 4 and Algorithm 1). Computationally, it can be a bottleneck, although it is unlikely to be required in practice and may be an artifact of our analysis.

### 1.3 The power of over-parameterization

Endowing the model with additional parameters compared to the ground truth turns out to be surprisingly powerful. We show that we can essentially remove the assumption we previously made in proper learning. The idea is simple.
If $p$ is the characteristic polynomial of $A$ of degree $n$, we can find a system of order $n' > n$ such that the characteristic polynomial of its transition matrix becomes $p \cdot p'$ for some polynomial $p'$ of order $n' - n$. This means that to apply our result we only need the polynomial $p \cdot p'$ to satisfy our assumption. At this point, we can choose $p'$ to be an approximation of the inverse $p^{-1}$. For a sufficiently good approximation, the resulting polynomial $p \cdot p'$ is close to $1$ and therefore satisfies our assumption. Such an approximation exists generically for $n' = O(n)$ under mild non-degeneracy assumptions on the roots of $p$. In particular, any small random perturbation of the roots would suffice.

**Theorem 1.2 (Informal)** Under a mild non-degeneracy assumption, stochastic gradient descent, when given $N$ sample sequences of length $T$, returns parameters $\hat\Theta$ corresponding to a system of order $n' = O(n)$ with population risk $f(\hat\Theta) \le f(\Theta) + \epsilon$, for an error term $\epsilon$ that decreases polynomially in $N$ and $T$.

We remark that the idea we sketched also shows that, in the extreme, improper learning of linear dynamical systems becomes easy in the sense that the problem can be solved using linear regression against the outputs of the system. However, our general result interpolates between the proper case and the regime where linear regression works. We discuss this in more detail in Section 6.3.

### 1.4 Multi-input multi-output systems

Both results we saw immediately extend to single-input multi-output (SIMO) systems, as the dimensionality of $C$ and $D$ is irrelevant for us. The case of multi-input multi-output (MIMO) systems is more delicate. Specifically, our results carry over to a broad family of multi-input multi-output systems. However, in general, MIMO systems no longer enjoy canonical forms like SISO systems. In Section 7, we introduce a natural generalization of controllable canonical form for MIMO systems and extend our results to this case.
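The inverse-approximation idea can be illustrated directly: if $q$ has all its roots outside the unit disk, the power series of $1/q$ converges geometrically, so a truncation $u$ of it makes $q \cdot u \approx 1$. A sketch with a toy polynomial (the helper name and the degree-20 truncation are our choices):

```python
import numpy as np

def truncated_inverse(q, degree):
    """Power-series coefficients u_0..u_degree of 1/q(z), where q = [q_0, q_1, ...]
    with q_0 != 0 (here q_0 = 1), computed by long division."""
    u = np.zeros(degree + 1)
    u[0] = 1.0 / q[0]
    for k in range(1, degree + 1):
        s = sum(q[j] * u[k - j] for j in range(1, min(k, len(q) - 1) + 1))
        u[k] = -s / q[0]
    return u

# q(z) = 1 - 0.5 z: a single root at z = 2, well outside the unit circle
q = np.array([1.0, -0.5])
u = truncated_inverse(q, degree=20)
prod = np.polynomial.polynomial.polymul(q, u)  # q(z) * u(z) = 1 - 0.5^21 z^21
```

The product differs from the constant polynomial $1$ only in a single coefficient of size $0.5^{21}$, so it trivially satisfies a cone condition of the kind discussed above.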
### 1.5 Related work

System identification is a core problem in dynamical systems and has been studied in depth for many years. The most popular reference on this topic is the text by Ljung (1998). Nonetheless, the list of non-asymptotic results on identifying linear systems from noisy data is surprisingly short. Several authors have recently tried to estimate the sample complexity of dynamical system identification using machine learning tools (Vidyasagar and Karandikar, 2008; Campi and Weyer, 2002; Weyer and Campi, 1999). All of these results are rather pessimistic, with sample complexity bounds that are exponential in the degree of the linear system and other relevant quantities. Contrastingly, we prove that gradient descent has an associated polynomial sample complexity in all of these parameters. Moreover, all of these papers only focus on how well the empirical risk approximates the true population risk, and do not provide guarantees about any algorithmic schemes for minimizing the empirical risk.

The only result to our knowledge which provides polynomial sample complexity for identifying linear dynamical systems is Shah et al. (2012). Here, the authors show that if certain frequency-domain information about the linear dynamical system is observed, then the linear system can be identified by solving a second-order cone programming problem. This result is about improper learning only, and the size of the resulting system may be quite large, scaling as $(1 - \rho(A))^{-2}$. As we describe in this work, very simple algorithms work in the improper setting when the system degree is allowed to be polynomial in $(1 - \rho(A))^{-1}$. Moreover, it is not immediately clear how to translate the frequency-domain results to the time-domain identification problem discussed above.

Our main assumption about the image of the polynomial $q(z)$ is an appeal to the theory of passive systems.
A system is passive if the dot product between the input sequence $u_t$ and the output sequence $y_t$ is strictly positive. Physically, this notion corresponds to systems that cannot create energy. For example, a circuit made solely of resistors, capacitors, and inductors would be a passive electrical system. If one added an amplifier to the internals of the system, then it would no longer be passive. The set of passive systems is a subset of the set of stable systems, and the subclass is somewhat easier to work with mathematically. Indeed, Megretski (2008) used tools from passive systems to provide a relaxation technique for a family of identification problems in dynamical systems. His approach is to lower bound a nonlinear least-squares cost with a convex functional. However, he does not prove that his technique can identify any of the systems, even asymptotically. Stoica and Söderström (Söderström and Stoica, 1982; Stoica and Söderström, 1982, 1984) and later Bazanella et al. (Bazanella et al., 2008; Eckhard and Bazanella, 2011) prove the quasi-convexity of a cost function under a passivity condition in the context of system identification, but no sample complexity or global convergence proofs are provided.

### 1.6 Proof overview

The first important step in our proof is to develop the population risk in the Fourier domain, where it is closely related to what we call the idealized risk. The idealized risk essentially captures the $\ell_2$-difference between the transfer function of the learned system and the ground truth. The transfer function is a fundamental object in control theory. Any linear system is completely characterized by its transfer function $G(z) = C(zI - A)^{-1}B$. In the case of a SISO system, the transfer function is a rational function of degree $n$ over the complex numbers and can be written as $G(z) = s(z)/p(z)$. In the canonical form introduced in Section 1.7, the coefficients of $p(z)$ are precisely the parameters that specify $A$.
Moreover, $z^n p(1/z) = 1 + a_1 z + a_2 z^2 + \dots + a_n z^n$ is the polynomial we encountered in the introduction. Under the assumption illustrated earlier, we show in Section 3 that the idealized risk is weakly quasi-convex (Lemma 3.3). Quasi-convexity implies that gradients cannot vanish except at the optimum of the objective function; we review this (mostly known) material in Section 2. In particular, this lemma implies that in principle we can hope to show that gradient descent converges to a global optimum. However, there are several important issues that we need to address. First, the result only applies to the idealized risk, not our actual population risk objective; in particular, it is not clear how to obtain unbiased gradients of the idealized risk objective. Second, there is a subtlety in even defining a suitable empirical risk objective. The reason is that the risk is defined with respect to the correct initial state of the system, which we do not have access to during training. We overcome both of these problems. In particular, we design an almost unbiased estimator of the gradient of the idealized risk in Lemma 5.4 and give variance bounds for the gradient estimator (Lemma 5.5).

Our results on improper learning in Section 6 rely on a surprisingly simple but powerful insight. We can extend the degree of the transfer function $G(z)$ by extending both numerator and denominator with a polynomial $u(z)$ such that $G(z) = s(z)u(z)/(p(z)u(z))$. While this results in an equivalent system in terms of input-output behavior, it can dramatically change the geometry of the optimization landscape. In particular, only $p(z)u(z)$ has to satisfy the assumption of our proper learning algorithm. This allows us, for example, to put $u(z) \approx p(z)^{-1}$ so that $p(z)u(z) \approx 1$, hence trivially satisfying our assumption. A suitable inverse approximation exists under light assumptions and requires degree no more than $d = O(n)$. Algorithmically, there is almost no change.
We simply run stochastic gradient descent with $n + d$ model parameters rather than $n$ model parameters.

### 1.7 Preliminaries

For a complex matrix (or vector, or number) $C$, we use $\Re(C)$ to denote the real part, $\Im(C)$ the imaginary part, $\bar C$ the conjugate, and $C^* = \bar C^\top$ its conjugate transpose. We use $|\cdot|$ to denote the absolute value of a complex number $c$. For complex vectors $u$ and $v$, we use $\langle u, v\rangle = u^* v$ to denote the inner product, and $\|u\| = \sqrt{u^* u}$ is the norm of $u$. For complex matrices $A$ and $B$ with the same dimensions, $\langle A, B\rangle = \mathrm{tr}(A^* B)$ defines an inner product, and $\|A\|_F = \sqrt{\mathrm{tr}(A^* A)}$ is the Frobenius norm. For a square matrix $A$, we use $\rho(A)$ to denote the spectral radius of $A$, that is, the largest absolute value of the elements in the spectrum of $A$. We use $I_n$ to denote the identity matrix of dimension $n \times n$, and we drop the subscript when it is clear from the context. We let $e_i$ denote the $i$-th standard basis vector.

A SISO system of order $n$ is in controllable canonical form if $A$ and $B$ have the following form:

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \qquad (1.4)$$

We will parameterize $\hat A, \hat B, \hat C, \hat D$ accordingly. We will write $A = \mathrm{CC}(a)$ for brevity, where $a = [a_n, \dots, a_1]$ collects the unknown coefficients appearing (negated) in the last row of the matrix $A$. We will use $\hat a$ to denote the corresponding training variables for $a$. Since $B$ is known here, $\hat B$ is no longer a trainable parameter and is forced to be equal to $B$. Moreover, $C$ is a row vector, and we use $[c_1, \dots, c_n]$ to denote its coordinates (and similarly for $\hat C$).

A SISO system is controllable if and only if the matrix $[B \mid AB \mid A^2B \mid \cdots \mid A^{n-1}B]$ has rank $n$. This condition means that all hidden states should be reachable from some initial condition and input trajectory. Any controllable system admits a controllable canonical form (Heij et al., 2007). For a vector $a = [a_n, \dots, a_1]$, let $p_a(z)$ denote the polynomial

$$p_a(z) = z^n + a_1 z^{n-1} + \dots + a_n\,.$$
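The canonical form is easy to construct and check programmatically. The sketch below (helper name `CC` borrowed from the notation above; it takes the coefficients in the order $a_1, \dots, a_n$) verifies that the characteristic polynomial of $\mathrm{CC}(a)$ is $p_a$:

```python
import numpy as np

def CC(a):
    """Controllable-canonical-form transition matrix for
    p_a(z) = z^n + a_1 z^{n-1} + ... + a_n, given a = [a_1, ..., a_n].
    The last row of the matrix is [-a_n, ..., -a_1]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                    # shift structure of the state
    A[-1, :] = -np.asarray(a, dtype=float)[::-1]  # negated, reversed coefficients
    return A

A = CC([0.5, 0.06])
# np.poly(A) recovers the characteristic polynomial coefficients [1, a_1, ..., a_n]
```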
When $a$ defines the matrix $A$ that appears in controllable canonical form, then $p_a$ is precisely the characteristic polynomial of $A$; that is, $p_a(z) = \det(zI - A)$.

## 2. Gradient descent and quasi-convexity

It is known that under certain mild conditions, (stochastic) gradient descent converges to a local minimum even on non-convex functions (Ge et al., 2015; Lee et al., 2016). Usually, though, the challenge for concrete problems is to prove that there is no spurious local minimum other than the target solution. Here we introduce a condition similar to the quasi-convexity notion in Hazan et al. (2015), which ensures that any point with vanishing gradient is the optimal solution. Roughly speaking, the condition says that at any point $\theta$, the negative gradient $-\nabla f(\theta)$ should be positively correlated with the direction $\theta^* - \theta$ pointing towards the optimum. Our condition is slightly weaker than that in Hazan et al. (2015), since we only require quasi-convexity and smoothness with respect to the optimum, and this (simple) extension will be necessary for our analysis.

**Definition 2.1 (Weak quasi-convexity)** We say an objective function $f$ is $\tau$-weakly-quasi-convex ($\tau$-WQC) over a domain $\mathcal{B}$ with respect to the global minimum $\theta^*$ if there is a positive constant $\tau > 0$ such that for all $\theta \in \mathcal{B}$,

$$\langle \nabla f(\theta),\, \theta - \theta^* \rangle \ge \tau\,(f(\theta) - f(\theta^*))\,. \qquad (2.1)$$

We further say $f$ is $\Gamma$-weakly-smooth if for any point $\theta$, $\|\nabla f(\theta)\|^2 \le \Gamma\,(f(\theta) - f(\theta^*))$. Note that any $\Gamma$-smooth function in the usual sense (that is, $\|\nabla^2 f\| \le \Gamma$) is $O(\Gamma)$-weakly-smooth. For a random vector $X \in \mathbb{R}^n$, we define its variance to be $\mathrm{Var}[X] = \mathbb{E}[\|X - \mathbb{E}X\|^2]$.

**Definition 2.2** We call $r(\theta)$ an unbiased estimator of $\nabla f(\theta)$ with variance $V$ if it satisfies $\mathbb{E}[r(\theta)] = \nabla f(\theta)$ and $\mathrm{Var}[r(\theta)] \le V$.
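Definition 2.1 can be checked numerically for simple objectives. For instance (a toy example of ours, not from the paper), $f(\theta) = \|\theta - \theta^*\|^2$ satisfies $\langle \nabla f(\theta), \theta - \theta^*\rangle = 2 f(\theta)$, so it is $\tau$-WQC for any $\tau \le 2$:

```python
import numpy as np

theta_star = np.array([1.0, -2.0])
f = lambda th: float(np.sum((th - theta_star) ** 2))
grad = lambda th: 2.0 * (th - theta_star)

rng = np.random.default_rng(0)
tau = 2.0
# check (2.1) at random points: <grad f(theta), theta - theta*> >= tau (f(theta) - f*)
wqc_holds = all(
    np.dot(grad(th), th - theta_star) >= tau * (f(th) - f(theta_star)) - 1e-9
    for th in rng.standard_normal((100, 2))
)
```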
Projected stochastic gradient descent over some closed convex set $\mathcal{B}$ with learning rate $\eta > 0$ refers to the following algorithm, in which $\Pi_{\mathcal{B}}$ denotes the Euclidean projection onto $\mathcal{B}$:

$$\text{for } k = 0 \text{ to } K-1:\quad w_{k+1} = \theta_k - \eta\, r(\theta_k)\,,\quad \theta_{k+1} = \Pi_{\mathcal{B}}(w_{k+1})\,;\quad \text{return } \theta_j \text{ with } j \text{ uniformly picked from } 1, \dots, K \qquad (2.2)$$

The following proposition is well known for convex objective functions (corresponding to $1$-weakly-quasi-convex functions). We extend it (straightforwardly) to the case when $\tau$-WQC holds with any positive constant $\tau$.

**Proposition 2.3** Suppose the objective function $f$ is $\tau$-weakly-quasi-convex and $\Gamma$-weakly-smooth, and $r(\cdot)$ is an unbiased estimator of $\nabla f(\theta)$ with variance $V$. Moreover, suppose the global minimum $\theta^*$ belongs to $\mathcal{B}$, and the initial point $\theta_0$ satisfies $\|\theta_0 - \theta^*\| \le R$. Then projected gradient descent (2.2) with a proper learning rate returns $\theta_K$ in $K$ iterations with expected error

$$\mathbb{E}\, f(\theta_K) - f(\theta^*) \le O\left(\max\left\{\frac{\Gamma R^2}{\tau^2 K},\, \frac{R\sqrt{V}}{\tau\sqrt{K}}\right\}\right)\,.$$

**Remark 2.4** It is straightforward to see (from the proof) that the algorithm tolerates inverse exponential bias, namely bias on the order of $\exp(-\Omega(n))$, in the gradient estimator. Technically, suppose $\mathbb{E}[r(\theta)] = \nabla f(\theta) - \zeta$; then $f(\theta_K) \le O\left(\max\left\{\frac{\Gamma R^2}{\tau^2 K}, \frac{R\sqrt{V}}{\tau\sqrt{K}}\right\}\right) + \mathrm{poly}(K)\,\|\zeta\|$. Throughout the paper, we assume that the error that we are shooting for is inverse polynomial, namely $1/n^C$ for some absolute constant $C$, and therefore the effect of inverse exponential bias is negligible.

We defer the proof of Proposition 2.3 to Appendix A; it is a simple variation of the standard convergence analysis of stochastic gradient descent (see, for example, Bottou (1998)). Finally, we note that the sum of two quasi-convex functions may no longer be quasi-convex. However, if a sequence of functions is $\tau$-WQC with respect to a common point $\theta^*$, then their sum is also $\tau$-WQC. This follows from the linearity of the gradient operation.

**Proposition 2.5** Suppose functions $f_1, \dots, f_n$ are individually $\tau$-weakly-quasi-convex in $\mathcal{B}$ with respect to a common global minimum $\theta^*$. Then for non-negative weights $w_1, \ldots,$
$w_n$, the linear combination $f = \sum_{i=1}^{n} w_i f_i$ is also $\tau$-weakly-quasi-convex with respect to $\theta^*$ in $\mathcal{B}$.

## 3. Population risk in frequency domain

We next establish conditions under which the risk is weakly-quasi-convex. Our strategy is to first approximate the risk functional $f(\hat\Theta)$ by what we call the idealized risk. This approximation of the objective function is fairly straightforward; we justify it toward the end of the section. We can show that

$$f(\hat\Theta) \approx \|D - \hat D\|^2 + \sum_{k=0}^{\infty} \left\|\hat C \hat A^k B - C A^k B\right\|^2\,. \qquad (3.1)$$

The leading term $\|D - \hat D\|^2$ is convex in $\hat D$, which appears nowhere else in the objective. It therefore doesn't affect the convergence of the algorithm (up to lower-order terms) by virtue of Proposition 2.5, and we restrict our attention to the remaining terms.

**Definition 3.1 (Idealized risk)** We define the idealized risk as

$$g(\hat A, \hat C) = \sum_{k=0}^{\infty} \left\|\hat C \hat A^k B - C A^k B\right\|^2\,. \qquad (3.2)$$

We now use basic concepts from control theory (see Heij et al. (2007); Hespanha (2009) for more background) to express the idealized risk (3.2) in the Fourier domain. The transfer function of the linear system is

$$G(z) = C(zI - A)^{-1}B\,. \qquad (3.3)$$

Note that $G(z)$ is a rational function over the complex numbers of degree $n$, and hence we can find polynomials $s(z)$ and $p(z)$ such that $G(z) = s(z)/p(z)$, with the convention that the leading coefficient of $p(z)$ is $1$. In controllable canonical form (1.4), the coefficients of $p$ correspond to the last row of $A$, while the coefficients of $s$ correspond to the entries of $C$. Also note that

$$G(z) = \sum_{t=1}^{\infty} z^{-t}\, C A^{t-1} B = \sum_{t=1}^{\infty} z^{-t}\, r_{t-1}\,.$$

The sequence $r = (r_0, r_1, \dots, r_t, \dots) = (CB, CAB, \dots, CA^tB, \dots)$ is called the impulse response of the linear system. The behavior of a linear system is uniquely determined by its impulse response and therefore by $G(z)$. Analogously, we denote the transfer function of the learned system by $\hat G(z) = \hat C(zI - \hat A)^{-1}B = \hat s(z)/\hat p(z)$.
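The identity $G(z) = \sum_{t \ge 1} z^{-t} r_{t-1}$ is easy to confirm numerically for a stable system, since the series converges geometrically. A sketch (the concrete matrices are our own toy example):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.06, -0.5]])   # spectral radius 0.3 < 1
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.3])

def G(z):
    """Transfer function C (zI - A)^{-1} B."""
    return complex(C @ np.linalg.solve(z * np.eye(2) - A, B))

# impulse response r_k = C A^k B, k = 0, 1, ...
r, h = [], B.copy()
for _ in range(200):
    r.append(float(C @ h))
    h = A @ h

z = np.exp(0.7j)                            # a point on the unit circle
series = sum(rk * z ** (-(k + 1)) for k, rk in enumerate(r))
# series agrees with G(z) up to a truncation error of order rho(A)^200
```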
The idealized risk (3.2) is only a function of the impulse response $\hat r$ of the learned system, and therefore it is only a function of $\hat G(z)$. Recall that $C = [c_1, \dots, c_n]$ as defined in Section 1.7. For future reference, we note that by some elementary calculation (see Lemma B.1), we have

$$G(z) = C(zI - A)^{-1}B = \frac{c_1 + \dots + c_n z^{n-1}}{z^n + a_1 z^{n-1} + \dots + a_n}\,, \qquad (3.4)$$

which implies that $s(z) = c_1 + \dots + c_n z^{n-1}$ and $p(z) = z^n + a_1 z^{n-1} + \dots + a_n$. With these definitions in mind, we are ready to express the idealized risk in the Fourier domain.

**Proposition 3.2** Suppose $p_{\hat a}(z)$ has all its roots inside the unit circle. Then the idealized risk $g(\hat a, \hat C)$ can be written in the Fourier domain as

$$g(\hat A, \hat C) = \frac{1}{2\pi}\int_0^{2\pi} \left|\hat G(e^{i\theta}) - G(e^{i\theta})\right|^2 d\theta\,.$$

**Proof** Note that $G(e^{i\theta})$ is the Fourier transform of the sequence $\{r_k\}$, and likewise $\hat G(e^{i\theta})$ is the Fourier transform of $\{\hat r_k\}$ (see footnote 2). Therefore, by Parseval's theorem, we have that

$$g(\hat A, \hat C) = \sum_{k=0}^{\infty} \|\hat r_k - r_k\|^2 = \frac{1}{2\pi}\int_0^{2\pi} \left|\hat G(e^{i\theta}) - G(e^{i\theta})\right|^2 d\theta\,. \qquad (3.5)$$

### 3.1 Quasi-convexity of the idealized risk

Now that we have a convenient expression for the risk in the Fourier domain, we can prove that the idealized risk $g(\hat A, \hat C)$ is weakly-quasi-convex when $\hat a$ is not so far from the true $a$, in the sense that $p_a(z)$ and $p_{\hat a}(z)$ have an angle less than $\pi/2$ for every $z$ on the unit circle. We will use the convention that $a$ and $\hat a$ refer to the parameters that specify $A$ and $\hat A$, respectively.

**Lemma 3.3** For $\tau > 0$ and every $\hat C$, the idealized risk $g(\hat A, \hat C)$ is $\tau$-weakly-quasi-convex over the domain

$$N_\tau(a) = \left\{\hat a \in \mathbb{R}^n : \Re\left(\frac{p_a(z)}{p_{\hat a}(z)}\right) \ge \tau/2\,,\ \forall\, z \in \mathbb{C} \text{ s.t. } |z| = 1\right\}\,. \qquad (3.6)$$

**Proof** We first analyze a single term $h = |\hat G(z) - G(z)|^2$. Recall that $\hat G(z) = \hat s(z)/\hat p(z)$ where $\hat p(z) = p_{\hat a}(z) = z^n + \hat a_1 z^{n-1} + \dots + \hat a_n$. Note that $z$ is fixed and $h$ is a function of $\hat a$ and $\hat C$.
Then it is straightforward to see that

$$\frac{\partial h}{\partial \hat s(z)} = 2\Re\left(\frac{1}{\hat p(z)}\,\overline{\left(\hat G(z) - G(z)\right)}\right) \qquad (3.7)$$

and

$$\frac{\partial h}{\partial \hat p(z)} = -2\Re\left(\frac{\hat s(z)}{\hat p(z)^2}\,\overline{\left(\hat G(z) - G(z)\right)}\right)\,. \qquad (3.8)$$

Since $\hat s(z)$ and $\hat p(z)$ are linear in $\hat C$ and $\hat a$ respectively, by the chain rule we have that

$$\left\langle \frac{\partial h}{\partial \hat a},\, \hat a - a \right\rangle + \left\langle \frac{\partial h}{\partial \hat C},\, \hat C - C \right\rangle = \frac{\partial h}{\partial \hat p(z)}\left\langle \frac{\partial \hat p(z)}{\partial \hat a},\, \hat a - a\right\rangle + \frac{\partial h}{\partial \hat s(z)}\left\langle \frac{\partial \hat s(z)}{\partial \hat C},\, \hat C - C\right\rangle = \frac{\partial h}{\partial \hat p(z)}\left(\hat p(z) - p(z)\right) + \frac{\partial h}{\partial \hat s(z)}\left(\hat s(z) - s(z)\right)\,.$$

Plugging the formulas (3.7) and (3.8) for $\frac{\partial h}{\partial \hat s(z)}$ and $\frac{\partial h}{\partial \hat p(z)}$ into the equation above, we obtain that

$$\left\langle \frac{\partial h}{\partial \hat a},\, \hat a - a \right\rangle + \left\langle \frac{\partial h}{\partial \hat C},\, \hat C - C \right\rangle = 2\Re\left(\frac{\hat s(z)p(z) - s(z)\hat p(z)}{\hat p(z)^2}\,\overline{\left(\hat G(z) - G(z)\right)}\right) = 2\Re\left(\frac{p(z)}{\hat p(z)}\right)\left|\hat G(z) - G(z)\right|^2\,.$$

Hence $h = |\hat G(z) - G(z)|^2$ is $\tau$-weakly-quasi-convex with $\tau = 2\min_{|z|=1} \Re\left\{\frac{p(z)}{\hat p(z)}\right\}$. This implies our claim, since by Proposition 3.2 the idealized risk $g$ is a convex combination of functions of the form $|\hat G(z) - G(z)|^2$ for $|z| = 1$, and Proposition 2.5 shows that convex combination preserves weak quasi-convexity.

2. The Fourier transform exists since $\|\hat r_k\| = \|\hat C \hat A^k \hat B\| \le c\,\rho(\hat A)^k$, where $c$ doesn't depend on $k$ and $\rho(\hat A) < 1$.

For future reference, we also prove that the idealized risk is $O(n^2/\tau_1^4)$-weakly smooth.

**Lemma 3.4** The idealized risk $g(\hat A, \hat C)$ is $\Gamma$-weakly smooth with $\Gamma = O(n^2/\tau_1^4)$.

**Proof** By equation (3.7) and the chain rule, the gradient of $g$ with respect to $\hat C$ is an average over $|z| = 1$ of terms of the form $2\Re\big(\frac{1}{\hat p(z)}\,\overline{(\hat G(z) - G(z))}\,[1, \dots, z^{n-1}]\big)$. Therefore we can bound the norm of the gradient by

$$\left\|\frac{\partial g}{\partial \hat C}\right\|^2 \le \frac{4}{2\pi}\int_0^{2\pi} \left\|[1, \dots, z^{n-1}]\right\|^2 \left|\frac{1}{\hat p(z)}\right|^2 \left|\hat G(z) - G(z)\right|^2 d\theta \le O(n/\tau_1^2)\, g(\hat A, \hat C)\,.$$

Similarly, one can show that $\left\|\frac{\partial g}{\partial \hat a}\right\|^2 \le O(n^2/\tau_1^4)\, g(\hat A, \hat C)$.

### 3.2 Justifying idealized risk

We need to justify the approximation we made in Equation (3.1).

**Lemma 3.5** Assume that the $x_t$ are drawn i.i.d. from an arbitrary distribution with mean $0$ and variance $1$, and the $\xi_t$ are drawn i.i.d. with mean $0$ and variance $\sigma^2$. Then the population risk $f(\hat\Theta)$ can be written as

$$f(\hat\Theta) = (\hat D - D)^2 + \sum_{k=1}^{T-1}\left(1 - \frac{k}{T}\right)\left\|\hat C \hat A^{k-1} B - C A^{k-1} B\right\|^2 + \sigma^2\,. \qquad (3.9)$$

The idealized risk is an upper bound on the population risk $f(\hat\Theta)$, by equations (3.1) and (3.9).
We do not have to quantify the gap between them, because later, in Algorithm 1, we directly optimize the idealized risk by constructing an estimator of its gradient; the optimization therefore guarantees a bounded idealized risk, which translates to a bounded population risk. See Section 5 for details.

Proof [Proof of Lemma 3.5] Under the distributional assumptions on $\xi_t$ and $x_t$, we can calculate the objective function analytically. We write out $y_t, \hat y_t$ in terms of the inputs:

$$y_t = D x_t + \sum_{k=1}^{t-1} C A^{t-k-1} B x_k + C A^{t-1} h_0 + \xi_t, \qquad \hat y_t = \hat D x_t + \sum_{k=1}^{t-1} \hat C \hat A^{t-k-1} \hat B x_k + C A^{t-1} h_0.$$

Therefore, using the fact that the $x_t$'s are independent with mean 0 and covariance $I$, the expectation of the error can be calculated (formally by Claim B.2):

$$\mathbb{E}\,\|\hat y_t - y_t\|^2 = \|\hat D - D\|_F^2 + \sum_{k=1}^{t-1} \big\|\hat C \hat A^{t-k-1}\hat B - C A^{t-k-1} B\big\|_F^2 + \mathbb{E}[\|\xi_t\|^2]. \qquad (3.10)$$

Using $\mathbb{E}[\|\xi_t\|^2] = \sigma^2$, it follows that

$$f(\hat\Theta) = \|\hat D - D\|_F^2 + \sum_{k=1}^{T-1}\Big(1 - \frac{k}{T}\Big)\big\|\hat C \hat A^{k-1}\hat B - C A^{k-1} B\big\|_F^2 + \sigma^2. \qquad (3.11)$$

Recall that under the controllable canonical form (1.4), $B = e_n$ is known, and therefore $\hat B = B$ is no longer a variable. The expected objective function (3.11) then simplifies to

$$f(\hat\Theta) = (\hat D - D)^2 + \sum_{k=1}^{T-1}\Big(1 - \frac{k}{T}\Big)\big\|\hat C \hat A^{k-1} B - C A^{k-1} B\big\|^2 + \sigma^2.$$

The previous lemma does not yet control the higher-order contributions present in the idealized risk. This requires additional structure, which we introduce in the next section.

4. Effective relaxations of spectral radius

The previous section showed quasi-convexity of the idealized risk. However, several steps are still missing towards finite sample guarantees for stochastic gradient descent. In particular, we will need to control the variance of the stochastic gradient at any system encountered during training. For this purpose, we now formally introduce our main assumption and show that it serves as an effective relaxation of spectral radius. The results below will be used for proving convergence of stochastic gradient descent in Section 5.
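Before proceeding, the unrolled expression for $y_t$ used in the proof of Lemma 3.5 above can be confirmed with a short simulation; the system below is a hypothetical stable example in controllable canonical form (noiseless, $\xi_t = 0$):

```python
import numpy as np

# Check that the state-space recursion h_{t+1} = A h_t + B x_t,
# y_t = C h_t + D x_t unrolls to the convolution form
# y_t = D x_t + sum_{k<t} C A^{t-k-1} B x_k + C A^{t-1} h_0.
rng = np.random.default_rng(0)
n, T = 3, 20
a = np.array([0.5, 0.2, 0.1])          # p(z) = z^3 + 0.5 z^2 + 0.2 z + 0.1
A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -a[::-1]
B = np.zeros(n); B[-1] = 1.0           # B = e_n (controllable canonical form)
C = np.array([1.0, -0.3, 0.2]); D = 0.7
h0 = rng.standard_normal(n)
x = rng.standard_normal(T + 1)         # use x[1..T] for 1-based indexing

y_rec = np.zeros(T + 1)
h = h0.copy()
for t in range(1, T + 1):              # state-space recursion
    y_rec[t] = C @ h + D * x[t]
    h = A @ h + B * x[t]

mp = np.linalg.matrix_power
y_conv = np.zeros(T + 1)
for t in range(1, T + 1):              # unrolled convolution formula
    y_conv[t] = (D * x[t]
                 + sum(C @ mp(A, t - k - 1) @ B * x[k] for k in range(1, t))
                 + C @ mp(A, t - 1) @ h0)
```

The two computations agree exactly, which is the expansion the proof takes expectations over.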
Consider the following convex region $\mathcal{C}$ in the complex plane:

$$\mathcal{C} = \{\, z : \Re z \ge (1+\tau_0)\,|\Im z| \,\} \cap \{\, z : \tau_1 < \Re z < \tau_2 \,\}, \qquad (4.1)$$

where $\tau_0, \tau_1, \tau_2 > 0$ are treated as fixed constants throughout the paper. Our bounds will have polynomial dependency on these parameters. Pictorially, this convex set is essentially the dark area in Figure 1 (with the corner chopped off). This set in $\mathbb{C}$ induces a convex set in the parameter space that is contained in the set of transition matrices with spectral radius at most $\alpha$.

Definition 4.1 We say a polynomial $p(z)$ is $\alpha$-acquiescent if $\{\,p(z)/z^n : |z| = \alpha\,\} \subseteq \mathcal{C}$. A linear system with transfer function $G(z) = s(z)/p(z)$ is $\alpha$-acquiescent if its denominator $p(z)$ is.

The set of coefficient vectors $a \in \mathbb{R}^n$ defining acquiescent systems forms a convex set. Formally, for a positive $\alpha > 0$, define the convex set $B_\alpha \subseteq \mathbb{R}^n$ as

$$B_\alpha = \big\{\, a \in \mathbb{R}^n : \{\,p_a(z)/z^n : |z| = \alpha\,\} \subseteq \mathcal{C} \,\big\}. \qquad (4.2)$$

We note that definition (4.2) is equivalent to the definition

$$B_\alpha = \big\{\, a \in \mathbb{R}^n : \{\,z^n p_a(1/z) : |z| = 1/\alpha\,\} \subseteq \mathcal{C} \,\big\},$$

which is the version we used in the introduction for simplicity. Indeed, we can verify the convexity of $B_\alpha$ directly from the definition and the convexity of $\mathcal{C}$: $a, b \in B_\alpha$ implies $p_a(z)/z^n,\ p_b(z)/z^n \in \mathcal{C}$, and therefore $p_{(a+b)/2}(z)/z^n = \frac{1}{2}\big(p_a(z)/z^n + p_b(z)/z^n\big) \in \mathcal{C}$. We also note that the parameter $\alpha$ in the definition of acquiescence corresponds to the spectral radius of the companion matrix. In particular, an acquiescent system is stable for $\alpha < 1$.

Lemma 4.2 Suppose $a \in B_\alpha$. Then the roots of the polynomial $p_a(z)$ have magnitude bounded by $\alpha$, and therefore the controllable canonical form $A = CC(a)$ defined by $a$ has spectral radius $\rho(A) < \alpha$.

Proof Define the holomorphic functions $f(z) = z^n$ and $g(z) = p_a(z) = z^n + a_1 z^{n-1} + \cdots + a_n$. We apply the symmetric form of Rouché's theorem Estermann (1962) on the circle $K = \{z : |z| = \alpha\}$. For any point $z$ on $K$, we have $|f(z)| = \alpha^n$ and $|f(z) - g(z)| = \alpha^n\,|1 - p_a(z)/z^n|$. Since $a \in B_\alpha$, we have $p_a(z)/z^n \in \mathcal{C}$ for any $z$ with $|z| = \alpha$.
Observe that for any $c \in \mathcal{C}$ we have $|1 - c| < 1 + |c|$ (the inequality is strict because $\mathcal{C}$ excludes the negative real axis), and therefore

$$|f(z) - g(z)| = \alpha^n\,|1 - p_a(z)/z^n| < \alpha^n\big(1 + |p_a(z)|/|z^n|\big) = |f(z)| + |p_a(z)| = |f(z)| + |g(z)|.$$

Hence, by Rouché's theorem, $f$ and $g$ have the same number of roots inside the circle $K$. The function $f = z^n$ has exactly $n$ roots inside $K$, and therefore $g$ has all of its $n$ roots inside $K$.

The following lemma establishes that $B_\alpha$ is a monotone family of sets in $\alpha$. The proof follows from the maximum modulus principle applied to the harmonic functions $\Re(z^n p(1/z))$ and $\Im(z^n p(1/z))$; we defer the short proof to Section C.1. We remark that there are larger convex sets than $B_\alpha$ that ensure a bounded spectral radius. However, in order to also guarantee monotonicity and the no-blow-up property below, we restrict our attention to $B_\alpha$.

Lemma 4.3 (Monotonicity of $B_\alpha$) For any $0 < \alpha < \beta$, we have $B_\alpha \subseteq B_\beta$.

Our next lemma entails that acquiescent systems have well-behaved impulse responses.

Lemma 4.4 (No blow-up property) Suppose $a \in B_\alpha$ for some $\alpha \le 1$. Then the companion matrix $A = CC(a)$ satisfies

$$\sum_{k=0}^{\infty} \alpha^{-2k}\,\|A^k B\|^2 \le 2\pi n\,\alpha^{-2n}/\tau_1^2. \qquad (4.3)$$

Moreover, for any $k \ge 0$,

$$\|A^k B\|^2 \le \min\big\{\,2\pi n/\tau_1^2,\; 2\pi n\,\alpha^{2k-2n}/\tau_1^2\,\big\}.$$

Proof [Proof of Lemma 4.4] Let $f_\lambda = \sum_{k=0}^{\infty} e^{i\lambda k}\alpha^{-k} A^k B$ be the Fourier transform of the series $\{\alpha^{-k} A^k B\}$. Then, using Parseval's theorem, we have

$$\sum_{k=0}^{\infty} \alpha^{-2k}\,\|A^k B\|^2 = \frac{1}{2\pi}\int_0^{2\pi} \|f_\lambda\|^2\, d\lambda = \frac{1}{2\pi}\int_0^{2\pi} \big\|(I - \alpha^{-1}e^{i\lambda}A)^{-1}B\big\|^2\, d\lambda \le \int_0^{2\pi} \frac{n}{|p_a(\alpha e^{-i\lambda})|^2}\, d\lambda, \qquad (4.4)$$

where in the last step we used the fact that $(I - wA)^{-1}B = \frac{1}{p_a(w^{-1})}\,[w^{-1}, w^{-2}, \ldots, w^{-n}]^\top$ (see Lemma B.1) together with $\alpha \le 1$. Since $a \in B_\alpha$, we have $|q_a(\alpha^{-1}e^{i\lambda})| \ge \tau_1$ for $q_a(z) = z^n p_a(1/z)$, and therefore $p_a(\alpha e^{-i\lambda}) = \alpha^n e^{-in\lambda} q_a(\alpha^{-1}e^{i\lambda})$ has magnitude at least $\tau_1\alpha^n$. Plugging this into equation (4.4), we conclude that

$$\sum_{k=0}^{\infty} \alpha^{-2k}\,\|A^k B\|^2 \le \int_0^{2\pi} \frac{n}{|p_a(\alpha e^{-i\lambda})|^2}\, d\lambda \le 2\pi n\,\alpha^{-2n}/\tau_1^2.$$

Finally, we establish the bound on $\|A^k B\|^2$. By Lemma 4.3 we have $B_\alpha \subseteq B_1$ for $\alpha \le 1$, so we can take $\alpha = 1$ in equation (4.3) and it still holds.
That is, we have $\sum_{k=0}^{\infty}\|A^kB\|^2 \le 2\pi n/\tau_1^2$, which in particular implies $\|A^kB\|^2 \le 2\pi n/\tau_1^2$ for every $k \ge 0$.

4.1 Efficiently computing the projection

Our algorithm requires a projection onto $B_\alpha$. However, the only requirement of the projection step is that it projects onto a set contained in $B_\alpha$ that also contains the true linear system, so a variety of subroutines can be used to compute this projection or an approximation of it. First, the explicit projection onto $B_\alpha$ is representable by a semidefinite program, because each of the three constraints can be checked by testing whether a trigonometric polynomial is non-negative. A simple inner approximation can be constructed by requiring the constraints to hold on a finite grid of size $O(n)$. One can check that this provides a tight polyhedral approximation to the set $B_\alpha$, following an argument similar to Appendix C of Bhaskar et al. (2013). Projection onto this polytope takes at most $O(n^{3.5})$ time by linear programming and can potentially be made faster using the fast Fourier transform. See Section F for a more detailed discussion of why projection onto a polytope suffices. Furthermore, we can sometimes replace the constraint by an $\ell_1$- or $\ell_2$-constraint if we know that the system satisfies the corresponding assumption. Removing the projection step entirely is an interesting open problem.

5. Learning acquiescent systems

Next we show that we can learn acquiescent systems.

Theorem 5.1 Suppose the true system $\Theta$ is $\alpha$-acquiescent and satisfies $\|C\| \le 1$. Then with $N$ samples of length $T \ge \Omega(n + 1/(1-\alpha))$, stochastic gradient descent (Algorithm 1) with projection set $B_\alpha$ returns parameters $\hat\Theta = (\hat A, \hat B, \hat C, \hat D)$ with population risk

$$\mathbb{E}\, f(\hat\Theta) \le f(\Theta) + O\!\left(\frac{n^2}{N} + \sqrt{\frac{n^5 + \sigma^2 n^3}{TN}}\right), \qquad (5.1)$$

where the $O(\cdot)$-notation hides polynomial dependencies on $1/(1-\alpha)$, $1/\tau_0$, $1/\tau_1$, $\tau_2$, and $R = \|a\|$. The expectation is taken over the randomness of the algorithm and the examples.

Algorithm 1 Projected stochastic gradient descent with partial loss

For $i = 0$ to $N$:

1.
Take a fresh sample $((x_1, \ldots, x_T), (y_1, \ldots, y_T))$. Let $\bar y_t$ be the simulated outputs³ of the system $\hat\Theta$ on inputs $x$ with initial state $h_0 = 0$.

2. Let $T_1 = T/4$. Run stochastic gradient descent⁴ on the loss function $\ell((x, y), \hat\Theta) = \frac{1}{T - T_1}\sum_{t > T_1} \|\bar y_t - y_t\|^2$. Concretely, letting $G_A = \frac{\partial \ell}{\partial \hat a}$, $G_C = \frac{\partial \ell}{\partial \hat C}$, and $G_D = \frac{\partial \ell}{\partial \hat D}$, we update

$$[\hat a, \hat C, \hat D] \leftarrow [\hat a, \hat C, \hat D] - \eta\,[G_A, G_C, G_D].$$

3. Project $\hat\Theta = (\hat a, \hat C, \hat D)$ onto the set $B_\alpha \times \mathbb{R}^n \times \mathbb{R}$.

Recall that $T$ is the length of each sequence and $N$ is the number of samples. The first term in the bound (5.1) comes from the smoothness of the population risk, and the second comes from the variance of the gradient estimator of the population risk (described in detail below). An important (but not surprising) feature is that the variance scales as $1/T$; therefore, for long sequences we in fact get a $1/N$ convergence rate instead of $1/\sqrt{N}$ (for relatively small $N$).

Computational complexity: Step 2 of each iteration takes $O(Tn)$ arithmetic operations, and the projection step takes $O(n^{3.5})$ time to solve a linear program. The projection step is unlikely to be required in practice and may be an artifact of our analysis.

We can further balance the variance of the estimator against the number of samples by breaking each long sequence of length $T$ into $\Theta(T/n)$ short sequences of length $\Theta(n)$, and then running Algorithm 1 on these $TN/n$ shorter sequences. This leads to the following bound, which gives the right dependency on $T$ and $N$: $TN$ should be counted as the true number of samples for the sequence-to-sequence model.

3. Note that $\bar y_t$ is different from $\hat y_t$ defined in equation (1.2), which is used to define the population risk: here $\bar y_t$ is obtained from the (wrong) initial state $h_0 = 0$, while $\hat y_t$ is obtained from the correct initial state.

4. See Algorithm Box 3 for a detailed back-propagation algorithm that computes the gradient.
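A minimal numpy sketch of Algorithm 1, under two simplifications that are ours rather than the paper's: gradients are estimated by finite differences instead of back-propagation (Algorithm 3), and the projection onto $B_\alpha$ is replaced by a crude norm clip on $\hat a$ that merely keeps the iterates stable. The true 2-state system below is a hypothetical example:

```python
import numpy as np

# Sketch of Algorithm 1: projected SGD with partial loss for a
# hypothetical 2-state SISO system. For n = 2, ||a||_2 <= 0.5 implies
# stability, so the norm clip is a crude stand-in for projection onto
# B_alpha (the real method uses a linear program; see Section 4.1).
rng = np.random.default_rng(1)
n, T, T1 = 2, 60, 15                   # T1 = T/4 as in step 2
a_true, C_true, D_true = np.array([0.3, 0.1]), np.array([1.0, 0.5]), 0.5

def simulate(a, Cv, D, x, h0):
    A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -a[::-1]
    h, ys = h0.copy(), np.zeros(len(x))
    for t, xt in enumerate(x):
        ys[t] = Cv @ h + D * xt
        h = A @ h + np.eye(n)[-1] * xt          # B = e_n
    return ys

def partial_loss(theta, x, y):
    # simulated outputs (wrongly) assume h0 = 0; first T1 errors ignored
    ybar = simulate(theta[:n], theta[n:2 * n], theta[-1], x, np.zeros(n))
    return np.mean((ybar[T1:] - y[T1:]) ** 2)

theta, eta, losses = np.zeros(2 * n + 1), 0.05, []
for it in range(200):
    x = rng.standard_normal(T)                  # fresh sample each iteration
    y = simulate(a_true, C_true, D_true, x, rng.standard_normal(n))
    base = partial_loss(theta, x, y)
    grad = np.zeros_like(theta)
    for i in range(len(theta)):                 # forward differences
        e = np.zeros_like(theta); e[i] = 1e-5
        grad[i] = (partial_loss(theta + e, x, y) - base) / 1e-5
    theta -= eta * grad
    na = np.linalg.norm(theta[:n])              # stand-in projection step
    if na > 0.5:
        theta[:n] *= 0.5 / na
    losses.append(base)
```

On this noiseless example, the partial loss decreases steadily even though the objective is non-convex in $\hat a$, which is the qualitative content of Theorem 5.1.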
Corollary 5.2 Under the assumptions of Theorem 5.1, Algorithm 2 returns parameters $\hat\Theta$ with population risk

$$\mathbb{E}\, f(\hat\Theta) \le f(\Theta) + O\!\left(\frac{n^3}{TN} + \sqrt{\frac{n^5 + \sigma^2 n^3}{TN}}\right),$$

where the $O(\cdot)$-notation hides polynomial dependencies on $1/(1-\alpha)$, $1/\tau_0$, $1/\tau_1$, $\tau_2$, and $R = \|a\|$.

Algorithm 2 Projected stochastic gradient descent for long sequences

Input: $N$ sample sequences of length $T$
Output: learned system $\hat\Theta$

1. Divide each sample of length $T$ into $T/(\beta n)$ samples of length $\beta n$, where $\beta$ is a large enough constant. Then run Algorithm 1 with the new samples and obtain $\hat\Theta$.

We remark that the gradient computation takes time linear in $Tn$, since one can use the chain rule (that is, back-propagation) to compute the gradient efficiently; for completeness, Algorithm 3 gives a detailed implementation.

Finally, and importantly, we remark that although we defined the population risk as the expected error with respect to sequences of length $T$, our error bound in fact generalizes to any longer (or shorter) sequences of length $T' \ge \max\{n, 1/(1-\alpha)\}$. By the explicit formula for $f(\hat\Theta)$ (Lemma 3.5) and the fact that $C A^k B$ decays exponentially for $k \ge n$ (Lemma 4.4), we can bound the population risk on sequences of different lengths. Concretely, letting $f_{T'}(\hat\Theta)$ denote the population risk on sequences of length $T'$, we have for all $T' \ge \max\{n, 1/(1-\alpha)\}$,

$$f_{T'}(\hat\Theta) \le 1.1\, f(\hat\Theta) + \exp\big(-\Omega((1-\alpha)\min\{T, T'\})\big)\cdot O(1),$$

where $O(1)$ hides polynomial dependencies on the same parameters as in Theorem 5.1. We note that generalization to longer sequences deserves attention: in practice, it is usually difficult to train non-linear recurrent networks that generalize to sequences longer than the training data.

We could hope to achieve linear convergence by showing that the empirical risk also satisfies weak quasi-convexity. One could then re-use samples and apply stronger optimization tools (such as SVRG) to achieve linear convergence. This is beyond the scope of this paper and left to future work.
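The splitting step of Algorithm 2 is simple enough to state concretely; $\beta = 5$ below is an arbitrary illustrative choice, since the paper only requires a large enough constant:

```python
import numpy as np

# The sequence-splitting step of Algorithm 2: each length-T sample is
# cut into T/(beta*n) short samples of length beta*n. Any trailing
# remainder shorter than beta*n is discarded in this sketch.
def split_sample(x, y, n, beta=5):
    L = beta * n                      # length of each short sequence
    m = len(x) // L                   # number of short sequences produced
    return [(x[i * L:(i + 1) * L], y[i * L:(i + 1) * L]) for i in range(m)]

T, n = 1000, 4
x = np.arange(T, dtype=float)         # toy input sequence
y = 2.0 * x                           # toy output sequence
chunks = split_sample(x, y, n)
```

Each of the resulting $TN/(\beta n)$ short samples is then fed to Algorithm 1, whose partial loss already discards the initial outputs where the unknown per-chunk initial state matters.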
Our proof of Theorem 5.1 consists of three parts: a) showing that the idealized risk is weakly quasi-convex on the convex set $B_\alpha$ (Lemma 5.3); b) designing an (almost) unbiased estimator of the gradient of the idealized risk (Lemma 5.4); and c) bounding the variance of the gradient estimator (Lemma 5.5).

First of all, using the theory developed in Section 3 (Lemma 3.3 and Lemma 3.4), it is straightforward to verify that on the convex set $B_\alpha \times \mathbb{R}^n$ the idealized risk is both weakly quasi-convex and weakly smooth.

Lemma 5.3 Under the conditions of Theorem 5.1, the idealized risk (3.2) is $\tau$-weakly-quasi-convex on the convex set $B_\alpha \times \mathbb{R}^n$ and $\Gamma$-weakly smooth, where $\tau = \Omega(\tau_0\tau_1/\tau_2)$ and $\Gamma = O(n^2/\tau_1^4)$.

Proof [Proof of Lemma 5.3] It suffices to show that for all $\hat a, a \in B_\alpha$ we have $\hat a \in N_\tau(a)$ with $\tau = \Omega(\tau_0\tau_1/\tau_2)$. Indeed, by the monotonicity of the family of sets $B_\alpha$ (Lemma 4.3), we have $\hat a, a \in B_1$, which by definition means that for every $z$ on the unit circle, $p_a(z)/z^n,\, p_{\hat a}(z)/z^n \in \mathcal{C}$. By the definition of $\mathcal{C}$, for any points $w, \hat w \in \mathcal{C}$, the angle $\phi$ between $w$ and $\hat w$ is at most $\pi/2 - \Omega(\tau_0)$, and the ratio of their magnitudes satisfies $|w|/|\hat w| \ge \tau_1/\tau_2$, which implies $\Re(w/\hat w) = (|w|/|\hat w|)\cos(\phi) \ge \Omega(\tau_0\tau_1/\tau_2)$. Therefore $\Re(p_a(z)/p_{\hat a}(z)) \ge \Omega(\tau_0\tau_1/\tau_2)$, and we conclude that $\hat a \in N_\tau(a)$. The smoothness bound was established in Lemma 3.4.

Towards designing an unbiased estimator of the gradient, we note a small caveat that prevents us from simply using the gradient of the empirical risk, as is commonly done for other (static) problems. Recall that the population risk is defined as the expected risk with a known initial state $h_0$, whereas during training we do not have access to the initial states; with the naive approach we could not even estimate the population risk from samples without knowing them. We argue that being able to handle missing initial states is in fact desirable: in most interesting applications, $h_0$ is unknown (or even to be learned).
Moreover, the ability to handle an unknown $h_0$ allows us to break a long sequence into shorter sequences, which is what yields Corollary 5.2. The difficulty here is essentially that we have a supervised learning problem with missing data $h_0$. We get around it by simply ignoring the first $T_1 = \Omega(T)$ outputs of the system and setting the corresponding errors to 0. Since the influence of $h_0$ on any output later than time $T_1 \ge \max\{n, 1/(1-\alpha)\}$ is inverse-exponentially small, we can safely assume $h_0 = 0$ when the errors before time $T_1$ are not taken into account. This small trick also makes our algorithm suitable for settings where the early outputs are simply not observed. This is an interesting setting in its own right: in many sequence-to-sequence models Sutskever et al. (2014), there is no output during the first fraction of the iterations (although, of course, those models contain non-linear operations that we cannot handle).

The proof of the correctness of the estimator is almost trivial and is deferred to Section C.

Lemma 5.4 Under the assumptions of Theorem 5.1, suppose $\hat a, a \in B_\alpha$. Then in Algorithm 1, at each iteration, $G_A, G_C$ are unbiased estimators of the gradient of the idealized risk (3.2) up to negligible error, in the sense that

$$\mathbb{E}\,[G_A, G_C] = \frac{\partial g}{\partial [\hat a, \hat C]} \pm \exp\big(-\Omega((1-\alpha)T)\big).$$

Finally, we control the variance of the gradient estimator.

Lemma 5.5 The (almost) unbiased estimator $(G_A, G_C)$ of the gradient of $g(\hat A, \hat C)$ has variance bounded by

$$\mathrm{Var}[G_A] + \mathrm{Var}[G_C] \le O\!\left(\frac{n^3\Lambda^2/\tau_1^6 + \sigma^2 n^2\Lambda/\tau_1^4}{T}\right), \qquad \text{where } \Lambda = O\Big(\max\Big\{n,\ \frac{1}{1-\alpha}\log\frac{1}{1-\alpha}\Big\}\Big).$$

Note that Lemma 5.5 does not directly follow from the $\Gamma$-weak-smoothness of the population risk, since it is not clear whether the loss function $\ell((x, y), \hat\Theta)$ is also $\Gamma$-smooth for every sample. Moreover, even if it were, smoothness would only give a variance bound as small as $\Gamma^2$, while the true variance scales linearly in $1/T$.
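The effect of discarding the first $T_1$ outputs can be seen directly: two runs of the same stable system on identical inputs but different initial states agree up to exponentially small error after the first few steps. A hypothetical example:

```python
import numpy as np

# Two runs of the same stable system (rho(A) well below 1) differ only
# through the term C A^{t-1} h0, which decays geometrically; hence
# outputs after time T1 are essentially independent of h0.
rng = np.random.default_rng(2)
n, T, T1 = 3, 40, 10
a = np.array([0.1, 0.05, 0.02])        # all roots well inside the unit circle
A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -a[::-1]
B = np.zeros(n); B[-1] = 1.0
C = np.array([1.0, -0.3, 0.2]); D = 0.7

def run(x, h0):
    h, ys = h0.copy(), np.zeros(len(x))
    for t, xt in enumerate(x):
        ys[t] = C @ h + D * xt
        h = A @ h + B * xt
    return ys

x = rng.standard_normal(T)
y1 = run(x, np.array([5.0, -3.0, 2.0]))   # large, "unknown" initial state
y2 = run(x, np.zeros(n))                  # the h0 = 0 assumption
early_gap = np.max(np.abs(y1[:T1] - y2[:T1]))
late_gap = np.max(np.abs(y1[T1:] - y2[T1:]))
```

The early outputs differ substantially while the outputs after $T_1$ are numerically indistinguishable, which is why setting the first $T_1$ errors to zero is harmless.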
The discrepancy comes from the fact that smoothness implies an upper bound on the expected squared norm of the gradient, which equals the variance plus the squared norm of the expectation. Although for many other problems the variance is on the same order as the squared mean, for our sequence-to-sequence model the variance actually decreases with the length of the data, so the variance bound obtained from smoothness is pessimistic. Instead, we bound the variance directly. The calculation is tedious but simple in spirit: we mainly need Lemma 4.4 to control the various sums that show up when calculating the expectation. The only tricky part is obtaining the $1/T$ dependency, which corresponds to the cancellation of the contributions of the cross terms. In the proof, we write out the variance as a (complicated) function of $\hat A, \hat C$ consisting of sums of terms involving $(\hat C \hat A^k B - C A^k B)$ and $\hat A^k B$, and we control these sums using Lemma 4.4. The proof is deferred to Section C.

Finally, we are ready to prove Theorem 5.1. We essentially combine Lemma 5.3, Lemma 5.4, and Lemma 5.5 with the generic convergence result, Proposition 2.3. This gives low error in the idealized risk, which we then relate to the population risk.

Proof [Proof of Theorem 5.1] We consider $g'(\hat A, \hat C, \hat D) = (\hat D - D)^2 + g(\hat A, \hat C)$, an extended version of the idealized risk that takes the contribution of $\hat D$ into account. By Lemma 5.4, Algorithm 1 computes $G_A, G_C$, which are almost unbiased estimators of the gradients of $g'$ up to negligible error $\exp(-\Omega((1-\alpha)T))$, and by Lemma C.2, $G_D$ is an unbiased estimator of the gradient of $g'$ with respect to $\hat D$. Moreover, by Lemma 5.5, these estimators have total variance $V = O\big((n^5 + \sigma^2 n^3)/T\big)$, where $O(\cdot)$ hides dependencies on $\tau_1$ and $(1-\alpha)$. Applying Proposition 2.3 (which only requires an unbiased estimator of the gradient of $g'$), we obtain that after $N$ iterations we converge to a point with

$$g'(\hat a, \hat C, \hat D) \le O\!\left(\frac{n^2}{N} + \sqrt{\frac{n^5 + \sigma^2 n^3}{TN}}\right).$$
Then, by Lemma 3.5, we have

$$f(\hat\Theta) \le g'(\hat a, \hat C, \hat D) + \sigma^2 = g'(\hat a, \hat C, \hat D) + f(\Theta) \le O\!\left(\frac{n^2}{N} + \sqrt{\frac{n^5 + \sigma^2 n^3}{TN}}\right) + f(\Theta),$$

which completes the proof.

6. The power of improper learning

We observe an interesting and important fact about the theory in Section 5: it relies solely on a condition on the characteristic polynomial $p(z)$. This suggests that the geometry of the training objective depends mostly on the denominator of the transfer function, even though the system is uniquely determined by the transfer function $G(z) = s(z)/p(z)$. This might seem to be an undesirable discrepancy between the behavior of the system and our analysis of the optimization problem. However, we can actually exploit this discrepancy to design improper learning algorithms that succeed under much weaker assumptions.

We rely on the following simple observation about the invariance of a system $G(z) = \frac{s(z)}{p(z)}$. For an arbitrary polynomial $u(z)$ with leading coefficient 1, we can write $G(z)$ as

$$G(z) = \frac{s(z)u(z)}{p(z)u(z)} = \frac{\tilde s(z)}{\tilde p(z)},$$

where $\tilde s = su$ and $\tilde p = pu$. The system $\tilde s(z)/\tilde p(z)$ therefore has behavior identical to $G$. Although this is a redundant representation of $G(z)$, it should be counted as an acceptable solution: after all, learning the minimum representation⁵ of a linear system is impossible in general, as we will see from an example in Section 6.1.

While it does not change the behavior of the system, the extension from $p(z)$ to $\tilde p(z)$ does affect the geometry of the optimization problem. In particular, if $\tilde p(z)$ is an $\alpha$-acquiescent characteristic polynomial as defined in Definition 4.1, then we can find it simply using stochastic gradient descent, as shown in Section 5. Observe that we do not require knowledge of $u(z)$, only its existence. Denoting by $d$ the degree of $u$, the algorithm is simply stochastic gradient descent with $n + d$ model parameters instead of $n$. Our discussion motivates the following definition.
Definition 6.1 A polynomial $p(z)$ of degree $n$ is $\alpha$-acquiescent by extension of degree $d$ if there exists a polynomial $u(z)$ of degree $d$ with leading coefficient 1 such that $p(z)u(z)$ is $\alpha$-acquiescent.

For a transfer function $G(z)$, we define its $H_2$ norm by

$$\|G\|_{H_2}^2 = \frac{1}{2\pi}\int_0^{2\pi} |G(e^{i\theta})|^2\, d\theta.$$

We assume (without loss of generality) that the true transfer function $G(z)$ has bounded $H_2$ norm, that is, $\|G\|_{H_2} \le 1$. This can be achieved by rescaling⁶ the matrix $C$.

Theorem 6.2 Suppose the true system has transfer function $G(z) = s(z)/p(z)$ with a characteristic polynomial $p(z)$ that is $\alpha$-acquiescent by extension of degree $d$, and $\|G\|_{H_2} \le 1$. Then projected stochastic gradient descent with $m = n + d$ states (that is, Algorithm 2 with $m$ states) returns a system $\hat\Theta$ with population risk

$$\mathbb{E}\, f(\hat\Theta) \le f(\Theta) + O\!\left(\sqrt{\frac{m^5 + \sigma^2 m^3}{TN}}\right),$$

where the $O(\cdot)$-notation hides polynomial dependencies on $\tau_0, \tau_1, \tau_2$, and $1/(1-\alpha)$.

The theorem follows directly from Corollary 5.2 (with some additional care about the scaling).

Proof [Proof of Theorem 6.2] Let $\tilde p(z) = p(z)u(z)$ be the acquiescent extension of $p(z)$. Since $\tau_2 \gtrsim |u(z)p(z)| = |\tilde p(z)| \gtrsim \tau_1$ on the unit circle, we have $|\tilde s(z)| = |s(z)|\,|u(z)| = |s(z)|\cdot O_\tau(1/|p(z)|)$. Therefore $\tilde s(z)$ satisfies $\|\tilde s\|_{H_2} = O_\tau(\|s(z)/p(z)\|_{H_2}) = O_\tau(\|G(z)\|_{H_2}) \le O_\tau(1)$. This means that the vector $\tilde C$ determining the coefficients of $\tilde s$ satisfies $\|\tilde C\| \le O_\tau(1)$, since for a polynomial $h(z) = b_0 + \cdots + b_{n-1}z^{n-1}$ we have $\|h\|_{H_2} = \|b\|$. Therefore we can apply Corollary 5.2 to complete the proof.

5. The minimum representation of a transfer function $G(z)$ is defined as the representation $G(z) = s(z)/p(z)$ with $p(z)$ having minimal degree.

6. In fact, this is a natural scaling that makes comparing errors easier. Recall that the population risk is essentially $\|\hat G - G\|_{H_2}^2$, so rescaling $C$ so that $\|G\|_{H_2} = 1$ means that any error $\ll 1$ constitutes non-trivial performance.
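The invariance underlying this section can be checked numerically: multiplying numerator and denominator by the same monic $u(z)$ leaves the transfer function unchanged on the unit circle. The polynomials below are hypothetical examples (note numpy's highest-degree-first coefficient convention):

```python
import numpy as np

# G(z) = s(z)/p(z) and its degree-(n+1) extension s(z)u(z)/(p(z)u(z))
# agree everywhere both are defined; in particular on the unit circle.
s = np.array([0.2, -0.3, 1.0])        # s(z) = 0.2 z^2 - 0.3 z + 1
p = np.array([1.0, 0.5, 0.2, 0.1])    # p(z) = z^3 + 0.5 z^2 + 0.2 z + 0.1
u = np.array([1.0, -0.4])             # u(z) = z - 0.4, leading coefficient 1
s_ext, p_ext = np.polymul(s, u), np.polymul(p, u)

zs = np.exp(1j * np.linspace(0, 2 * np.pi, 9))
G = np.polyval(s, zs) / np.polyval(p, zs)
G_ext = np.polyval(s_ext, zs) / np.polyval(p_ext, zs)
max_gap = np.max(np.abs(G - G_ext))
```

The improper learner exploits exactly this redundancy: it searches over the larger, better-conditioned parameterization $(\tilde s, \tilde p)$ without ever needing to know $u$.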
In the rest of this section, we discuss in Section 6.1 the instability of the minimum representation, and in Section 6.2 we show several examples where the characteristic polynomial $p(z)$ is not $\alpha$-acquiescent but is $\alpha$-acquiescent by extension with small degree $d$. As a final remark, the examples in the following subsections may be far from optimally analyzed; it is beyond the scope of this paper to determine the optimal conditions under which $p(z)$ is acquiescent by extension.

6.1 Instability of the minimum representation

We begin by constructing a contrived example in which the minimum representation of $G(z)$ is highly unstable, so that one cannot hope to recover the minimum representation of $G(z)$. Consider

$$G(z) = \frac{s(z)}{p(z)} := \frac{z^n - 0.8^n}{(z - 0.1)(z^n - 0.9^n)} \qquad \text{and} \qquad G'(z) = \frac{s'(z)}{p'(z)} := \frac{1}{z - 0.1}.$$

These are clearly the minimum representations of $G(z)$ and $G'(z)$, and both satisfy acquiescence. On the one hand, the characteristic polynomials $p(z)$ and $p'(z)$ are very different. On the other hand, the transfer functions $G(z)$ and $G'(z)$ take almost the same values on the unit circle, up to exponentially small error:

$$|G(z) - G'(z)| = \left|\frac{0.8^n - 0.9^n}{(z - 0.1)(z^n - 0.9^n)}\right| \le \exp(-\Omega(n)).$$

Moreover, the transfer functions $G(z)$ and $G'(z)$ are of order $\Theta(1)$ on the unit circle. Together these facts imply that from an (inverse-polynomially accurate) approximation of the transfer function $G(z)$, we cannot hope to recover the minimum representation in any sense, even if the minimum representation satisfies acquiescence.

6.2 Power of improper learning in various cases

We illustrate the use of improper learning through several examples below.

6.2.1 Example: artificial construction

We first consider a simple contrived example where improper learning helps dramatically in learning the transfer function. We will exhibit a characteristic polynomial that is not 1-acquiescent but is $(\alpha+1)/2$-acquiescent by extension of degree 3.
Let $n$ be a large enough integer and let $\alpha \in (0, 1)$ be a constant. Let $J = \{1, n-1, n\}$ and $\omega = e^{2\pi i/n}$, and define $p(z) = z^3 \prod_{j \in [n],\, j \notin J} (z - \alpha\omega^j)$. Then we have

$$p(z)/z^n = \prod_{j \in [n],\, j \notin J} \big(1 - \alpha\omega^j/z\big) = \frac{1 - \alpha^n/z^n}{(1 - \alpha\omega/z)(1 - \alpha\omega^{-1}/z)(1 - \alpha/z)}. \qquad (6.1)$$

Taking $z = e^{i\pi/2}$, we see that $p(z)/z^n$ has argument (phase) of magnitude roughly $3\pi/4$ for $\alpha$ close to 1, and therefore it is not in $\mathcal{C}$, which implies that $p(z)$ is not 1-acquiescent. On the other hand, picking $u(z) = (z - \alpha\omega)(z - \alpha)(z - \alpha\omega^{-1})$ as the helper polynomial, from equation (6.1) we have that $p(z)u(z)/z^{n+3} = 1 - \alpha^n/z^n$ takes values inverse-exponentially close to 1 on the circle of radius $(\alpha+1)/2$. Therefore $p(z)u(z)$ is $(\alpha+1)/2$-acquiescent.

6.2.2 Example: characteristic polynomial with separated roots

A characteristic polynomial with well-separated roots is acquiescent by extension. Our bound depends on the following quantity of $p$, which characterizes the separation of the roots.

Definition 6.3 For a polynomial $h(z)$ of degree $n$ with roots $\lambda_1, \ldots, \lambda_n$ inside the unit circle, define the quantity $\Gamma(\cdot)$ of the polynomial $h$ as

$$\Gamma(h) = \max_{j \in [n]} \frac{|\lambda_j|^n}{\prod_{i \ne j} |\lambda_i - \lambda_j|}.$$

Lemma 6.4 Suppose $p(z)$ is a polynomial of degree $n$ with distinct roots inside the circle of radius $\alpha$. Let $\Gamma = \Gamma(p)$. Then $p(z)$ is $\alpha$-acquiescent by extension of degree $d = O\big(\max\{(1-\alpha)^{-1}\log(n\Gamma\|p\|_{H_2}),\, 0\}\big)$.

Our main idea is to extend $p(z)$ by multiplying it by a polynomial $u$ that approximates $p^{-1}$ (in a relatively weak sense), so that $pu$ always takes values in the set $\mathcal{C}$. We believe the following lemma should be known, though for completeness we provide a proof in Section D.

Lemma 6.5 (Approximation of the inverse of a polynomial) Suppose $p(z)$ is a polynomial of degree $n$ with leading coefficient 1 and distinct roots inside the circle of radius $\alpha$, and let $\Gamma = \Gamma(p)$. Then for $d = O\big(\max\big\{\frac{1}{1-\alpha}\log\frac{\Gamma}{(1-\alpha)\zeta},\, 0\big\}\big)$, there exists a polynomial $h(z)$ of degree $d$ with leading coefficient 1 such that for all $z$ on the unit circle,

$$\left|\frac{z^{n+d}}{p(z)} - h(z)\right| \le \zeta.$$

Proof [Proof of Lemma 6.4] Let $\gamma = 1 - \alpha$.
Using Lemma 6.5 with $\zeta = 0.5\,\|p\|_{H_\infty}^{-1}$, there exists a polynomial $u$ of degree $d = O\big(\max\big\{\frac{1}{1-\alpha}\log(\Gamma\|p\|_{H_\infty}),\, 0\big\}\big)$ such that for all $z$ on the unit circle, $\big|\frac{z^{n+d}}{p(z)} - u(z)\big| \le \zeta$. Then we have

$$\left|\frac{p(z)u(z)}{z^{n+d}} - 1\right| \le \zeta\,|p(z)| \le 0.5.$$

Therefore $p(z)u(z)/z^{n+d} \in \mathcal{C}_{\tau_0,\tau_1,\tau_2}$ for constants $\tau_0, \tau_1, \tau_2$. Finally, noting that for a degree-$n$ polynomial we have $\|h\|_{H_\infty} \le \sqrt{n}\,\|h\|_{H_2}$ completes the proof.

6.2.3 Example: characteristic polynomial with random roots

We consider the following generative model for a characteristic polynomial of degree $2n$: generate $n$ complex numbers $\lambda_1, \ldots, \lambda_n$ uniformly at random on the circle of radius $\alpha < 1$, and take $\lambda_i, \bar\lambda_i$ for $i = 1, \ldots, n$ as the roots of $p(z)$; that is, $p(z) = (z - \lambda_1)(z - \bar\lambda_1)\cdots(z - \lambda_n)(z - \bar\lambda_n)$. We show that with good probability (over the randomness of the $\lambda_i$'s), the polynomial $p(z)$ satisfies the conditions of Section 6.2.2, so that it can be learned efficiently by our improper learning algorithm.

Theorem 6.6 Suppose $p(z)$ is generated by the random process described above. Then with high probability over the choice of $p$, we have $\Gamma(p) \le \exp(\tilde O(\sqrt{n}))$ and $\|p\|_{H_2} \le \exp(\tilde O(\sqrt{n}))$. As a corollary, $p(z)$ is $\alpha$-acquiescent by extension of degree $\tilde O\big((1-\alpha)^{-1}\sqrt{n}\big)$.

Towards proving Theorem 6.6, we need the following lemma about the expected distance, in log-space, between a fixed point and a random point on a circle.

Lemma 6.7 Let $x \in \mathbb{C}$ be a fixed point with $|x| = \rho$, and let $\lambda$ be drawn uniformly on the circle of radius $r$. Then $\mathbb{E}[\ln|x - \lambda|] = \ln\max\{\rho, r\}$.

Proof Suppose first that $\rho \ne r$. Let $N$ be an integer and $\omega = e^{2i\pi/N}$. Then we have

$$\mathbb{E}[\ln|x - \lambda|] = \lim_{N\to\infty} \frac{1}{N}\sum_{k=1}^N \ln|x - r\omega^k|. \qquad (6.2)$$

The right-hand side of equation (6.2) can be computed by observing that

$$\frac{1}{N}\sum_{k=1}^N \ln|x - r\omega^k| = \frac{1}{N}\ln\left|\prod_{k=1}^N (x - r\omega^k)\right| = \frac{1}{N}\ln\big|x^N - r^N\big|.$$

Therefore, when $\rho > r$, we have

$$\lim_{N\to\infty} \frac{1}{N}\ln\big|x^N - r^N\big| = \ln\rho + \lim_{N\to\infty} \frac{1}{N}\ln\big|(x/\rho)^N - (r/\rho)^N\big| = \ln\rho.$$

On the other hand, when $\rho < r$, we have $\lim_{N\to\infty} \frac{1}{N}\ln|x^N - r^N| = \ln r$.
Therefore $\mathbb{E}[\ln|x - \lambda|] = \ln\max\{\rho, r\}$. For $\rho = r$, a similar proof (with more careful attention to regularity conditions) shows that $\mathbb{E}[\ln|x - \lambda|] = \ln r$.

Now we are ready to prove Theorem 6.6.

Proof [Proof of Theorem 6.6] Fix an index $i$ and the choice of $\lambda_i$, and consider the random variable

$$Y_i = \ln\frac{|\lambda_i|^{2n}}{\prod_{j\ne i}|\lambda_i - \lambda_j| \cdot \prod_{j}|\lambda_i - \bar\lambda_j|} = 2n\ln|\lambda_i| - \sum_{j\ne i}\ln|\lambda_i - \lambda_j| - \sum_{j}\ln|\lambda_i - \bar\lambda_j|.$$

By Lemma 6.7, each of the $2n-1$ log-distance terms has expectation $\ln\alpha$, and therefore $\mathbb{E}[Y_i] = 2n\ln\alpha - (2n-1)\ln\alpha = \ln\alpha$. Let $Z_j = \ln|\lambda_i - \lambda_j| - \ln\alpha$. Then the $Z_j$ are mean-zero random variables with $\psi_1$-Orlicz norm bounded by a constant. Therefore, by Bernstein's inequality for random variables with sub-exponential tails (for example, (Ledoux and Talagrand, 2013, Theorem 6.21)), with high probability (at least $1 - n^{-10}$) it holds that $\big|\sum_{j\ne i} Z_j\big| \le \tilde O(\sqrt{n})$, where $\tilde O$ hides logarithmic factors; the conjugate terms are handled in the same way. Therefore, with high probability, $|Y_i| \le \tilde O(\sqrt{n})$. Finally, taking a union bound over all $i \in [n]$, we obtain that with high probability $|Y_i| \le \tilde O(\sqrt{n})$ for every $i$, which implies $\Gamma(p) = \max_i \exp(Y_i) \le \exp(\tilde O(\sqrt{n}))$. With a similar technique, we can prove that $\|p\|_{H_2} \le \exp(\tilde O(\sqrt{n}))$.

6.2.4 Example: passive systems

Finally, we show that with improper learning we can learn almost all passive systems, an important class of stable linear dynamical systems, as discussed earlier. We start off with the definition of a strict-input passive system.

Definition 6.8 (Passive system, c.f. Kottenstette and Antsaklis (2010)) A SISO linear system with transfer function $G(z)$ is strict-input passive if and only if for some $\tau_0 > 0$ and any $z$ on the unit circle, $\Re(G(z)) \ge \tau_0$.

In order to learn passive systems, we need to strengthen the assumptions in the definition of strict passivity. To make this precise, we define the following subset of the complex plane: for positive constants $\tau_0, \tau_1, \tau_2$, define

$$\mathcal{C}^+_{\tau_0,\tau_1,\tau_2} = \big\{\, z \in \mathbb{C} : |z| \le \tau_2,\ \Re(z) \ge \tau_1,\ \Re(z) \ge \tau_0\,|\Im(z)| \,\big\}. \qquad (6.3)$$

We say a transfer function $G(z) = s(z)/p(z)$ is $(\tau_0, \tau_1, \tau_2)$-strict-input passive if $G(z) \in \mathcal{C}^+_{\tau_0,\tau_1,\tau_2}$ for any $z$ on the unit circle.
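The definition can be checked on a grid of the unit circle. The snippet below tests two hypothetical transfer functions: $G_1(z) = 1 + 0.5/z$, which satisfies $\Re G_1 \ge 0.5$ on the unit circle, and $G_2(z) = 1/(z-0.1)$, which leaves $\mathcal{C}^+$ (its real part is negative near $z = -1$):

```python
import numpy as np

# Grid check of (tau0, tau1, tau2)-strict-input passivity. The tau
# values and the transfer functions are illustrative choices only.
thetas = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
zs = np.exp(1j * thetas)

def in_C_plus(w, tau0, tau1, tau2):
    # membership in C+ = {|z| <= tau2, Re z >= tau1, Re z >= tau0 |Im z|}
    return (np.abs(w) <= tau2) & (w.real >= tau1) & (w.real >= tau0 * np.abs(w.imag))

G1 = 1.0 + 0.5 / zs                  # Re G1 = 1 + 0.5 cos(theta) in [0.5, 1.5]
G2 = 1.0 / (zs - 0.1)                # Re G2 < 0 near theta = pi
g1_passive = bool(np.all(in_C_plus(G1, tau0=1.0, tau1=0.4, tau2=2.0)))
g2_passive = bool(np.all(in_C_plus(G2, tau0=1.0, tau1=0.4, tau2=2.0)))
```

A grid of size $O(n)$ suffices in the analysis for the analogous acquiescence constraints (Section 4.1); the dense grid here is just for visual certainty.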
Note that for small constants $\tau_0, \tau_1$ and a large constant $\tau_2$, this essentially just says that the system is strict-input passive. We are now ready to state the main theorem of this subsection: passive systems can be learned improperly with a constant factor more states (dimensions), assuming $s(z)$ has all its roots strictly inside the unit circle and $\Gamma(s) \le \exp(O(n))$.

Theorem 6.9 Suppose $G(z) = s(z)/p(z)$ is $(\tau_0, \tau_1, \tau_2)$-strict-input passive. Moreover, suppose the roots of $s(z)$ lie inside the circle of radius $\alpha$, and $\Gamma = \Gamma(s) \le \exp(O(n))$ and $\|p\|_{H_2} \le \exp(O(n))$. Then $p(z)$ is $\alpha$-acquiescent by extension of degree $d = O_{\tau,\alpha}(n)$, and as a consequence we can learn $G(z)$ with $n + d$ states in polynomial time. Moreover, if in addition we assume that $G(z) \in \mathcal{C}_{\tau_0,\tau_1,\tau_2}$ for every $z$ on the unit circle, then $p(z)$ is $\alpha$-acquiescent by extension of degree $d = O_{\tau,\alpha}(n)$.

The proof of Theorem 6.9 is similar in spirit to that of Lemma 6.4 and is deferred to Section D.

6.3 Improper learning using linear regression

In this subsection, we show that under a stronger assumption than $\alpha$-acquiescence by extension, we can improperly learn a linear dynamical system with linear regression, up to some fixed bias. The basic idea is to fit a linear function that maps $[x_{k-\ell}, \ldots, x_k]$ to $y_k$. This is equivalent to a dynamical system with $\ell$ hidden states whose companion matrix $A$ in (1.4) is chosen with $a_\ell = a_{\ell-1} = \cdots = a_1 = 0$. In this case, the hidden states exactly memorize the previous $\ell$ inputs, and the output is a linear combination of the hidden states. Equivalently, in the frequency domain, this corresponds to fitting the transfer function $G(z) = s(z)/p(z)$ with a rational function of the form

$$\frac{c_1 + c_2 z + \cdots + c_\ell z^{\ell-1}}{z^{\ell-1}} = c_1 z^{-(\ell-1)} + \cdots + c_\ell.$$
The following is a sufficient condition on the characteristic polynomial $p(z)$ that guarantees the existence of such a fit.

Definition 6.10 A polynomial $p(z)$ of degree $n$ is extremely-acquiescent by extension of degree $d$ with bias $\varepsilon$ if there exists a polynomial $u(z)$ of degree $d$ with leading coefficient 1 such that for all $z$ on the unit circle,

$$\left|\frac{p(z)u(z)}{z^{n+d}} - 1\right| \le \varepsilon. \qquad (6.4)$$

We remark that if $p(z)$ is 1-acquiescent by extension of degree $d$, then there exists $u(z)$ such that $p(z)u(z)/z^{n+d} \in \mathcal{C}$. Equation (6.4) above is therefore a much stronger requirement than acquiescence by extension.⁷

When $p(z)$ is extremely-acquiescent, the transfer function $G(z) = s(z)/p(z)$ can be approximated by $s(z)u(z)/z^{n+d}$ up to bias $\varepsilon$. Let $\ell = n + d + 1$ and $s(z)u(z) = c_1 z^{\ell-1} + \cdots + c_\ell$. Then $G(z)$ can be approximated with bias $\varepsilon$ by the following dynamical system with $\ell$ hidden states: choose $A = CC(a)$ with $a_\ell = a_{\ell-1} = \cdots = a_1 = 0$, and $C = [c_1, \ldots, c_\ell]$. As argued previously, such a dynamical system simply memorizes the previous $\ell$ inputs, and it is therefore equivalent to linear regression from the feature vector $[x_{k-\ell}, \ldots, x_k]$ to the output $y_k$.

Proposition 6.11 (Informal) Suppose the true system $G(z) = s(z)/p(z)$ is such that $p(z)$ is extremely-acquiescent by extension of degree $d$. Then using linear regression we can learn the mapping from $[x_{k-\ell}, \ldots, x_k]$ to $y_k$ with bias $\varepsilon$ and polynomial sample complexity.

We remark that with linear regression the bias $\varepsilon$ goes to zero only as we increase the length $\ell$ of the feature window, not as we increase the number of samples. Moreover, linear regression requires a stronger assumption than the improper learning results of the previous subsections do. The latter can be viewed as an interpolation between the proper case and the regime where linear regression works.
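The regression described in this subsection is easy to carry out. In the hypothetical noiseless example below, regressing $y_k$ on the window $[x_{k-\ell}, \ldots, x_k]$ fits almost perfectly once $\ell$ exceeds the decay length of the impulse response; the residual is exactly the truncation bias:

```python
import numpy as np

# Improper learning by linear regression on an input window, for a
# hypothetical stable 2-state system (noiseless). The impulse response
# decays like 0.32^k, so a window of ell = 15 leaves negligible bias.
rng = np.random.default_rng(4)
n, ell, T = 2, 15, 2000
a, Cv, D = np.array([0.3, 0.1]), np.array([1.0, 0.5]), 0.5
A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -a[::-1]
B = np.zeros(n); B[-1] = 1.0

x = rng.standard_normal(T)
h, y = np.zeros(n), np.zeros(T)
for t in range(T):                     # simulate the true system
    y[t] = Cv @ h + D * x[t]
    h = A @ h + B * x[t]

X = np.stack([x[k - ell:k + 1] for k in range(ell, T)])   # input windows
Y = y[ell:]
w, *_ = np.linalg.lstsq(X, Y, rcond=None)                 # fit linear map
rel_resid = np.sqrt(np.mean((X @ w - Y) ** 2) / np.mean(Y ** 2))
```

The learned weights are simply the leading impulse-response coefficients; unlike the SGD-based improper learner, increasing the number of samples cannot reduce the truncation bias, only increasing $\ell$ can.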
Learning multi-input multi-output (MIMO) systems

We consider multi-input multi-output systems whose transfer functions have a common denominator p(z),

G(z) = (1/p(z)) · S(z), (7.1)

where S(z) is an ℓout × ℓin matrix with each entry being a polynomial with real coefficients of degree at most n, and p(z) = z^n + a_1 z^{n-1} + · · · + a_n. Note that here we use ℓin to denote the dimension of the inputs of the system and ℓout the dimension of the outputs. Although a special case of a general MIMO system, this class of systems still contains many interesting cases, such as the transfer functions studied in Fazel et al. (2001, 2004), where G(z) is assumed to take the form G(z) = R_0 + Σ_{i=1}^n R_i/(z − λ_i), for λ_1, . . . , λ_n ∈ C with conjugate symmetry and R_i ∈ C^{ℓout×ℓin} satisfying R_i = R̄_j whenever λ_i = λ̄_j.

In order to learn the system G(z), we parametrize p(z) by its coefficients a_1, . . . , a_n and S(z) by the coefficients of its entries. Note that each entry of S(z) depends on n + 1 real coefficients, and therefore the collection of coefficients forms a third-order tensor of dimension ℓout × ℓin × (n + 1). It will be convenient to collect the leading coefficients of the entries of S(z) into a matrix of dimension ℓout × ℓin, named D, and the rest of the coefficients into a matrix of dimension ℓout × ℓin·n, denoted by C. This will be particularly intuitive when a state-space representation is used to learn the system from samples, as discussed later.

Footnote 7: We need (1 − δ)-acquiescence by extension in the previous subsections for small δ > 0, though this is merely an additional technicality needed for the sample complexity. We ignore the difference between (1 − δ)-acquiescence and 1-acquiescence for the purposes of this subsection.

We parametrize the training transfer function Ĝ(z) by â, Ĉ, and D̂ in the same way. Let us define the risk function in the frequency domain as

g(â, Ĉ, D̂) = ∫_0^{2π} ‖G(e^{iθ}) − Ĝ(e^{iθ})‖_F² dθ. (7.2)

The following lemma is an analog of Lemma 3.3 for the MIMO case.
Its proof follows from a straightforward extension of the proof of Lemma 3.3, observing that the matrix S(z) (or Ŝ(z)) commutes with the scalars p(z) and p̂(z), and that Ŝ(z), p̂(z) are linear in â, Ĉ.

Lemma 7.1 The risk function g(â, Ĉ) defined in (7.2) is τ-weakly-quasi-convex in the domain

N_τ(a) = {â ∈ R^n : ℜ(p_a(z)/p_â(z)) ≥ τ/2 for every z ∈ C with |z| = 1} × R^{ℓin×ℓout×n}.

Finally, as alluded to before, we use a particular state-space representation for learning the system in the time domain from example sequences. It is known that any transfer function of the form (7.1) can be realized uniquely by the state-space system of the following special case of the Brunovsky normal form (Brunovsky, 1970):

A = [ 0            I_{ℓin}       0            · · ·   0
      0            0             I_{ℓin}      · · ·   0
      ...          ...           ...          ...     ...
      0            0             0            · · ·   I_{ℓin}
     −a_n I_{ℓin}  −a_{n−1} I_{ℓin}  −a_{n−2} I_{ℓin}  · · ·  −a_1 I_{ℓin} ],   B = [0, . . . , 0, I_{ℓin}]ᵀ, (7.3)

and C ∈ R^{ℓout×nℓin}, D ∈ R^{ℓout×ℓin}. The following theorem is a straightforward extension of Corollary 5.2 and Theorem 6.2 to the MIMO case.

Theorem 7.2 Suppose the transfer function G(z) of a MIMO system takes the form (7.1) and has norm ‖G‖_{H2} ≤ 1. If the common denominator p(z) is α-acquiescent by extension of degree d, then projected stochastic gradient descent over the state-space representation (7.3) will return Θ̂ with risk f(Θ̂) ≤ poly(n + d, σ, τ, (1 − α)^{−1}).

We note that since A and B are simply the tensor products of CC(a) and e_n with I_{ℓin}, the no-blow-up property (Lemma 4.4) for ‖A^k B‖ still remains true. Therefore, to prove Theorem 7.2 we essentially only need to run the proof of Lemma 5.5 with matrix notation and matrix norms. We defer the proof to the full version.

8. Simulations

In this section, we provide proof-of-concept experiments on synthetic data.
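As a concrete illustration of the objects above, the sketch below builds the Kronecker-structured realization (7.3) and a numerical version of the frequency-domain risk (7.2). It is a minimal sketch under assumed conventions, not the paper's code: we take the companion matrix CC(a) with −a in its last row, realize the transfer function by the standard formula G(z) = C(zI − A)^{−1}B + D, and approximate the integral by a uniform grid on the unit circle.

```python
import numpy as np

def companion(a):
    # CC(a) with ones on the superdiagonal and -a_n, ..., -a_1 in the last row
    # (assumed convention for p_a(z) = z^n + a_1 z^{n-1} + ... + a_n)
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.asarray(a)[::-1]
    return A

def mimo_realization(a, l_in):
    # A = CC(a) (tensor) I_{l_in}, B = e_n (tensor) I_{l_in}, as in (7.3)
    A = np.kron(companion(a), np.eye(l_in))
    e_n = np.zeros((len(a), 1)); e_n[-1, 0] = 1.0
    B = np.kron(e_n, np.eye(l_in))
    return A, B

def freq_risk(a1, C1, D1, a2, C2, D2, l_in=1, m=2048):
    # grid approximation of (7.2): integral of ||G(e^{i theta}) - G_hat(e^{i theta})||_F^2
    A1, B1 = mimo_realization(a1, l_in)
    A2, B2 = mimo_realization(a2, l_in)
    I1, I2 = np.eye(A1.shape[0]), np.eye(A2.shape[0])
    total = 0.0
    for theta in 2.0 * np.pi * np.arange(m) / m:
        z = np.exp(1j * theta)
        G1 = C1 @ np.linalg.solve(z * I1 - A1, B1) + D1
        G2 = C2 @ np.linalg.solve(z * I2 - A2, B2) + D2
        total += np.sum(np.abs(G1 - G2) ** 2)
    return total * 2.0 * np.pi / m
```

The Kronecker structure gives A^k B = (CC(a)^k e_n) ⊗ I_{ℓin}, which is exactly why the no-blow-up property transfers from the SISO case, and two systems differing only in D have risk 2π‖ΔD‖_F² under this grid approximation.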
We will demonstrate that 1) plain SGD tends to blow up even with a relatively small learning rate, especially on hard instances; 2) SGD with our projection step converges with a reasonably large learning rate, and with over-parameterization the final error is competitive; 3) SGD with gradient clipping has the strongest performance in terms of both convergence speed and final error. Here gradient clipping refers to the technique of using a normalized gradient instead of the true gradient. Specifically, for some positive hyperparameter B, we follow the approximate gradient

clip(g) = g if ‖g‖ ≤ B, and B·g/‖g‖ otherwise.

This method is commonly applied in training recurrent neural networks (Pascanu et al., 2013). Finding 1) suggests that stability is indeed a real concern. Finding 2) corroborates our theoretical study. Finding 3) suggests that the instability of SGD partly arises from the noise in the batches, and that such noise is reduced by gradient clipping. Our experiments suggest that the landscape of the objective function may be even nicer than what is predicted by our theoretical development. It remains possible that the objective has no non-global local minima, possibly even outside the convex set onto which our algorithm projects.

We generate the true system with state dimension d = 20 by randomly picking conjugate pairs of roots of the characteristic polynomial inside the circle of radius ρ = 0.95 and randomly generating the vector C from the standard normal distribution. The distribution of the norm of the impulse response ‖r‖ (defined in Section 3) of such systems is heavy-tailed. When ‖r‖ is several orders of magnitude larger than the median, it is difficult to learn the system. Thus we select systems with reasonable ‖r‖ for the experiments, and we observe that the difficulty of learning increases as ‖r‖ increases. The inputs of the dynamical model are generated from the standard normal distribution, with length T = 500.
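Two ingredients of this setup are easy to make concrete. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' experiment code: the uniform sampling of root pairs over the disk and the default clipping threshold B are our own choices. It implements the clipped-gradient rule above and the random generation of a true system with conjugate root pairs inside the radius-ρ circle:

```python
import numpy as np

def clip_gradient(g, B=1.0):
    """Follow g if ||g|| <= B, else the normalized gradient B * g / ||g||."""
    norm = np.linalg.norm(g)
    return g if norm <= B else (B / norm) * g

def random_system(d=20, rho=0.95, rng=None):
    """Sample d/2 conjugate pairs of roots in the disk of radius rho and a
    standard-normal C; return (a, C) with p(z) = z^d + a_1 z^{d-1} + ... + a_d."""
    rng = rng or np.random.default_rng(0)
    radii = rho * np.sqrt(rng.uniform(size=d // 2))   # sqrt => uniform over the disk
    phases = rng.uniform(0.0, np.pi, size=d // 2)
    roots = radii * np.exp(1j * phases)
    roots = np.concatenate([roots, roots.conj()])     # conjugate symmetry => real coefficients
    a = np.poly(roots).real[1:]
    C = rng.standard_normal(d)
    return a, C
```

By construction every root lies strictly inside the radius-ρ circle, so the sampled system is ρ-stable, and the clipped gradient never changes direction, only magnitude.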
We note that we generate fresh inputs and outputs at every iteration, and therefore the training loss is equal to the test loss (in expectation). We use an initial learning rate of 0.01 for projected gradient descent and for SGD with gradient clipping. We use batch size 100 for all experiments, and decay the learning rate by a factor of 10 at 200K and 250K iterations in all experiments.

Acknowledgments

We thank Amir Globerson, Alexandre Megretski, Pablo A. Parrilo, Yoram Singer, Peter Stoica, and Ruixiang Zhang for helpful discussions. We are indebted to Mark Tobenkin for pointers to relevant prior work. We also thank Alexandre Megretski for helpful feedback, insights into passive systems, and suggestions on how to organize Section 3.

Figure 2: The performance of projected stochastic gradient descent with over-parameterization, vanilla SGD, and SGD with gradient clipping, on three different instances of dynamical systems with true state dimension 20. The solid lines are from our proposed projected SGD with (over-parameterized) state dimension 20, 25, 30, 35. The dotted line corresponds to SGD with the gradient clipped to Frobenius norm 1. The dashed lines correspond to vanilla SGD, and the triangle marker means the error blows up to infinity. The plot demonstrates the effect of over-parameterization on our algorithm. We note that the losses are on different scales because the true systems in these three instances have different impulse-response norms (which equal the loss of the zero fit).

References

Alexandre S. Bazanella, Michel Gevers, Ljubisa Miskovic, and Brian D.O. Anderson. Iterative minimization of H2 control performance criteria. Automatica, 44:2549-2559, 2008.

Badri Narayan Bhaskar, Gongguo Tang, and Benjamin Recht. Atomic norm denoising with applications to line spectral estimation. IEEE Transactions on Signal Processing, 61(23):5987-5999, 2013.

Léon Bottou. On-line learning in neural networks.
chapter On-line Learning and Stochastic Approximations, pages 9-42. Cambridge University Press, New York, NY, USA, 1998. ISBN 0-521-65263-4. URL http://dl.acm.org/citation.cfm?id=304710.304720.

Pavol Brunovsky. A classification of linear controllable systems. Kybernetika, 6(3):173-188, 1970. URL http://eudml.org/doc/28376.

M. C. Campi and Erik Weyer. Finite sample properties of system identification methods. IEEE Transactions on Automatic Control, 47(8):1329-1334, 2002.

Hugh Durrant-Whyte and Tim Bailey. Simultaneous localization and mapping: part I. Robotics & Automation Magazine, IEEE, 13(2):99-110, 2006.

Diego Eckhard and Alexandre Sanfelice Bazanella. On the global convergence of identification of output error models. In Proc. 18th IFAC World Congress, 2011.

T. Estermann. Complex Numbers and Functions. Athlone Press, 1962.

Maryam Fazel, Haitham Hindi, and Stephen P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. American Control Conference, volume 6, pages 4734-4739. IEEE, 2001.

Maryam Fazel, Haitham Hindi, and S. Boyd. Rank minimization and applications in system theory. In Proc. American Control Conference, volume 4, pages 3273-3278. IEEE, 2004.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points - online stochastic gradient for tensor decomposition. In Proc. 28th COLT, pages 797-842, 2015.

E. Hazan, K. Y. Levy, and S. Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization. arXiv e-prints, July 2015.

Christiaan Heij, André Ran, and Freek van Schagen. Introduction to Mathematical Systems Theory: Linear Systems, Identification and Control. Birkhäuser, Basel, Boston, Berlin, 2007. ISBN 3-7643-7548-5. URL http://opac.inria.fr/record=b1130636.

João P. Hespanha. Linear Systems Theory. Princeton University Press, 2009.

Sepp Hochreiter and Jürgen Schmidhuber.
Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Nicholas Kottenstette and Panos J. Antsaklis. Relationships between positive real, passive dissipative, & positive systems. In American Control Conference (ACC), 2010, pages 409-416. IEEE, 2010.

Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: Isoperimetry and Processes, volume 23. Springer Science & Business Media, 2013.

J. D. Lee, M. Simchowitz, M. I. Jordan, and B. Recht. Gradient descent converges to minimizers. arXiv e-prints, February 2016.

Sergey Levine and Vladlen Koltun. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning, pages 1-9, 2013.

Lennart Ljung. System Identification: Theory for the User. Prentice Hall, Upper Saddle River, NJ, 2nd edition, 1998.

Alexandre Megretski. Convex optimization in robust identification of nonlinear feedback. In Proceedings of the 47th Conference on Decision and Control, 2008.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310-1318, 2013.

Ali Rahimi, Ben Recht, and Trevor Darrell. Learning appearance manifolds from video. In Proc. IEEE CVPR, 2005.

A. C. Schaeffer. Inequalities of A. Markoff and S. Bernstein for polynomials and related functions. Bull. Amer. Math. Soc., 47(8):565-579, 1941. URL http://projecteuclid.org/euclid.bams/1183503783.

Parikshit Shah, Badri Narayan Bhaskar, Gongguo Tang, and Benjamin Recht. Linear system identification via atomic norm regularization. In Proceedings of the 51st Conference on Decision and Control, 2012.

Torsten Söderström and Petre Stoica. Some properties of the output error method. Automatica, 18(1):93-99, 1982.

Petre Stoica and Torsten Söderström. Uniqueness of prediction error estimates of multivariable moving average models.
IFAC Proceedings Volumes, 15(4):199-204, 1982.

Petre Stoica and Torsten Söderström. Uniqueness of estimated k-step prediction models of ARMA processes. Systems & Control Letters, 4(6):325-331, 1984.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proc. 27th NIPS, pages 3104-3112, 2014. URL http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.

M. Vidyasagar and Rajeeva L. Karandikar. A learning theory approach to system identification and stochastic adaptive control. Journal of Process Control, 18(3):421-430, 2008.

Erik Weyer and M. C. Campi. Finite sample properties of system identification methods. In Proceedings of the 38th Conference on Decision and Control, 1999.

Appendix A. Background on optimization

The proof below uses the standard analysis of gradient descent for non-smooth objectives and demonstrates that the argument still works for weakly-quasi-convex functions.

Proof [Proof of Proposition 2.3] We start by using the weakly-quasi-convex condition; the rest follows a variant of the standard analysis of non-smooth projected sub-gradient descent.8 Conditioning on θ_k, we have

τ(f(θ_k) − f(θ*)) ≤ ∇f(θ_k)ᵀ(θ_k − θ*) = E[r(θ_k)ᵀ(θ_k − θ*) | θ_k]
  = (1/η) E[(θ_k − w_{k+1})ᵀ(θ_k − θ*) | θ_k]
  = (1/2η) (E[‖θ_k − w_{k+1}‖² | θ_k] + ‖θ_k − θ*‖² − E[‖w_{k+1} − θ*‖² | θ_k])
  = (η/2) E[‖r(θ_k)‖² | θ_k] + (1/2η) (‖θ_k − θ*‖² − E[‖w_{k+1} − θ*‖² | θ_k]), (A.1)

where the first inequality uses weak quasi-convexity and the rest of the lines are simply algebraic manipulations.

Footnote 8: Although we used weak smoothness to get a slightly better bound.

Since θ_{k+1} is the projection of w_{k+1} onto B and θ* belongs to B, we have ‖w_{k+1} − θ*‖ ≥ ‖θ_{k+1} − θ*‖. Together with (A.1), and

E[‖r(θ_k)‖²] = ‖∇f(θ_k)‖² + Var[r(θ_k)] ≤ Γ(f(θ_k) − f(θ*)) + V,

we obtain

τ(f(θ_k) − f(θ*)) ≤ (ηΓ/2)(f(θ_k) − f(θ*)) + ηV/2 + (1/2η)(‖θ_k − θ*‖² − E[‖θ_{k+1} − θ*‖² | θ_k]).
Taking expectations over all the randomness and summing over k, we obtain

(1/K) Σ_{k=0}^{K−1} E[f(θ_k) − f(θ*)] ≤ (1/(τ − ηΓ/2)) (ηV/2 + R²/(2ηK)),

where we use the assumption that ‖θ_0 − θ*‖ ≤ R. Suppose K ≥ 4R²Γ²/(Vτ²); then we take η = R/√(VK). Therefore τ − ηΓ/2 ≥ τ/2, and hence

(1/K) Σ_{k=0}^{K−1} E[f(θ_k) − f(θ*)] ≤ 4R√V/(√K τ). (A.2)

On the other hand, if K ≤ 4R²Γ²/(Vτ²), we pick η = τ/(2Γ) and obtain

(1/K) Σ_{k=0}^{K−1} E[f(θ_k) − f(θ*)] ≤ 4ΓR²/(τ²K). (A.3)

Therefore, using equations (A.2) and (A.3), we obtain that when choosing η properly according to K as above,

E[min_{k∈[K]} f(θ_k) − f(θ*)] ≤ max{4R√V/(√K τ), 4ΓR²/(τ²K)}.

Appendix B. Toolbox

Lemma B.1 Let B = e_n ∈ R^{n×1} and w ∈ C with ρ(A)|w| < 1, where A has the controllable canonical form A = CC(a). Then

(I − wA)^{−1}B = (1/p_a(w^{−1})) · [w^{−1}, w^{−2}, . . . , w^{−n}]ᵀ,

where p_a(x) = x^n + a_1 x^{n−1} + · · · + a_n is the characteristic polynomial of A.

Proof Let v = (I − wA)^{−1}B; then (I − wA)v = B. Note that B = e_n, and I − wA is of the form

[ 1    −w    0    · · ·   0
  0     1   −w    · · ·   0
  ...   ...  ...   ...    ...
  0     0    0    · · ·  −w
  a_n w   a_{n−1} w   a_{n−2} w   · · ·   1 + a_1 w ].

Therefore we obtain v_k = w v_{k+1} for 1 ≤ k ≤ n − 1. That is, v_k = v_0 w^{−k} with v_0 = v_1 w. Using the fact that ((I − wA)v)_n = 1, we obtain v_0 = p_a(w^{−1})^{−1}, where p_a(·) is the polynomial p_a(x) = x^n + a_1 x^{n−1} + · · · + a_n. Consequently, for any row vector u = [u_1, . . . , u_n],

u(I − wA)^{−1}B = (u_1 w^{−1} + · · · + u_n w^{−n}) / p_a(w^{−1}).

Lemma B.2 Suppose x_1, . . . , x_n are independent random vectors with mean 0 and covariance matrix I, and U_1, . . . , U_n are fixed matrices. Then E‖Σ_{k=1}^n U_k x_k‖² = Σ_{k=1}^n ‖U_k‖_F².

Proof We have E‖Σ_{k=1}^n U_k x_k‖² = Σ_{k,ℓ} E[tr(U_k x_k x_ℓᵀ U_ℓᵀ)] = Σ_k tr(U_k E[x_k x_kᵀ] U_kᵀ) = Σ_{k=1}^n ‖U_k‖_F².

Appendix C. Missing proofs in Sections 4 and 5

C.1 Monotonicity of acquiescence: Proof of Lemma 4.3

Lemma C.1 (Lemma 4.3 restated) For any 0 < α < β, we have Bα ⊆ Bβ.

Proof Let q_a(z) = 1 + a_1 z + · · · + a_n z^n. Note that q_a(z^{−1}) = p_a(z)/z^n. Therefore we note that Bα = {a : q_a(z) ∈ C, |z| = 1/α}. Suppose a ∈ Bα; then ℜ(q_a(z)) ≥ τ1 for any z with |z| = 1/α. Since ℜ(q_a(z)) is the real part of the holomorphic function q_a(z), it is a harmonic function.
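Lemma B.1 above is easy to sanity-check numerically. The sketch below is an illustration with an arbitrary stable choice of a and a hypothetical companion-matrix convention (ones on the superdiagonal, −a_n, . . . , −a_1 in the last row); it verifies the claimed closed form of (I − wA)^{−1}B at a test point w:

```python
import numpy as np

def companion(a):
    # controllable canonical form CC(a) for p_a(x) = x^n + a_1 x^{n-1} + ... + a_n
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.asarray(a)[::-1]
    return A

a = np.array([0.4, -0.3, 0.2])        # p_a(x) = x^3 + 0.4 x^2 - 0.3 x + 0.2
n = len(a)
A = companion(a)
B = np.zeros(n); B[-1] = 1.0
w = 0.7 * np.exp(0.3j)                 # a point with rho(A) |w| < 1

lhs = np.linalg.solve(np.eye(n) - w * A, B)
p_a = np.polyval(np.concatenate([[1.0], a]), 1 / w)            # p_a(w^{-1})
rhs = np.array([w ** (-(k + 1)) for k in range(n)]) / p_a      # (w^{-1}, ..., w^{-n}) / p_a(w^{-1})
assert np.allclose(lhs, rhs)
```

Premultiplying `lhs` by a row vector u recovers the last display of the proof, and the entries satisfy the recursion v_k = w v_{k+1} used in the derivation.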
By the maximum (minimum) principle for harmonic functions, we have that for any |z| ≤ 1/α, ℜ(q_a(z)) ≥ inf_{|z|=1/α} ℜ(q_a(z)) ≥ τ1. In particular, it holds for |z| = 1/β < 1/α that ℜ(q_a(z)) ≥ τ1. Similarly, we can prove that for z with |z| = 1/β, ℜ(q_a(z)) ≥ (1 + τ0)|ℑ(q_a(z))|, and the other conditions for a being in Bβ.

C.2 Proof of Lemma 5.4

Lemma 5.4 follows directly from the following more general lemma, which also handles the multi-input multi-output case. It can be seen from a calculation similar to the proof of Lemma 3.5. We mainly need to control the tail of the series using the no-blow-up property (Lemma 4.4) and argue that a wrong value of the initial state h_0 does not cause any trouble for the partial loss function ℓ((x, y), Θ̂) (defined in Algorithm 1). This is simply because after time T1 = T/4 the influence of the initial state has already been washed out.

Lemma C.2 In Algorithm 3, the values of G_A, G_C, G_D are equal to the gradients of g(Â, Ĉ) + ‖D̂ − D‖² with respect to Â, Ĉ, and D̂, up to inverse-exponentially small error.

Proof [Proof of Lemma C.2] We first show that the partial empirical loss function ℓ((x, y), Θ̂) has expectation almost equal to the idealized risk (up to the term for D̂ and an exponentially small error),

E[ℓ((x, y), Θ̂)] = g(Â, Ĉ) + ‖D̂ − D‖² ± exp(−Ω((1 − α)T)).

This can be seen from a calculation similar to the proof of Lemma 3.5. Note that

y_t = Dx_t + Σ_{k=1}^{t−1} CA^{t−k−1}Bx_k + CA^{t−1}h_0 + ξ_t and ŷ_t = D̂x_t + Σ_{k=1}^{t−1} ĈÂ^{t−k−1}B̂x_k. (C.1)

Therefore, noting that when t ≥ T1 = Ω(T) we have ‖CA^{t−1}h_0‖ ≤ exp(−Ω((1 − α)T)), the effect of h_0 is negligible. Then we have

E[ℓ((x, y), Θ̂)] = (1/(T − T1)) E[Σ_{t>T1} ‖ŷ_t − y_t‖²] ± exp(−Ω((1 − α)T))
  = ‖D̂ − D‖² + (1/(T − T1)) Σ_{t>T1} Σ_{0≤j≤t−1} ‖ĈÂ^j B − CA^j B‖² ± exp(−Ω((1 − α)T))
  = ‖D̂ − D‖² + Σ_{j≥0} ‖ĈÂ^j B − CA^j B‖² ± exp(−Ω((1 − α)T)),
where the first line uses the fact that ‖CA^{t−1}h_0‖ ≤ exp(−Ω((1 − α)T)), the second uses equation (3.10), and the last line uses the no-blow-up property of Â^k B (Lemma 4.4). Similarly, we can prove that the gradient of E[ℓ((x, y), Θ̂)] is also close to the gradient of g(Â, Ĉ) + ‖D̂ − D‖², up to inverse-exponential error.

C.3 Proof of Lemma 5.5

Proof [Proof of Lemma 5.5] Both G_A and G_C can be written as quadratic forms (with vector coefficients) in x_1, . . . , x_T and ξ_1, . . . , ξ_T. That is, we write

G_A = Σ_{s,t} x_s x_t u_{st} + Σ_{s,t} x_s ξ_t u′_{st} and G_C = Σ_{s,t} x_s x_t v_{st} + Σ_{s,t} x_s ξ_t v′_{st},

where u_{st}, u′_{st}, v_{st}, v′_{st} are vectors that will be calculated below. By Lemma C.3, we have

Var[Σ_{s,t} x_s x_t u_{st} + Σ_{s,t} x_s ξ_t u′_{st}] ≤ O(1) Σ_{s,t} ‖u_{st}‖² + O(σ²) Σ_{s,t} ‖u′_{st}‖². (C.2)

Therefore, in order to bound Var[G_A] from above, it suffices to bound Σ ‖u_{st}‖² and Σ ‖u′_{st}‖², and similarly for G_C. We begin by writing out u_{st} for fixed s, t ∈ [T] and bounding its norm. We use the same notation as in the proof of Lemma 5.4. Recall that we set r_k = CA^k B and r̂_k = ĈÂ^k B, and Δr_k = r̂_k − r_k. Moreover, let z_k = Â^k B. The sums of ‖z_k‖² and Δr_k² can be controlled: by the assumptions of the lemma, we have

Σ_{k≥0} ‖z_k‖² ≤ 2πn/τ1², ‖z_k‖² ≤ 2πn α^{2k−2n}/τ1², (C.3)
Σ_{k≥0} Δr_k² ≤ 4πn/τ1², Δr_k² ≤ 4πn α^{2k−2n}/τ1², (C.4)

which will be used many times in the proof that follows. We calculate the explicit form of G_A using the explicit back-propagation Algorithm 3. We have that, in Algorithm 3,

ĥ_k = Σ_{j=1}^k Â^{k−j}Bx_j = Σ_{j=1}^k z_{k−j}x_j, (C.5)
h̄_k = Σ_{j=k}^T (Âᵀ)^{j−k}Ĉᵀ Δy_j 1_{j>T1}. (C.6)

Then, using G_A = Σ_{k≥2} Bᵀ h̄_k ĥ_{k−1}ᵀ and equations (C.5) and (C.6) above, we have

u_{st} = Σ_{k=2}^T Σ_{j≥max{k,s,T1+1}} Δr_{j−s} r̂_{j−k} 1_{k≥t+1} z_{k−t−1}, (C.7)
u′_{st} = Σ_{k=2}^T z_{k−1−s} 1_{k≥s+1} r̂_{t−k} 1_{t>max{T1,k}} = Σ_{s+1≤k≤t} z_{k−1−s} r̂_{t−k} 1_{t>T1}. (C.8)

Towards bounding ‖u_{st}‖, we consider four different cases. Let Λ = Ω(max{n, (1 − α)^{−1} log(1/(1 − α))}) be a threshold.
Case 1: When 0 ≤ s − t ≤ Λ, we rewrite u_{st} by rearranging equation (C.7) and changing variables to ℓ = j − s:

u_{st} = Σ_{ℓ≥0, ℓ≥T1+1−s} Δr_ℓ ( Σ_{s≤k≤ℓ+s, k≤T} r̂_{ℓ+s−k} z_{k−t−1} ) + Σ_{ℓ≥0, ℓ≥T1+1−s} Δr_ℓ ( Σ_{s>k>t} r̂_{ℓ+s−k} z_{k−t−1} ). (C.9)

Then, by the Cauchy-Schwarz inequality,

‖u_{st}‖² ≤ 2 (Σ_ℓ Δr_ℓ²) (S1 + S2), where S1 = Σ_ℓ ( Σ_{s≤k≤ℓ+s, k≤T} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ )², S2 = Σ_ℓ ( Σ_{s>k>t} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ )². (C.10)

We can bound the contribution from Σ_ℓ Δr_ℓ² using equation (C.4), and it remains to bound the terms S1 and S2. Using the tail bounds for ‖z_k‖ (equation (C.3)) and the fact that |r̂_k| = |ĈÂ^k B| ≤ ‖Â^k B‖ = ‖z_k‖, we have

Σ_{s≤k≤ℓ+s} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ ≤ O(ℓ n α^{(ℓ+s−t−1)−2n}/τ1²). (C.11)

Note that equation (C.11) is particularly effective when ℓ > Λ. When ℓ ≤ Λ, we can refine the bound using equation (C.3) and Cauchy-Schwarz:

Σ_{s≤k≤ℓ+s} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ ≤ (Σ_k |r̂_{ℓ+s−k}|²)^{1/2} (Σ_k ‖z_{k−t−1}‖²)^{1/2} ≤ O(√n/τ1) · O(√n/τ1) = O(n/τ1²). (C.12)

Plugging equations (C.12) and (C.11) into (C.10), we have

S1 ≤ Σ_{0≤ℓ≤Λ} O(n²/τ1⁴) + Σ_{ℓ>Λ} O(ℓ²n²α^{2(ℓ+s−t−1)−4n}/τ1⁴) ≤ O(n²Λ/τ1⁴) + O(n²/τ1⁴) = O(n²Λ/τ1⁴). (C.13)

For the second term, we bound similarly: S2 ≤ O(n²Λ/τ1⁴). (C.14) Therefore, using the bounds for S1 and S2, we obtain

‖u_{st}‖² ≤ O(n³Λ/τ1⁶). (C.15)

Case 2: When s − t > Λ, we tighten equation (C.13) by observing that

S1 ≤ α^{2(s−t−1)−4n} Σ_{ℓ≥0} O(ℓ²n²α^{2ℓ}/τ1⁴) ≤ α^{s−t−1} O(n²/(τ1⁴(1 − α)³)), (C.16)

where we used equation (C.11). Similarly, we can prove S2 ≤ α^{s−t−1} O(n²/(τ1⁴(1 − α)³)). Therefore, when s − t > Λ,

‖u_{st}‖² ≤ O(n³/((1 − α)³τ1⁶)) α^{s−t−1}. (C.17)

Case 3: When −Λ ≤ s − t ≤ 0, we can rewrite u_{st} and use the Cauchy-Schwarz inequality to obtain

u_{st} = Σ_{ℓ≥0, ℓ≥T1+1−s} Δr_ℓ ( Σ_{t+1≤k≤ℓ+s, k≤T} r̂_{ℓ+s−k} z_{k−t−1} ),
‖u_{st}‖² ≤ (Σ_ℓ Δr_ℓ²) · Σ_ℓ ( Σ_{t+1≤k≤ℓ+s, k≤T} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ )².

Using almost the same arguments as in equations (C.11) and (C.12), we get

Σ_{t+1≤k≤ℓ+s} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ ≤ O(ℓ n α^{(ℓ+s−t−1)−2n}/τ1²) and Σ_{t+1≤k≤ℓ+s} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ ≤ O(n/τ1²).

Then, using the same type of argument as in equation (C.13), we have

Σ_ℓ ( Σ_{t+1≤k≤ℓ+s, k≤T} |r̂_{ℓ+s−k}| ‖z_{k−t−1}‖ )² ≤ O(n²Λ/τ1⁴) + O(n²/τ1⁴) = O(n²Λ/τ1⁴).

It follows that in this case ‖u_{st}‖² can be bounded by the same bound as in (C.15).

Case 4: When s − t < −Λ, we use a different simplification of u_{st}. It follows from (C.7) that

‖u_{st}‖ ≤ Σ_{k≥t+1} ‖z_{k−t−1}‖ Σ_{j≥max{k,T1+1}} |Δr_{j−s} r̂_{j−k}|.

Since j − s ≥ k − s > 4n, it follows that

Σ_{j≥max{k,T1+1}} |Δr_{j−s} r̂_{j−k}| ≤ Σ_{j≥max{k,T1+1}} O(√n/τ1 · α^{j−s−n}) · O(√n/τ1 · α^{j−k−n}) ≤ O(n/(τ1²(1 − α)) · α^{k−s−n}).

Then we have

‖u_{st}‖² ≤ ( Σ_{k≥t+1} ‖z_{k−t−1}‖² ) · ( Σ_{k≥t+1} ( Σ_{j≥max{k,T1+1}} |Δr_{j−s} r̂_{j−k}| )² ) ≤ O(n/τ1²) · O(n²/(τ1⁴(1 − α)³) α^{t−s}) = O(n³/(τ1⁶(1 − α)³)) α^{t−s}.

Therefore, using the bounds for ‖u_{st}‖² obtained in the four cases above, and taking the sum over s, t, we obtain

Σ_{1≤s,t≤T} ‖u_{st}‖² ≤ Σ_{|s−t|≤Λ} O(n³Λ/τ1⁶) + Σ_{|s−t|>Λ} O(n³/(τ1⁶(1 − α)³)) α^{|t−s|−1} ≤ O(Tn³Λ²/τ1⁶). (C.19)

This finishes the bounds for ‖u_{st}‖, and we now turn to bounding ‖u′_{st}‖². Using the formula for u′_{st} (equation (C.8)), we have u′_{st} = 0 for t ≤ s + 1. For s + 2 ≤ t ≤ s + Λ, we have by the Cauchy-Schwarz inequality

‖u′_{st}‖² ≤ ( Σ_{s+1≤k≤t} ‖z_{k−1−s}‖² ) ( Σ_{s+1≤k≤t} |r̂_{t−k}|² ) ≤ O(n/τ1²) · O(n/τ1²).

On the other hand, for t > s + Λ, by the bound |r̂_k|² ≤ ‖z_k‖² ≤ O(n α^{2k−2n}/τ1²), we have

‖u′_{st}‖ ≤ Σ_{s+1≤k≤t−1} ‖z_{k−1−s}‖ |r̂_{t−k}| ≤ O(n(t − s) α^{t−s−1}/τ1²).
Therefore, taking the sum over s, t similarly to equation (C.19), we obtain

Σ_{s,t∈[T]} ‖u′_{st}‖² ≤ O(Tn²Λ/τ1⁴). (C.20)

Then, using equation (C.2) together with equations (C.19) and (C.20), we obtain

Var[(T − T1) G_A] ≤ O(Tn³Λ²/τ1⁶ + σ²Tn²Λ/τ1⁴).

Hence it follows that

Var[G_A] ≤ (1/(T − T1)²) Var[(T − T1) G_A] ≤ O(n³Λ²/τ1⁶ + σ²n²Λ/τ1⁴)/T.

We can prove the bound for G_C similarly.

Lemma C.3 Let x_1, . . . , x_T be independent random variables with mean 0, variance 1, and fourth moments bounded by O(1), and let u_{ij} be vectors for i, j ∈ [T]. Moreover, let ξ_1, . . . , ξ_T be independent random variables with mean 0 and variance σ², and let u′_{ij} be vectors for i, j ∈ [T]. Then

Var[Σ_{i,j} x_i x_j u_{ij} + Σ_{i,j} x_i ξ_j u′_{ij}] ≤ O(1) Σ_{i,j} ‖u_{ij}‖² + O(σ²) Σ_{i,j} ‖u′_{ij}‖².

Proof Note that the two sums in the target are uncorrelated (the second has mean 0); therefore we only need to bound the variance of each sum individually. The proof follows from the linearity of expectation and the independence of the x_i's:

E‖Σ_{i,j} x_i x_j u_{ij}‖² = Σ_{i,j,k,ℓ} E[x_i x_j x_k x_ℓ u_{ij}ᵀ u_{kℓ}]
  = Σ_i E[u_{ii}ᵀ u_{ii} x_i⁴] + Σ_{i≠j} E[u_{ii}ᵀ u_{jj} x_i² x_j²] + Σ_{i≠j} E[x_i² x_j² (u_{ij}ᵀ u_{ij} + u_{ij}ᵀ u_{ji})]
  ≤ Σ_{i,j} u_{ii}ᵀ u_{jj} + O(1) Σ_{i,j} ‖u_{ij} + u_{ji}‖² = ‖Σ_i u_{ii}‖² + O(1) Σ_{i,j} ‖u_{ij} + u_{ji}‖²,

where at the second line we used the fact that any monomial x^α with an odd degree in one of the x_i's satisfies E[x^α] = 0. Note that E[Σ_{i,j} x_i x_j u_{ij}] = Σ_i u_{ii}. Therefore,

Var[Σ_{i,j} x_i x_j u_{ij}] = E‖Σ_{i,j} x_i x_j u_{ij}‖² − ‖E[Σ_{i,j} x_i x_j u_{ij}]‖² ≤ O(1) Σ_{i,j} ‖u_{ij}‖². (C.21)

Similarly, we can control Var[Σ_{i,j} x_i ξ_j u′_{ij}] by O(σ²) Σ_{i,j} ‖u′_{ij}‖².

Appendix D. Missing proofs in Section 6

D.1 Proof of Lemma 6.5

Towards proving Lemma 6.5, we use the following lemma to express the inverse of a polynomial as a sum of inverses of degree-1 polynomials.

Lemma D.1 Let p(z) = (z − λ_1) · · · (z − λ_n), where the λ_j's are distinct. Then

1/p(z) = Σ_{j=1}^n t_j/(z − λ_j), where t_j = Π_{i≠j} (λ_j − λ_i)^{−1}. (D.1)

Proof [Proof of Lemma D.1] By interpolating the constant function 1 at the points λ_1, . . .
, λ_n using the Lagrange interpolation formula, we have

1 = Σ_{j=1}^n Π_{i≠j} (x − λ_i) · Π_{i≠j} (λ_j − λ_i)^{−1}. (D.2)

Dividing both sides by p(z), we obtain equation (D.1).

The following lemma computes the Fourier transform of the function 1/(z − λ).

Lemma D.2 Let m ∈ Z, let K be the unit circle in the complex plane, and let λ ∈ C lie inside K. Then

∮_K z^m/(z − λ) dz = 2πi λ^m for m ≥ 0, and 0 otherwise.

Proof [Proof of Lemma D.2] For m ≥ 0, since z^m is a holomorphic function, by Cauchy's integral formula we have ∮_K z^m/(z − λ) dz = 2πi λ^m. For m < 0, by the change of variable y = z^{−1}, the integral becomes an integral of y^{−m−1}/(1 − λy) over the unit circle. Since |λy| = |λ| < 1 on the contour, by Taylor expansion we have 1/(1 − λy) = Σ_{k=0}^{∞} (λy)^k. Since the series is dominated by Σ_k |λ|^k, which converges, we can switch the integral with the sum. Note that y^{−m−1+k} is holomorphic for m < 0 and k ≥ 0, and therefore we conclude that the integral is 0.

Now we are ready to prove Lemma 6.5.

Proof [Proof of Lemma 6.5] Let m = n + d. We compute the Fourier transform of z^m/p(z). That is, we write

e^{imθ}/p(e^{iθ}) = Σ_{k=−∞}^{∞} β_k e^{ikθ}, with β_k = (1/2π) ∫_0^{2π} e^{i(m−k)θ}/p(e^{iθ}) dθ = (1/2πi) ∮_K z^{m−k−1}/p(z) dz.

By Lemma D.1, we write 1/p(z) = Σ_{j=1}^n t_j/(z − λ_j). Using Lemma D.2, we obtain

β_k = Σ_{j=1}^n t_j λ_j^{m−k−1} if k ≤ m − 1, and 0 otherwise. (D.3)

We claim that

Σ_{j=1}^n t_j λ_j^{n−1} = 1, and Σ_{j=1}^n t_j λ_j^s = 0 for 0 ≤ s < n − 1.

Indeed, these can be obtained by writing out the Lagrange interpolation of the polynomial f(x) = x^s with s ≤ n − 1 and comparing leading coefficients. Therefore, we further simplify β_k to

β_k = Σ_{j=1}^n t_j λ_j^{m−k−1} if k < m − n; 1 if k = m − n; 0 otherwise. (D.4)

Let h(z) = Σ_{k≥0} β_k z^k. Then h(z) is a polynomial of degree d = m − n with leading coefficient 1. Moreover, for our choice of d, on the unit circle (using |λ_j| ≤ 1 − γ),

|z^m/p(z) − h(z)| ≤ Σ_{k<0} |β_k| ≤ n max_j |t_j| Σ_{k<0} (1 − γ)^{m−k−1} ≤ Γ(1 − γ)^d/γ < ζ.

D.2 Proof of Theorem 6.9

Theorem 6.9 follows directly from a combination of Lemma D.3 and Lemma D.4 below. Lemma D.3 shows that the denominator of a transfer function (under the stated assumptions) can be extended to a polynomial that takes values in C+ on the unit circle.
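The partial-fraction identity (D.1) and the interpolation identities used to simplify β_k are easy to verify numerically. The sketch below checks them at an arbitrary test point, with arbitrarily chosen distinct roots (including conjugate pairs):

```python
import numpy as np

# distinct roots, including conjugate pairs (arbitrary choices for the check)
lam = np.array([0.5, -0.4 + 0.3j, -0.4 - 0.3j, 0.2j, -0.2j])

# t_j = prod_{i != j} (lambda_j - lambda_i)^{-1}
t = np.array([1.0 / np.prod(lam[j] - np.delete(lam, j)) for j in range(len(lam))])

z = 1.7 + 0.5j                       # any point that is not a root
lhs = 1.0 / np.prod(z - lam)         # 1 / p(z)
rhs = np.sum(t / (z - lam))          # sum_j t_j / (z - lambda_j)
assert np.isclose(lhs, rhs)
```

The same coefficients t_j also satisfy Σ_j t_j λ_j^{n−1} = 1 and Σ_j t_j λ_j^s = 0 for 0 ≤ s < n − 1, which is the claim used to derive (D.4).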
Lemma D.4 shows that it can be further extended to another polynomial that takes values in C.

Lemma D.3 Suppose the roots of s(z) are inside the circle of radius α < 1, and let Γ = Γ(s). If the transfer function G(z) = s(z)/p(z) satisfies G(z) ∈ C_{τ0,τ1,τ2} (or G(z) ∈ C+_{τ0,τ1,τ2}) for every z on the unit circle, then there exists u(z) of degree d = O_τ(max{(1 − α)^{−1} log(Γ/(1 − α)), 0}) such that p(z)u(z)/z^{n+d} ∈ C_{τ′0,τ′1,τ′2} (or p(z)u(z)/z^{n+d} ∈ C+_{τ′0,τ′1,τ′2}, respectively) for τ′ = Θ_τ(1), where O_τ(·), Θ_τ(·) hide polynomial dependencies on τ0, τ1, τ2.

Proof [Proof of Lemma D.3] By the fact that G(z) = s(z)/p(z) ∈ C_{τ0,τ1,τ2}, we have p(z)/s(z) ∈ C_{τ′0,τ′1,τ′2} for some τ′ depending polynomially on τ. Using Lemma 6.5, there exists u(z) of degree d such that

|z^{n+d}/s(z) − u(z)| ≤ ζ,

where we set ζ ≤ min{τ′0, τ′1} τ′2^{−1} ‖p‖_{H∞}^{−1}. Then we have

|p(z)u(z)/z^{n+d} − p(z)/s(z)| ≤ |p(z)| ζ ≤ min{τ′0, τ′1}. (D.5)

Equation (D.5) implies that p(z)u(z)/z^{n+d} ∈ C_{τ′′0,τ′′1,τ′′2}, where τ′′ depends polynomially on τ. The same proof still works when we replace C by C+.

Lemma D.4 Suppose p(z), of degree n and leading coefficient 1, satisfies p(z) ∈ C+_{τ0,τ1,τ2} for every z on the unit circle. Then there exists u(z) of degree d = O_τ(n) such that p(z)u(z)/z^{n+d} ∈ C_{τ′0,τ′1,τ′2} for every z on the unit circle, with τ′0, τ′1, τ′2 = Θ_τ(1), where O_τ(·), Θ_τ(·) hide the dependencies on τ0, τ1, τ2.

Proof [Proof of Lemma D.4] We fix z on the unit circle first. Let √(p(z)/z^n) be the square root of p(z)/z^n with principal value. Let us write p(z)/z^n = τ2(1 + (p(z)/(τ2 z^n) − 1)) and take the Taylor expansion

1/√(p(z)/z^n) = τ2^{−1/2} (1 + (p(z)/(τ2 z^n) − 1))^{−1/2} = τ2^{−1/2} Σ_{k=0}^{∞} c_k (p(z)/(τ2 z^n) − 1)^k.

Note that since τ1 < |p(z)| < τ2, we have |p(z)/(τ2 z^n) − 1| < 1 − τ1/τ2. Therefore, truncating the Taylor series at k = O_τ(1), we obtain a rational function h(z) of the form

h(z) = τ2^{−1/2} Σ_{j=0}^{k} c_j (p(z)/(τ2 z^n) − 1)^j,

which approximates 1/√(p(z)/z^n) with precision ζ ≤ min{τ0, τ1}/τ2; that is,

|1/√(p(z)/z^n) − h(z)| ≤ ζ.

Therefore, we obtain

|(p(z)/z^n) h(z) − √(p(z)/z^n)| ≤ ζ |p(z)/z^n| ≤ ζ τ2.
Note that since p(z)/z^n ∈ C+_{τ0,τ1,τ2}, we have √(p(z)/z^n) ∈ C_{τ′0,τ′1,τ′2} for some constants τ′0, τ′1, τ′2. Therefore p(z)h(z)/z^n ∈ C_{τ′0,τ′1,τ′2}. Note that h(z) is not a polynomial yet. Let u(z) = z^{nk} h(z); then u(z) is a polynomial of degree at most nk, and p(z)u(z)/z^{n(k+1)} ∈ C_{τ′0,τ′1,τ′2} for every z on the unit circle.

Appendix E. Back-propagation implementation

In this section, we give a detailed implementation of using back-propagation to compute the gradient of the loss function. The algorithm is for the general MIMO case with the parameterization (7.3). To obtain the SISO sub-case, simply take ℓin = ℓout = 1.

Algorithm 3 Back-propagation
Parameters: â ∈ R^n, Ĉ ∈ R^{ℓout×nℓin}, and D̂ ∈ R^{ℓout×ℓin}. Let Â = MCC(â) = CC(â) ⊗ I_{ℓin} and B̂ = e_n ⊗ I_{ℓin}.
Input: samples ((x^{(1)}, y^{(1)}), . . . , (x^{(N)}, y^{(N)})) and projection set Bα.
for each sample (x^{(j)}, y^{(j)}) = ((x_1, . . . , x_T), (y_1, . . . , y_T)) do
  Feed-forward pass: ĥ_0 = 0 ∈ R^{nℓin}.
  for k = 1 to T do: ĥ_k ← Â ĥ_{k−1} + B̂ x_k, ŷ_k ← Ĉ ĥ_k + D̂ x_k. end for
  Back-propagation: h̄_{T+1} ← 0, G_A ← 0, G_C ← 0, G_D ← 0, T1 ← T/4.
  for k = T down to 1 do:
    if k > T1, Δy_k ← ŷ_k − y_k; otherwise Δy_k ← 0. Let h̄_k ← Ĉᵀ Δy_k + Âᵀ h̄_{k+1}.
    Update G_C ← G_C + (1/(T−T1)) Δy_k ĥ_kᵀ, G_A ← G_A − (1/(T−T1)) B̂ᵀ h̄_k ĥ_{k−1}ᵀ, and G_D ← G_D + (1/(T−T1)) Δy_k x_kᵀ.
  end for
  Gradient update: Â ← Â − η G_A, Ĉ ← Ĉ − η G_C, D̂ ← D̂ − η G_D.
  Projection step: obtain â from Â, set â ← Π_{Bα}(â), and Â ← MCC(â).
end for

Appendix F. Projection to the set Bα

In order to have a fast projection algorithm onto the convex set Bα, we consider a grid G_M of size M over the circle of radius α. We will show that M = O_τ(n) is enough to approximate the set Bα, in the sense that projecting onto the approximating set suffices for convergence. Let B′_{α,τ0,τ1,τ2} = {a : p_a(z)/z^n ∈ C_{τ0,τ1,τ2}, ∀z ∈ G_M} and B_{α,τ0,τ1,τ2} = {a : p_a(z)/z^n ∈ C_{τ0,τ1,τ2}, ∀|z| = α}. Here C_{τ0,τ1,τ2} is defined the same as before, though we use the subscripts to emphasize the dependency on the τ_i's:

C_{τ0,τ1,τ2} = {z : ℜz ≥ (1 + τ0)|ℑz|} ∩ {z : τ1 < ℜz < τ2}.
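The membership test defining B′_α is immediate to implement. The sketch below is illustrative (the cone test mirrors the definition above, the grid size M = 20n echoes Corollary F.3, and the example polynomials and τ values are arbitrary choices); it evaluates p_a(z)/z^n on the M-point grid of radius α and checks the two cone conditions:

```python
import numpy as np

def in_cone(v, tau0, tau1, tau2):
    # membership in C_{tau0,tau1,tau2}: Re v >= (1+tau0)|Im v| and tau1 < Re v < tau2
    return v.real >= (1 + tau0) * abs(v.imag) and tau1 < v.real < tau2

def in_grid_set(a, alpha=0.9, M=None, tau0=0.1, tau1=0.1, tau2=10.0):
    """Check p_a(z)/z^n in C_{tau0,tau1,tau2} at every grid point z = alpha e^{2 pi i k / M}."""
    n = len(a)
    M = M or 20 * n                          # O(n) grid points, as in Lemma F.1
    p = np.concatenate([[1.0], a])           # p_a(z) = z^n + a_1 z^{n-1} + ... + a_n
    for k in range(M):
        z = alpha * np.exp(2j * np.pi * k / M)
        if not in_cone(np.polyval(p, z) / z ** n, tau0, tau1, tau2):
            return False
    return True
```

For fixed z, p_a(z)/z^n is affine in a, so each grid constraint splits into linear constraints on a; this is why the projection onto B′_α can be phrased as a small convex program (solvable by linear programming, as noted below).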
We will first show that with M = O_τ(n), the set B′_{α,κ0,κ1,κ2} can be sandwiched between the two sets B_{α,τ0,τ1,τ2} and B_{α,τ′0,τ′1,τ′2}.

Lemma F.1 For any τ0 > τ′0, τ1 > τ′1, τ2 < τ′2, we have that for M = O_τ(n), there exist κ0, κ1, κ2, depending polynomially on the τ_i's and τ′_i's, such that

B_{α,τ0,τ1,τ2} ⊆ B′_{α,κ0,κ1,κ2} ⊆ B_{α,τ′0,τ′1,τ′2}.

Before proving the lemma, we demonstrate how to use it in our algorithm: we pick τ′0 = τ0/2, τ′1 = τ1/2 and τ′2 = 2τ2, and find the κ_i's guaranteed by the lemma above. Then we use B′_{α,κ0,κ1,κ2} as the projection set in the algorithm (instead of B_{α,τ0,τ1,τ2}). First of all, the ground-truth solution Θ is in the set B′_{α,κ0,κ1,κ2}. Moreover, since B′_{α,κ0,κ1,κ2} ⊆ B_{α,τ′0,τ′1,τ′2}, we are guaranteed that the iterates Θ̂ remain in the set B_{α,τ′0,τ′1,τ′2}, and therefore the quasi-convexity of the objective function still holds.9 Note that the set B′_{α,κ0,κ1,κ2} is described by O(n) linear constraints, and therefore we can use linear programming to solve the projection problem. Moreover, since the points on the grid form a Fourier basis, the fast Fourier transform can potentially be used to speed up the projection.

Finally, we will prove Lemma F.1. We need S. Bernstein's inequality for polynomials.

Theorem F.2 (Bernstein's inequality; see, for example, Schaeffer (1941)) Let p(z) be any polynomial of degree n with complex coefficients. Then

sup_{|z|≤1} |p′(z)| ≤ n sup_{|z|≤1} |p(z)|.

We will use the following corollary of Bernstein's inequality.

Corollary F.3 Let p(z) be any polynomial of degree n with complex coefficients. Then, for m ≥ 20n,

sup_{|z|≤1} |p′(z)| ≤ 2n sup_{k∈[m]} |p(e^{2πik/m})|.

Proof For simplicity, let τ̂ = sup_{|z|≤1} |p(z)| and τ = sup_{k∈[m]} |p(e^{2πik/m})|. If τ̂ ≤ 2τ, then we are done by Bernstein's inequality. Now let us assume τ̂ > 2τ. Suppose |p(z)| = τ̂ for some z with |z| = 1 (by the maximum principle). Then there exists k such that |z − e^{2πik/m}| ≤ 4/m and |p(e^{2πik/m})| ≤ τ.
Therefore, by the mean-value theorem, there exists ξ lying between z and e^{2πik/m} such that

|p′(ξ)| ≥ m(τ̂ − τ)/4 ≥ 1.1 n τ̂,

which contradicts Bernstein's inequality.

Lemma F.4 Suppose a polynomial p of degree n satisfies |p(w)| ≤ τ for every w = αe^{2iπk/m}, for some m ≥ 20n. Then for every z with |z| = α there exists w = αe^{2iπk/m} such that |p(z) − p(w)| ≤ O(nατ/m).

Footnote 9: with a slightly worse parameter, up to constant factors, since the τ′_i's differ from the τ_i's by constant factors.

Proof Let g(z) = p(αz), a polynomial of degree at most n. Then g′(z) = α p′(αz). Let w = αe^{2iπk/m} be such that |z − w| ≤ O(α/m). Then we have

|p(z) − p(w)| = |g(z/α) − g(w/α)| ≤ sup_{|x|≤1} |g′(x)| · |z − w|/α (by the mean-value theorem) ≤ O(nτ) |z − w| (by Corollary F.3) ≤ O(nατ/m).

Now we are ready to prove Lemma F.1.

Proof [Proof of Lemma F.1] We choose κ_i = (τ_i + τ′_i)/2. The first inclusion is trivial; we prove the second one. Consider a ∈ B′_{α,κ0,κ1,κ2}; we will show that a ∈ B_{α,τ′0,τ′1,τ′2}. Let q_a(z) = p_a(z^{−1}) z^n. By Lemma F.4, for every z with |z| = 1/α, there exists w = α^{−1} e^{2πik/M}, for some integer k, such that |q_a(z) − q_a(w)| ≤ O(τ2 n/(αM)). Therefore, letting M = cn for a sufficiently large constant c (which depends on the τ_i's), we have that for every z with |z| = 1/α, q_a(z) ∈ C_{τ′0,τ′1,τ′2}. This completes the proof.
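Finally, Algorithm 3 of Appendix E is compact enough to reproduce in full for the SISO case. The sketch below is our own reconstruction, not the authors' code: we keep the factor 2 from the squared loss explicit, read the gradient with respect to â off the last row of the gradient with respect to Â (whose last row is −a reversed, an assumed convention), and end with a finite-difference probe as a correctness check:

```python
import numpy as np

def companion(a):
    n = len(a)
    A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -np.asarray(a)[::-1]
    return A

def forward(a, C, D, x):
    # feed-forward pass: h_k = A h_{k-1} + B x_k, yhat_k = C h_k + D x_k
    A = companion(a); n = len(a)
    B = np.zeros(n); B[-1] = 1.0
    hs = np.zeros((len(x) + 1, n))
    for k, xt in enumerate(x, start=1):
        hs[k] = A @ hs[k - 1] + B * xt
    return hs, hs[1:] @ C + D * x

def loss_and_grads(a, C, D, x, y):
    # partial loss over t > T1 = T/4, plus back-propagation for the gradients
    T = len(x); T1 = T // 4; m = T - T1
    hs, yhat = forward(a, C, D, x)
    res = yhat - y
    res[:T1] = 0.0                       # only t > T1 enters the partial loss
    L = np.sum(res ** 2) / m
    A = companion(a); n = len(a)
    GA = np.zeros((n, n)); GC = np.zeros(n); GD = 0.0
    hbar = np.zeros(n)                   # hbar_{T+1} = 0
    for k in range(T, 0, -1):
        ybar = 2.0 * res[k - 1] / m      # dL/dyhat_k (zero for k <= T1)
        hbar = C * ybar + A.T @ hbar     # hbar_k = dL/dh_k
        GA += np.outer(hbar, hs[k - 1])
        GC += ybar * hs[k]
        GD += ybar * x[k - 1]
    grad_a = -GA[-1, ::-1]               # last row of A is -a reversed
    return L, grad_a, GC, GD

rng = np.random.default_rng(1)
a = np.array([0.3, -0.2, 0.1]); C = rng.standard_normal(3); D = 0.5
x = rng.standard_normal(200); y = rng.standard_normal(200)
L, ga, gc, gd = loss_and_grads(a, C, D, x, y)

# finite-difference probe on a_1
eps = 1e-6
ap = a.copy(); ap[0] += eps
Lp = loss_and_grads(ap, C, D, x, y)[0]
assert abs((Lp - L) / eps - ga[0]) < 1e-3 * max(1.0, abs(ga[0]))
```

The projection step of Algorithm 3 is omitted here; in the full algorithm â would be projected onto the grid set B′_α after each gradient update.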