# Probabilistic Linear Multistep Methods

Onur Teymur, Department of Mathematics, Imperial College London (o@teymur.uk)
Konstantinos Zygalakis, School of Mathematics, University of Edinburgh (k.zygalakis@ed.ac.uk)
Ben Calderhead, Department of Mathematics, Imperial College London (b.calderhead@imperial.ac.uk)

*30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.*

## Abstract

We present a derivation and theoretical investigation of the Adams-Bashforth and Adams-Moulton family of linear multistep methods for solving ordinary differential equations, starting from a Gaussian process (GP) framework. In the limit, this formulation coincides with the classical deterministic methods, which have been used as higher-order initial value problem solvers for over a century. Furthermore, the natural probabilistic framework provided by the GP formulation allows us to derive probabilistic versions of these methods, in the spirit of a number of other probabilistic ODE solvers presented in the recent literature [1, 2, 3, 4]. In contrast to higher-order Runge-Kutta methods, which require multiple intermediate function evaluations per step, Adams family methods make use of previous function evaluations, so that the increased accuracy arising from a higher-order multistep approach comes at very little additional computational cost. We show that through a careful choice of covariance function for the GP, the posterior mean and standard deviation over the numerical solution can be made to coincide exactly with the value given by the deterministic method and its local truncation error respectively. We provide a rigorous proof of the convergence of these new methods, as well as an empirical investigation (up to fifth order) demonstrating their convergence rates in practice.

## 1 Introduction

Numerical solvers for differential equations are essential tools in almost all disciplines of applied mathematics, due to the ubiquity of real-world phenomena described by such equations and the lack of exact solutions to all but the most trivial examples. The performance of the numerical solver (its speed, accuracy, stability and robustness) is of great relevance to the practitioner. This is particularly the case if the computational cost of accurate solutions is significant, either because of high model complexity or because a high number of repeated evaluations is required (which is typical if an ODE model is used as part of a statistical inference procedure, for example). A field of work has emerged which seeks to quantify this performance (or indeed lack of it) by modelling the numerical errors probabilistically, and thence trace the effect of the chosen numerical solver through the entire computational pipeline [5]. The aim is to be able to make meaningful quantitative statements about the uncertainty present in the resulting scientific or statistical conclusions.

Recent work in this area has resulted in the development of probabilistic numerical methods, first conceived in a very general way in [6]. A recent summary of the state of the field is given in [7]. The particular case of ODE solvers was first addressed in [8], then formalised and extended in [1, 2, 3], with a number of theoretical results recently given in [4]. The present paper modifies and extends the constructions in [1, 4] to the multistep case, improving the order of convergence of the method while avoiding the simplifying linearisation of the model required by the approaches of [2, 3].
Furthermore, we extend the convergence results in [4] to our proposed method, and give empirical results confirming the convergence rates, which point to the practical usefulness of our higher-order approach without significantly increasing computational cost.

### 1.1 Mathematical setup

We consider an initial value problem (IVP) defined by an ODE

$$\frac{\mathrm{d}}{\mathrm{d}t}\, y(t, \theta) = f\big(y(t, \theta), t\big), \qquad y(t_0, \theta) = y_0. \tag{1}$$

Here $y(\cdot, \theta) : \mathbb{R}_+ \to \mathbb{R}^d$ is the solution function, $f : \mathbb{R}^d \times \mathbb{R}_+ \to \mathbb{R}^d$ is the vector-valued function that defines the ODE, and $y_0 \in \mathbb{R}^d$ is a given vector called the initial value. The dependence of $y$ on an $m$-dimensional parameter $\theta \in \mathbb{R}^m$ is relevant if the aim is to incorporate the ODE into an inverse problem framework, where this parameter is of scientific interest. Bayesian inference under this setup (see [9]) is covered in most of the other treatments of this topic but is not the main focus of this paper; we therefore suppress $\theta$ for the sake of clarity.

Some technical conditions are required in order to justify the existence and uniqueness of solutions to (1). We assume that $f$ is evaluable point-wise given $y$ and $t$, that it satisfies the Lipschitz condition in $y$, namely $\|f(y_1, t) - f(y_2, t)\| \le L_f \|y_1 - y_2\|$ for some $L_f \in \mathbb{R}_+$ and all $t$, $y_1$ and $y_2$, and that it is continuous in $t$. These conditions imply the existence of a unique solution, by a classic result usually known as the Picard-Lindelöf theorem [10].

We consider a finite-dimensional discretisation of the problem, with our aim being to numerically generate an $N$-dimensional vector¹ $y_{1:N}$ approximating the true solution $y(t_{1:N})$ in an appropriate sense. Following [1], we consider the joint distribution of $y_{1:N}$ and the auxiliary variables $f_{0:N}$ (obtained by evaluating the function $f$), with each $y_i$ obtained by sequentially conditioning on previous evaluations of $f$. A basic requirement is that the marginal mean of $y_{1:N}$ should correspond to some deterministic iterative numerical method operating on the grid $t_{1:N}$. In our case this will be a linear multistep method (LMM) of specified type.²

Firstly we telescopically factorise the joint distribution as follows:

$$p(y_{1:N}, f_{0:N} \mid y_0) = p(f_0 \mid y_0) \prod_{i=0}^{N-1} p(y_{i+1} \mid y_{0:i}, f_{0:i})\, p(f_{i+1} \mid y_{0:i+1}, f_{0:i}) \tag{2}$$

We can now make simplifying assumptions about the constituent distributions. Firstly, since we have assumed that $f$ is evaluable point-wise given $y$ and $t$,

$$p(f_i \mid y_i, \dots) = p(f_i \mid y_i) = \delta_{f(y_i, t_i)}(f_i), \tag{3}$$

which is a Dirac-delta measure, equivalent to simply performing this evaluation deterministically. Secondly, we assume a finite moving window of dependence for each new state; in other words, $y_{i+1}$ is only allowed to depend on $y_i$ and $f_i, f_{i-1}, \dots, f_{i-(s-1)}$ for some $s \in \mathbb{N}$. This corresponds to the inputs used at each iteration of the $s$-step Adams-Bashforth method. For $i < s$ we will assume dependence on only those derivative evaluations up to $i$; this initialisation detail is discussed briefly in Section 4. Strictly speaking, $f_N$ is superfluous to our requirements (since we already have $y_N$) and thus we can rewrite (2) as

$$p(y_{1:N}, f_{0:N-1} \mid y_0) = \prod_{i=0}^{N-1} p(f_i \mid y_i)\, p(y_{i+1} \mid y_i, f_{\max(0,\, i-s+1):i}) \tag{4}$$

$$= \prod_{i=0}^{N-1} \delta_{f(y_i, t_i)}(f_i)\, p(y_{i+1} \mid y_i, f_{\max(0,\, i-s+1):i}) \tag{5}$$

The conditional distributions $p(y_{i+1} \mid y_i, f_{\max(0,\, i-s+1):i})$ are the primary objects of our study; we will define them by constructing a particular Gaussian process prior over all variables, then identifying the appropriate (Gaussian) conditional distribution.

Note that a simple modification to the decomposition (2) allows the same set-up to generate an $(s+1)$-step Adams-Moulton iterator³, the implicit multistep method where $y_{i+1}$ depends in addition on $f_{i+1}$. At various stages of this paper this extension is noted but omitted for reasons of space; the collected results are given in Appendix C.

¹The notation $y_{0:N}$ denotes the vector $(y_0, \dots, y_N)$, and analogously $t_{0:N}$, $f_{0:N}$ etc.

²We argue that the connection to some specific deterministic method is a desirable feature, since it aids interpretability and allows much of the well-developed theory of IVP solvers to be inherited by the probabilistic solver. This is a particular strength of the formulation in [4], which was lacking in all previous works.

³The convention is that the number of steps is equal to the total number of derivative evaluations used in each iteration, hence the $s$-step AB and $(s+1)$-step AM methods both go equally "far back".
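The factorisation (4)-(5) already dictates the shape of the eventual algorithm: each step performs one deterministic evaluation of $f$ and then draws $y_{i+1}$ from a Gaussian conditional. The following Python sketch (our illustration, not code from the paper; the callable `conditional` is a placeholder for the Gaussian conditional derived in Section 2) makes this generative structure explicit.

```python
import numpy as np

def probabilistic_lmm(f, y0, t, s, conditional, rng=np.random.default_rng()):
    """Generative loop for (4)-(5). `t` is an equispaced grid;
    `conditional(y_i, f_window, h) -> (mean, sd)` supplies the Gaussian step."""
    y, fs = [np.asarray(y0, float)], []
    for i in range(len(t) - 1):
        fs.append(f(y[i], t[i]))                # p(f_i | y_i): the delta measure (3)
        window = fs[max(0, i - s + 1): i + 1]   # finite window of dependence
        mean, sd = conditional(y[i], window, t[i + 1] - t[i])
        y.append(rng.normal(mean, sd))          # draw from p(y_{i+1} | y_i, window)
    return np.array(y)

# With a one-step window and zero variance this collapses to deterministic Euler:
path = probabilistic_lmm(lambda y, t: -y, 1.0, np.linspace(0, 1, 101), 1,
                         lambda yi, fw, h: (yi + h * fw[-1], 0.0))
```

Proposition 2 below supplies a concrete mean and standard deviation for this placeholder, and a full integrator is sketched in Section 4.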
### 1.2 Linear multistep methods

We give a very short summary of Adams family LMMs and their conventional derivation via interpolating polynomials. For a fuller treatment of this well-studied topic we refer the reader to the comprehensive references [10, 11, 12]. Using the usual notation, we write $y_i$ for the numerical estimate of the true solution $y(t_i)$, and $f_i$ for the estimate of $f(t_i) \equiv y'(t_i)$.

The classic $s$-step Adams-Bashforth method calculates $y_{i+1}$ by constructing the unique polynomial $P_i(\omega) \in \mathbb{P}_{s-1}$ interpolating the points $\{f_{i-j}\}_{j=0}^{s-1}$. This is given by Lagrange's method as

$$P_i(\omega) = \sum_{j=0}^{s-1} \ell_j^{0:s-1}(\omega)\, f_{i-j}, \qquad \ell_j^{0:s-1}(\omega) = \prod_{\substack{k=0 \\ k \ne j}}^{s-1} \frac{\omega - t_{i-k}}{t_{i-j} - t_{i-k}}. \tag{6}$$

The $\ell_j^{0:s-1}(\omega)$ are known as Lagrange polynomials, have the property that $\ell_p^{0:s-1}(t_{i-q}) = \delta_{pq}$, and form a basis for the space $\mathbb{P}_{s-1}$ known as the Lagrange basis. The Adams-Bashforth iteration then proceeds by writing the integral version of (1) as

$$y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(y, t)\, \mathrm{d}t$$

and approximating the function under the integral by the extrapolated interpolating polynomial, to give (measuring time relative to $t_i = 0$)

$$y_{i+1} = y_i + \int_0^h P_i(\omega)\, \mathrm{d}\omega = y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB}\, f_{i-j}, \tag{7}$$

where $h = t_{i+1} - t_i$ and the $\beta_{j,s}^{AB} \equiv h^{-1} \int_0^h \ell_j^{0:s-1}(\omega)\, \mathrm{d}\omega$ are the Adams-Bashforth coefficients for order $s$, all independent of $h$ and summing to 1. Note that if $f$ is a polynomial of degree $s-1$ (so $y(t)$ is a polynomial of degree $s$), this procedure will give the next solution value exactly. Otherwise the extrapolation error in $f_{i+1}$ is of order $O(h^s)$ and in $y_{i+1}$ (after an integration) is of order $O(h^{s+1})$. So the local truncation error is $O(h^{s+1})$ and the global error $O(h^s)$ [10].

Adams-Moulton methods are similar, except that the polynomial $Q_i(\omega) \in \mathbb{P}_s$ interpolates the $s+1$ points $\{f_{i-j}\}_{j=-1}^{s-1}$. The resulting equation analogous to (7) is thus an implicit one, with the unknown $y_{i+1}$ appearing on both sides. Typically AM methods are used in conjunction with an AB method of one order lower, in a predictor-corrector arrangement. Here, a predictor value $y^*_{i+1}$ is calculated using an AB step; this is then used to estimate $f^*_{i+1} = f(y^*_{i+1}, t_{i+1})$; and finally an AM step uses this value to calculate $y_{i+1}$. We again refer the reader to Appendix C for details of the AM construction.

## 2 Derivation of Adams family LMMs via Gaussian processes

We now consider a formulation of the Adams-Bashforth family starting from a Gaussian process framework, and then present a probabilistic extension. We fix a joint Gaussian process prior over $y_{i+1}, y_i, f_i, f_{i-1}, \dots, f_{i-s+1}$ as follows. We define two vectors of functions $\phi(\omega)$ and $\Phi(\omega)$ in terms of the Lagrange polynomials $\ell_j^{0:s-1}(\omega)$ defined in (6) as

$$\phi(\omega) = \Big(0,\ \ell_0^{0:s-1}(\omega),\ \ell_1^{0:s-1}(\omega),\ \dots,\ \ell_{s-1}^{0:s-1}(\omega)\Big)^T \tag{8}$$

$$\Phi(\omega) = \left(1,\ \int_0^\omega \ell_0^{0:s-1}(\tau)\, \mathrm{d}\tau,\ \int_0^\omega \ell_1^{0:s-1}(\tau)\, \mathrm{d}\tau,\ \dots,\ \int_0^\omega \ell_{s-1}^{0:s-1}(\tau)\, \mathrm{d}\tau\right)^T \tag{9}$$

The elements (excluding the first) of $\phi(\omega)$ form a basis for $\mathbb{P}_{s-1}$, and the elements of $\Phi(\omega)$ form a basis for $\mathbb{P}_s$.
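The coefficients $\beta_{j,s}^{AB}$ in (7) are straightforward to generate symbolically from the same Lagrange polynomials that populate $\phi$ and $\Phi$ above. The following sympy sketch (ours, for illustration) integrates the Lagrange basis of (6) over one step and recovers, for example, the familiar third-order coefficients $(23/12, -4/3, 5/12)$.

```python
import sympy as sp

def ab_coefficients(s):
    """Adams-Bashforth coefficients beta_{j,s} = h^-1 * integral of l_j over [0, h]."""
    w, h = sp.symbols('omega h', positive=True)
    nodes = [-j * h for j in range(s)]           # t_i = 0, t_{i-1} = -h, ...
    betas = []
    for j in range(s):
        # Lagrange basis polynomial l_j^{0:s-1}(omega) over the s nodes, as in (6)
        l_j = sp.prod([(w - nodes[k]) / (nodes[j] - nodes[k])
                       for k in range(s) if k != j])
        betas.append(sp.simplify(sp.integrate(l_j, (w, 0, h)) / h))
    return betas                                 # independent of h and summing to 1

print(ab_coefficients(3))   # [23/12, -4/3, 5/12]
```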
The initial 0 in $\phi(\omega)$ is necessary to make the dimensions of the two vectors equal, so that we can correctly define products such as $\Phi(\omega)^T \phi(\omega)$, which will be required later. The first element of $\Phi(\omega)$ can be any non-zero constant $C$; the analysis later is unchanged and we therefore take $C = 1$. Since we will solely be interested in values of the argument $\omega$ corresponding to discrete equispaced time-steps $t_j - t_{j-1} = h$, indexed relative to the current time-point $t_i = 0$, we make our notation more concise by writing $\phi_{i+k}$ for $\phi(t_{i+k})$, and similarly $\Phi_{i+k}$ for $\Phi(t_{i+k})$. We now use these vectors of basis functions to define a joint Gaussian process prior as follows:

$$\begin{pmatrix} y_{i+1} \\ y_i \\ f_i \\ \vdots \\ f_{i-s+1} \end{pmatrix} \sim \mathcal{N}\left(\mathbf{0},\ \begin{pmatrix} \Phi_{i+1}^T \Phi_{i+1} & \Phi_{i+1}^T \Phi_i & \Phi_{i+1}^T \phi_i & \cdots & \Phi_{i+1}^T \phi_{i-s+1} \\ \Phi_i^T \Phi_{i+1} & \Phi_i^T \Phi_i & \Phi_i^T \phi_i & \cdots & \Phi_i^T \phi_{i-s+1} \\ \phi_i^T \Phi_{i+1} & \phi_i^T \Phi_i & \phi_i^T \phi_i & \cdots & \phi_i^T \phi_{i-s+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \phi_{i-s+1}^T \Phi_{i+1} & \phi_{i-s+1}^T \Phi_i & \phi_{i-s+1}^T \phi_i & \cdots & \phi_{i-s+1}^T \phi_{i-s+1} \end{pmatrix}\right) \tag{10}$$

This construction works because $y' = f$ and differentiation is a linear operator; the rules for the transformation of the covariance elements are given in Section 9.4 of [13] and can easily be seen to correspond to the defined relationship between $\phi(\omega)$ and $\Phi(\omega)$. Recalling the decomposition in (5), we are interested in the conditional distribution $p(y_{i+1} \mid y_i, f_{i-s+1:i})$. This is also Gaussian, with mean and covariance given by the standard formulae for Gaussian conditioning. This construction now allows us to state the following result:

**Proposition 1.** *The conditional distribution $p(y_{i+1} \mid y_i, f_{i-s+1:i})$ under the Gaussian process prior given in (10), with covariance kernel basis functions as in (8) and (9), is a $\delta$-measure concentrated on the $s$-step Adams-Bashforth predictor $y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j}$.*

The proof of this proposition is given in Appendix A.

Because of the natural probabilistic structure provided by the Gaussian process framework, we can augment the basis function vectors $\phi(\omega)$ and $\Phi(\omega)$ to generate a conditional distribution for $y_{i+1}$ that has non-zero variance. By choosing a particular form for this augmented basis we can obtain an expression for the standard deviation of $y_{i+1}$ that is exactly equal to the leading-order local truncation error of the corresponding deterministic method. We expand the vectors $\phi(\omega)$ and $\Phi(\omega)$ by one component, chosen so that the new vector comprises elements that span a polynomial space of order one greater than before. Define the augmented bases $\phi^+(\omega)$ and $\Phi^+(\omega)$ as

$$\phi^+(\omega) = \Big(0,\ \ell_0^{0:s-1}(\omega),\ \dots,\ \ell_{s-1}^{0:s-1}(\omega),\ \alpha h^s\, \ell_{-1}^{-1:s-1}(\omega)\Big)^T \tag{11}$$

$$\Phi^+(\omega) = \left(1,\ \int_0^\omega \ell_0^{0:s-1}(\tau)\, \mathrm{d}\tau,\ \dots,\ \int_0^\omega \ell_{s-1}^{0:s-1}(\tau)\, \mathrm{d}\tau,\ \alpha h^s \int_0^\omega \ell_{-1}^{-1:s-1}(\tau)\, \mathrm{d}\tau\right)^T \tag{12}$$

The additional term at the end of $\phi^+(\omega)$ is the polynomial of order $s$ which arises from interpolating $f$ at $s+1$ points (with the additional point at $t_{i+1}$) and choosing the basis function $\ell_{-1}^{-1:s-1}$ corresponding to the root at $t_{i+1}$, scaled by $h^s$ and a positive constant $\alpha$ whose role will be explained in the next section. The elements of these vectors span $\mathbb{P}_s$ and $\mathbb{P}_{s+1}$ respectively. With this new basis we can give the following result:

**Proposition 2.** *The conditional distribution $p(y_{i+1} \mid y_i, f_{i-s+1:i})$ under the Gaussian process prior given in (10), with covariance kernel basis functions as in (11) and (12), is Gaussian with mean equal to the $s$-step Adams-Bashforth predictor $y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j}$ and, setting $\alpha = y^{(s+1)}(\eta)$ for some $\eta \in (t_{i-s+1}, t_{i+1})$, standard deviation equal to its local truncation error.*

The proof is given in Appendix B.
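Propositions 1 and 2 can also be checked numerically by brute-force Gaussian conditioning, without the analytic simplifications used in the proofs. The sketch below is our code, under the assumption that $\ell_{-1}^{-1:s-1}$ denotes the Lagrange basis function attached to the node $t_{i+1}$: it builds the augmented bases (11)-(12) for $s = 3$, assembles the covariance matrix of (10), and confirms that the conditional mean is the AB3 predictor while the conditional standard deviation is $3\alpha h^4/8$, the AB3 local truncation error with $y^{(4)}(\eta)$ replaced by $\alpha$.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

s, h, alpha = 3, 0.1, 2.0                       # arbitrary test values
nodes = h * (1.0 - np.arange(s + 1))            # t_{i+1} = h, t_i = 0, ..., t_{i-s+1}

def lagrange(pts, j):
    """Lagrange basis polynomial equal to 1 at pts[j] and 0 at the other points."""
    q = P.fromroots(np.delete(pts, j))
    return q / q(pts[j])

elems = [lagrange(nodes[1:], j) for j in range(s)]     # l_j^{0:s-1}
elems.append(alpha * h**s * lagrange(nodes, 0))        # alpha h^s l_{-1}^{-1:s-1}
phi = lambda w: np.array([0.0] + [e(w) for e in elems])
Phi = lambda w: np.array([1.0] + [e.integ()(w) for e in elems])  # integral from 0

# Basis vectors for (y_{i+1}, y_i, f_i, f_{i-1}, f_{i-2}); covariance as in (10)
V = [Phi(h), Phi(0.0), phi(0.0), phi(-h), phi(-2 * h)]
K = np.array([[u @ v for v in V] for u in V])

obs = np.array([1.3, 0.7, 0.65, 0.6])           # test values y_i, f_i, f_{i-1}, f_{i-2}
mean = K[0, 1:] @ np.linalg.solve(K[1:, 1:], obs)
sd = np.sqrt(K[0, 0] - K[0, 1:] @ np.linalg.solve(K[1:, 1:], K[1:, 0]))

yi, fi, f1, f2 = obs
print(np.isclose(mean, yi + h * (23*fi - 16*f1 + 5*f2) / 12),  # AB3 predictor: True
      np.isclose(sd, 3 * alpha * h**4 / 8))                    # AB3 LTE with alpha: True
```

The check also exposes why the result holds: by the Lagrange property, $\Phi^+(0)$ and the $\phi^+(t_{i-j})$ are standard unit vectors, so the Gram matrix of the conditioned variables is the identity and the conditional variance reduces to the square of the final component of $\Phi^+(h)$.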
In order to de-mystify the construction, we now exhibit a concrete example for the case $s = 3$. The conditional distribution of interest is $p(y_{i+1} \mid y_i, f_i, f_{i-1}, f_{i-2}) \equiv p(y_{i+1} \mid y_i, f_{i:i-2})$. In the deterministic case, the vectors of basis functions become

$$\phi(\omega) = \left(0,\ \frac{(\omega+h)(\omega+2h)}{2h^2},\ -\frac{\omega(\omega+2h)}{h^2},\ \frac{\omega(\omega+h)}{2h^2}\right)^T$$

$$\Phi(\omega) = \left(1,\ \frac{\omega(2\omega^2+9h\omega+12h^2)}{12h^2},\ -\frac{\omega^2(\omega+3h)}{3h^2},\ \frac{\omega^2(2\omega+3h)}{12h^2}\right)^T$$

and simple calculations give that

$$\mathbb{E}(y_{i+1} \mid y_i, f_{i:i-2}) = y_i + \frac{h}{12}\big(23 f_i - 16 f_{i-1} + 5 f_{i-2}\big), \qquad \operatorname{Var}(y_{i+1} \mid y_i, f_{i:i-2}) = 0.$$

The probabilistic version follows by setting

$$\phi^+(\omega) = \left(0,\ \frac{(\omega+h)(\omega+2h)}{2h^2},\ -\frac{\omega(\omega+2h)}{h^2},\ \frac{\omega(\omega+h)}{2h^2},\ \frac{\alpha\,\omega(\omega+h)(\omega+2h)}{6}\right)^T$$

$$\Phi^+(\omega) = \left(1,\ \frac{\omega(2\omega^2+9h\omega+12h^2)}{12h^2},\ -\frac{\omega^2(\omega+3h)}{3h^2},\ \frac{\omega^2(2\omega+3h)}{12h^2},\ \frac{\alpha\,\omega^2(\omega+2h)^2}{24}\right)^T$$

and further calculation shows that

$$\mathbb{E}(y_{i+1} \mid y_i, f_{i:i-2}) = y_i + \frac{h}{12}\big(23 f_i - 16 f_{i-1} + 5 f_{i-2}\big), \qquad \operatorname{Var}(y_{i+1} \mid y_i, f_{i:i-2}) = \left(\frac{3\alpha h^4}{8}\right)^2.$$

An entirely analogous argument can be shown to reproduce and probabilistically extend the implicit Adams-Moulton scheme. The Gaussian process prior now includes $f_{i+1}$ as an additional variable, and the correlation structure and vectors of basis functions are modified accordingly. The required modifications are given in Appendix C, and an explicit derivation for the 4-step AM method is given in Appendix D.

### 2.1 The role of $\alpha$

Replacing $\alpha$ in (11) by $y^{(s+1)}(\eta)$, with $\eta \in (t_{i-s+1}, t_{i+1})$, makes the variance of the integrator coincide exactly with the local truncation error of the underlying deterministic method.⁴ This is of course of limited utility unless higher derivatives of $y(t)$ are available, and even if they are, $\eta$ is itself unknowable in general. However, it is possible to estimate the integrator variance in a systematic way by using backward difference approximations [14] to the required derivative at $t_{i+1}$. We show this by expanding the $s$-step Adams-Bashforth iterator as

$$\begin{aligned} y_{i+1} &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, y^{(s+1)}(\eta), \qquad \eta \in [t_{i-s+1}, t_{i+1}] \\ &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, y^{(s+1)}(t_{i+1}) + O(h^{s+2}) \\ &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, f^{(s)}(t_{i+1}) + O(h^{s+2}) \qquad \text{since } y' = f \\ &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB} \left( h^{-s} \sum_{k=0}^{s-1+p} \delta_{k,s-1+p}\, f_{i-k} + O(h^p) \right) \\ &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h\, C_s^{AB} \sum_{k=0}^{s} \delta_{k,s}\, f_{i-k} + O(h^{s+2}) \qquad \text{if we set } p = 1 \end{aligned} \tag{13}$$

where the $\beta_{\cdot,s}^{AB}$ are the set of coefficients and $C_s^{AB}$ the local truncation error constant for the $s$-step Adams-Bashforth method, and the $\delta_{\cdot,s-1+p}$ are the set of backward difference coefficients for estimating the $s$th derivative of $f$ to order $O(h^p)$ [14]. In other words, the constant $\alpha$ can be substituted with $h^{-s} \sum_{k=0}^{s} \delta_{k,s}\, f_{i-k}$, using already-available function values and to adequate order.

It is worth noting that collecting the coefficients $\beta_{\cdot,s}^{AB}$ and $\delta_{\cdot,s}$ results in an expression equivalent to the Adams-Bashforth method of order $s+1$; therefore, this procedure in effect employs two integrators of different orders and estimates the truncation error from the difference of the two.⁵ This principle is similar to the classical Milne device [12], which pairs an AB and an AM iterator to achieve the same thing. Using the Milne device to generate a value for the error variance is also straightforward within our framework, but requires two evaluations of $f$ at each iteration (one of which immediately goes to waste), whereas the approach presented here requires only one.

⁴We do not claim that this is the only possible way of modelling the numerical error in the solver. The question of how to do this accurately is an open problem in general, and is particularly challenging in the multi-dimensional case. In many real-world problems different noise scales will be appropriate for different dimensions, and especially in hierarchical models arising from higher-order ODEs non-Gaussian noise is to be expected. That said, the Gaussian assumption as a first-order approximation for numerical error is present in virtually all work on this subject and goes all the way back to [8]. We adopt this premise throughout, whilst noting this interesting unresolved issue.

⁵An explicit derivation of this for $s = 3$ is given in Appendix E.
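To make the substitution concrete: for $s = 3$ and $p = 1$ the backward difference coefficients are $\delta_{k,3} = (1, -3, 3, -1)$, so $\alpha \approx h^{-3}(f_i - 3f_{i-1} + 3f_{i-2} - f_{i-3})$. A small check (our sketch; the test function is an arbitrary stand-in for $t \mapsto f(y(t), t)$) confirms the estimate.

```python
import numpy as np
from math import comb

s, h, t_i = 3, 0.01, 1.0
f = np.sin                                       # arbitrary stand-in for t -> f(y(t), t)
f_hist = f(t_i - h * np.arange(s + 1))           # stored values f_i, f_{i-1}, ..., f_{i-3}

delta = np.array([(-1)**k * comb(s, k) for k in range(s + 1)])  # (1, -3, 3, -1)
alpha_hat = (delta @ f_hist) / h**s              # estimates f'''(t_i) to O(h)

print(alpha_hat, -np.cos(t_i))                   # approximately equal (O(h) accurate)
```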
## 3 Convergence of the probabilistic Adams-Bashforth integrator

We now give the main result of our paper, which demonstrates that the convergence properties of the probabilistic Adams-Bashforth integrator match those of its deterministic counterpart.

**Theorem 3.** *Consider the $s$-step deterministic Adams-Bashforth integrator given in Proposition 1, which is of order $s$. Then the probabilistic integrator constructed in Proposition 2 has the same mean square error as its deterministic counterpart. In particular,*

$$\max_{0 \le kh \le T} \mathbb{E}\,|Y_k - y_k|^2 \le K h^{2s},$$

*where $Y_k \equiv y(t_k)$ denotes the true solution, $y_k$ the numerical solution, and $K$ is a positive real number depending on $T$ but independent of $h$.*

The proof of this theorem is given in Appendix F, and follows a similar line of reasoning to that given for a one-step probabilistic Euler integrator in [4]. In particular, we deduce the convergence of the algorithm by extrapolating from the local error. The additional complexity arises due to the presence of the stochastic part, which means we cannot rely directly on the theory of difference equations and the representations of their solutions. Instead, following [15], we rewrite the defining $s$-step recurrence equation as a one-step recurrence equation in a higher-dimensional space.

## 4 Implementation

We now have an implementable algorithm for an $s$-step probabilistic Adams-Bashforth integrator. Firstly, an accurate initialisation is required for the first $s$ iterations; this can be achieved with, for example, a Runge-Kutta method of sufficiently high order.⁶ Secondly, at iteration $i$, the preceding $s$ stored function evaluations are used to find the posterior mean and variance of $y_{i+1}$. The integrator then advances by generating a realisation of the posterior measure derived in Proposition 2. Following [1], a Monte Carlo repetition of this procedure with different random seeds can then be used as an effective way of generating propagated uncertainty estimates at any time $0 < T < \infty$.

⁶We use a (packaged) adaptive Runge-Kutta-Fehlberg solver of 7th order with 8th order error control.
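A compact sketch of this procedure follows. This is our implementation of the description above, not the authors' code: we substitute a classical RK4 initialiser for the RKF7(8) solver of footnote 6, and the constants $C_s^{AB}$ are the standard Adams-Bashforth truncation error constants. Each step draws $y_{i+1}$ from a Gaussian with the AB mean and standard deviation $C_s^{AB} h^{s+1} |\hat\alpha|$, with $\hat\alpha$ estimated by the backward difference of Section 2.1.

```python
import numpy as np
from math import comb

AB_BETA = {1: [1.0], 2: [3/2, -1/2], 3: [23/12, -4/3, 5/12],
           4: [55/24, -59/24, 37/24, -3/8],
           5: [1901/720, -1387/360, 109/30, -637/360, 251/720]}
AB_C = {1: 1/2, 2: 5/12, 3: 3/8, 4: 251/720, 5: 95/288}   # LTE constants C_s^AB

def rk4_step(f, y, t, h):
    """Classical RK4 step; a stand-in for the RKF7(8) initialiser in footnote 6."""
    k1 = f(y, t); k2 = f(y + h/2*k1, t + h/2)
    k3 = f(y + h/2*k2, t + h/2); k4 = f(y + h*k3, t + h)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def prob_ab(f, y0, t, s, rng=np.random.default_rng(0)):
    """s-step probabilistic Adams-Bashforth integrator on an equispaced grid t."""
    h = t[1] - t[0]
    y = np.empty((len(t),) + np.shape(y0)); y[0] = y0
    fs = [f(y[0], t[0])]
    delta = np.array([(-1)**k * comb(s, k) for k in range(s + 1)], float)
    for i in range(len(t) - 1):
        if i < s:                                  # initialisation phase
            y[i+1] = rk4_step(f, y[i], t[i], h)
        else:
            mean = y[i] + h * sum(b * fs[i-j] for j, b in enumerate(AB_BETA[s]))
            alpha_hat = sum(d * fs[i-k] for k, d in enumerate(delta)) / h**s
            sd = AB_C[s] * h**(s+1) * np.abs(alpha_hat)
            y[i+1] = rng.normal(mean, sd)          # realisation of Proposition 2
        fs.append(f(y[i+1], t[i+1]))
    return y

# Monte Carlo ensemble for the Chua system of Section 4.1 below (cf. Figure 1)
a, b, g, h1, h3 = 1.4157, 0.02944201, 0.322673579, 0.0197557699, 0.0609273571
chua = lambda u, t: np.array([a*(u[1] - (1 + h1)*u[0] - h3*u[0]**3),
                              u[0] - u[1] + u[2], -b*u[1] - g*u[2]])
t = np.arange(0.0, 1000.0, 0.01)
paths = [prob_ab(chua, np.array([0.0, 0.003, 0.005]), t, s=3,
                 rng=np.random.default_rng(seed)) for seed in range(20)]
```

Note that the stochastic perturbation enters only after the initialisation phase, and its scale $h^{s+1}$ matches the local truncation error, which is what Theorem 3 requires for the global rate to be preserved.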
### 4.1 Example: Chua circuit

The Chua circuit [16] is the simplest electronic circuit that exhibits chaotic behaviour, and has been the subject of extensive study in both the mathematics and electronics communities for over 30 years. Readers interested in this rich topic are directed to [17] and the references therein. The defining characteristic of chaotic systems is their unpredictable long-term sensitivity to tiny changes in initial conditions, which also manifests itself in the sudden amplification of error introduced by any numerical scheme. It is therefore of interest to understand the limitations of a given numerical method applied to such a problem, namely the point at which the solution can no longer be taken to be a meaningful approximation of the ground truth. Probabilistic integrators allow us to do this in a natural way [1].

The Chua system is given by

$$x' = \alpha\big(y - (1 + h_1)x - h_3 x^3\big), \qquad y' = x - y + z, \qquad z' = -\beta y - \gamma z.$$

We use parameter values $\alpha = 1.4157$, $\beta = 0.02944201$, $\gamma = 0.322673579$, $h_1 = 0.0197557699$, $h_3 = 0.0609273571$ and initial conditions $x_0 = 0$, $y_0 = 0.003$, $z_0 = 0.005$. This particular choice is taken from Attractor CE96 in [18].

Using the probabilistic version of the Adams-Bashforth integrator with $s > 1$, it is possible to delay the point at which the numerical path diverges from the truth, with effectively no additional evaluations of $f$ required compared to the one-step method. This is demonstrated in Figure 1.

**Figure 1:** Time series for the $x$-component in the Chua circuit model described in Section 4.1, solved 20 times for $0 \le t \le 1000$ using an $s$-step probabilistic AB integrator with $s = 1$ (top), $s = 3$ (middle), $s = 5$ (bottom). Step-size remains $h = 0.01$ throughout. Wall-clock time for each simulation was close to constant (within about 10 per cent; the difference is primarily accounted for by the RKF initialisation procedure).

Our approach is therefore able to combine the benefits of classical higher-order methods with the additional insight into solution uncertainty provided by a probabilistic method.

### 4.2 Example: Lotka-Volterra model

We now apply the probabilistic integrator to a simple periodic predator-prey model given by the system

$$x' = \alpha x - \beta x y, \qquad y' = \gamma x y - \delta y$$

for parameters $\alpha = 1$, $\beta = 0.3$, $\gamma = 1$ and $\delta = 0.7$. We demonstrate the convergence behaviour stated in Theorem 3 empirically.

The left-hand plot in Figure 2 shows the sample mean of the absolute error of 200 realisations of the probabilistic integrator plotted against step-size, on a log-log scale. The differing orders of convergence of the probabilistic integrators are easily deduced from the slopes of the lines shown. The right-hand plot shows the actual error value (no logarithm or absolute value taken) of the same 200 realisations, plotted individually against step-size. This plot shows that the error in the one-step integrator is consistently positive, whereas for the two- and three-step integrators it is approximately centred around 0. (This is also visible in the same data if the plot is zoomed to examine the range with small $h$ more closely.) Though this phenomenon can be expected to be somewhat problem-dependent, it is certainly an interesting observation which may have implications for bias reduction in a Bayesian inverse problem setting.

**Figure 2:** Empirical error analysis for the $x$-component of 200 realisations of the probabilistic AB integrator as applied to the Lotka-Volterra model described in Section 4.2. The left-hand plot ($\log_{10} \mathbb{E}|\text{Error}|$ against $\log_{10} h$) shows the convergence rates for AB integrators of orders 1-5, while the right-hand plot (error against $\log_{10} h$, grouped by the number of steps $s$) shows the distribution of error around zero for integrators of orders 1-3.
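The experiment behind Figure 2 can be reproduced in outline with the integrator sketched in Section 4 (again our code, reusing `prob_ab` and `rk4_step` from that sketch; the initial condition and ensemble size below are our choices, as they are not stated in the paper text above). The fitted log-log slopes should approach the theoretical orders of Theorem 3.

```python
import numpy as np

lv = lambda u, t: np.array([u[0] - 0.3*u[0]*u[1], u[0]*u[1] - 0.7*u[1]])
u0, T = np.array([1.0, 1.0]), 10.0                 # initial condition: our choice

# Fine deterministic reference solution via repeated RK4 steps
ref = np.array(u0)
for tk in np.arange(0.0, T, 1e-4):
    ref = rk4_step(lv, ref, tk, 1e-4)

for s in (1, 2, 3):
    hs, errs = [0.05, 0.025, 0.0125], []
    for h in hs:
        t = np.arange(0.0, T + 1e-9, h)
        runs = [prob_ab(lv, u0, t, s, np.random.default_rng(r))[-1, 0]
                for r in range(20)]                # 20 realisations per step-size
        errs.append(np.mean(np.abs(np.array(runs) - ref[0])))
    slope = np.polyfit(np.log10(hs), np.log10(errs), 1)[0]
    print(f"s = {s}: observed order ~ {slope:.2f}")  # should be close to s
```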
## 5 Conclusion

We have given a derivation of the Adams-Bashforth and Adams-Moulton families of linear multistep ODE integrators, making use of a Gaussian process framework, which we then extend to develop their probabilistic counterparts. We have shown that the derived family of probabilistic integrators results in a posterior mean at each step that exactly coincides with the corresponding deterministic integrator, with the posterior standard deviation equal to the deterministic method's local truncation error. We have given the general forms of the construction of these new integrators to arbitrary order. Furthermore, we have investigated their theoretical properties and provided a rigorous proof of their rates of convergence. Finally, we have demonstrated the use and computational efficiency of probabilistic Adams-Bashforth methods by implementing the solvers up to fifth order, providing example solutions of a chaotic system, and empirically verifying the convergence rates in a Lotka-Volterra model.

We hope the ideas presented here will add to the arsenal of any practitioner who uses numerical methods in their scientific analyses, and contribute a further tool to the emerging field of probabilistic numerical methods.

## Acknowledgements

KZ was partially supported by a grant from the Simons Foundation and by the Alan Turing Institute under the EPSRC grant EP/N510129/1. Part of this work was done during the author's stay at the Newton Institute for the programme Stochastic Dynamical Systems in Biology: Numerical Methods and Applications.

## References

[1] O. A. Chkrebtii, D. A. Campbell, B. Calderhead, and M. A. Girolami. Bayesian Solution Uncertainty Quantification for Differential Equations. *Bayesian Analysis*, 2016.

[2] P. Hennig and S. Hauberg. Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics. In *Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS)*, volume 33 of JMLR W&CP, 2014.

[3] M. Schober, D. K. Duvenaud, and P. Hennig. Probabilistic ODE Solvers with Runge-Kutta Means. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, *Advances in Neural Information Processing Systems 27*, pages 739-747. Curran Associates, Inc., 2014.

[4] P. R. Conrad, M. Girolami, S. Särkkä, A. Stuart, and K. Zygalakis. Statistical Analysis of Differential Equations: Introducing Probability Measures on Numerical Solutions. *Statistics and Computing*, 2016.

[5] M. C. Kennedy and A. O'Hagan. Bayesian Calibration of Computer Models. *Journal of the Royal Statistical Society: Series B*, 63(3):425-464, 2001.

[6] P. Diaconis. Bayesian Numerical Analysis. In J. Berger and S. Gupta, editors, *Statistical Decision Theory and Related Topics IV*, volume 1, pages 163-175. Springer, 1988.

[7] P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic Numerics and Uncertainty in Computations. *Proceedings of the Royal Society A*, 471(2179):20150142, 2015.

[8] J. Skilling. Bayesian Numerical Analysis. In W. T. Grandy and P. W. Milonni, editors, *Physics and Probability*, pages 207-222. Cambridge University Press, 1993.

[9] M. Girolami. Bayesian Inference for Differential Equations. *Theoretical Computer Science*, 408(1):4-16, 2008.

[10] A. Iserles. *A First Course in the Numerical Analysis of Differential Equations*. Cambridge University Press, 2nd edition, 2008.

[11] E. Hairer, S. Nørsett, and G. Wanner. *Solving Ordinary Differential Equations I: Nonstiff Problems*. Springer Series in Computational Mathematics. Springer, 2008.

[12] J. Butcher. *Numerical Methods for Ordinary Differential Equations*. Wiley, 2nd edition, 2008.

[13] C. Rasmussen and C. Williams. *Gaussian Processes for Machine Learning*. University Press Group Limited, 2006.

[14] B. Fornberg. Generation of Finite Difference Formulas on Arbitrarily Spaced Grids. *Mathematics of Computation*, 51(184):699-706, 1988.

[15] E. Buckwar and R. Winkler. Multistep Methods for SDEs and Their Application to Problems with Small Noise. *SIAM Journal on Numerical Analysis*, 44(2):779-803, 2006.

[16] L. O. Chua. The Genesis of Chua's Circuit. *Archiv für Elektronik und Übertragungstechnik*, 46(4):250-257, 1992.

[17] L. O. Chua. Chua Circuit. *Scholarpedia*, 2(10):1488, 2007.

[18] E. Bilotta and P. Pantano. *A Gallery of Chua Attractors*. World Scientific, 2008.