NDSolve Method Plugin Framework

Introduction

The control mechanisms set up for NDSolve enable you to define your own numerical integration algorithms and use them as specifications for the Method option of NDSolve.

NDSolve accesses its numerical algorithms and the information it needs from them in an object-oriented manner. At each step of a numerical integration, NDSolve keeps the method in a form that allows it to maintain private data as needed.

AlgorithmIdentifier[data]    an algorithm object that contains any data that a particular numerical ODE integration algorithm may need to use; the data is effectively private to the algorithm; AlgorithmIdentifier should be a Wolfram System symbol, and the algorithm is accessed from NDSolve by using the option Method->AlgorithmIdentifier

The structure for method data used in NDSolve.

NDSolve does not access the data associated with an algorithm directly, so you can keep the information needed in any form that is convenient or efficient to use. The algorithm and information that might be saved in its private data are accessed only through method functions of the algorithm object. The key interaction between a method algorithm object and NDSolve is through a "Step" function.

obj["Step"[inputs]]attempt to take a single time step using the numerical algorithm

Required "Step" function used by NDSolve.

The following is a list of important properties of the method and its "Step" function that should be defined.

obj["StepInputs"]the list of input arguments that the step function should be given
obj["StepOutput"]the list of outputs that the step function will return
obj["StepMode"]set to the step mode
obj["DifferenceOrder"]set to the current asymptotic difference order of the algorithm

Properties that should be defined for using a method algorithm object obj in NDSolve.

The table below gives specifications that can be used for step inputs and/or outputs.

short name    full name    component
"F""Function"the function given the right-hand side of the ODE system or the residual for a DAE system
"T""Time"current time
"H""TimeStep"time step
"X""DependentVariables"current values of dependent variables other than discrete
"XI""DependentVariablesIncrement" such that gives the values of the dependent variables after the step; if is given as $Failed, then the step is rejected and will be recomputed (the time step returned should be smaller)
"XP""TemporalDerivatives"the time derivatives of the dependent variables
"D""DiscreteVariables"the discrete variables that take on a continuous range of values
"ID""IndexedDiscreteVariables"the discrete variables that take on a finite set of values
"P""Parameters"the parameters other than ones for which sensitivity is being computed
"SP""SensitivityParameters"the parameters for which sensitivity is being computed
"SD""SolutionData"the solution data list
"Obj""MethodObject"a new setting for obj that will be used for the next step
"State""StateData"the NDSolve`StateData object

Step input and output specifications.

The solution data is the list of values with components relating to different variable types. The solution data in the active direction can be obtained from an NDSolve`StateData object state using state@"SolutionData"["Active"]. Many of the specifications listed above correspond directly to solution data components.

For input, the specification for the function should be given with the arguments with which it will be called. These arguments should be solution data components. All can be used to indicate that all arguments used by NDSolve should be expected.

For output, if the method only returns some of the values in some cases (e.g. the method object does not change every step) then you can use Null for any output except the increment to indicate that it should not be used.

Classical Runge-Kutta

Here is an example of how to set up and access a simple numerical algorithm.

This defines a method function to take a single step toward integrating an ODE using the classical fourth-order Runge-Kutta method. Since the method is so simple, it is not necessary to save any private data:
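A minimal sketch of such a step function, using CRK4 as the algorithm identifier; the return value follows the "H" and "XI" output templates described above:

    CRK4[]["Step"[f_, t_, h_, x_]] := Module[{k0, k1, k2, k3},
        k0 = f[t, x];
        k1 = f[t + h/2, x + h k0/2];
        k2 = f[t + h/2, x + h k1/2];
        k3 = f[t + h, x + h k2];
        (* return the time step taken and the solution increment *)
        {h, h (k0 + 2 k1 + 2 k2 + k3)/6}];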
This specifies for NDSolve the inputs and outputs for the step function. The ___ template is used to make this work for all instances of CRK4 method objects:
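A sketch of these specifications using the short names; "F"["T", "X"] indicates that the function will be called with the current time and the vector of dependent variables:

    CRK4[___]["StepInput"] = {"F"["T", "X"], "T", "H", "X"};
    CRK4[___]["StepOutput"] = {"H", "XI"};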
This is an equivalent specification using the full names:
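The same specifications written with the full names:

    CRK4[___]["StepInput"] =
        {"Function"["Time", "DependentVariables"], "Time", "TimeStep", "DependentVariables"};
    CRK4[___]["StepOutput"] = {"TimeStep", "DependentVariablesIncrement"};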
This defines a method function so that NDSolve can obtain the proper difference order to use for the method:
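The classical Runge-Kutta method has asymptotic difference order 4:

    CRK4[___]["DifferenceOrder"] = 4;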
This defines a method function for the step mode so that NDSolve will know how to control time steps. This method does not have any step control, so you define the step mode to be Fixed:
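    CRK4[___]["StepMode"] = Fixed;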
This integrates the simple harmonic oscillator equation with fixed step size:
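A sketch of the integration; with a fixed step mode, StartingStepSize also sets the constant step size:

    fsol = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
        x, {t, 0, 2 Pi}, Method -> CRK4, StartingStepSize -> 1/10]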

Generally, using a fixed step size is less efficient than allowing the step size to vary with the local difficulty of the integration. Modern explicit Runge-Kutta methods (accessed in NDSolve with Method->"ExplicitRungeKutta") have a so-called embedded error estimator that makes it possible to determine appropriate step sizes very efficiently. An alternative is to use built-in step controller methods that use extrapolation. The method "DoubleStep" uses an extrapolation based on integrating a time step with a single step of size h and two steps of size h/2. The method "Extrapolation" does a more sophisticated extrapolation and modifies the degree of extrapolation automatically as the integration is performed, but it is generally used with base methods of difference orders 1 and 2.

This integrates the simple harmonic oscillator using the classical fourth-order Runge-Kutta method with steps controlled by using the "DoubleStep" method:
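A sketch of the call; "DoubleStep" takes the base method as a suboption:

    dsol = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
        x, {t, 0, 2 Pi}, Method -> {"DoubleStep", Method -> CRK4},
        StartingStepSize -> 1/10]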
This makes a plot comparing the error in the computed solutions at the step ends. The error for the "DoubleStep" method is shown in blue:
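A sketch of the comparison, assuming fsol and dsol from the runs above; the "Grid" property of the resulting InterpolatingFunction gives the step-end times, and Cos[t] is the exact solution:

    errpts[sol_] := With[{xf = x /. First[sol]},
        Map[{#, xf[#] - Cos[#]} &, Flatten[xf["Grid"]]]];
    ListPlot[{errpts[dsol], errpts[fsol]},
        PlotStyle -> {Blue, Black}, AxesLabel -> {"t", "error"}]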

The fixed step size ended up with smaller overall error mostly because the steps are so much smaller; it required more than three times as many steps. For a problem where the local solution structure changes more significantly, the difference can be even greater.

A facility for stiffness detection is described within "DoubleStep Method for NDSolve".

For more sophisticated methods, it may be necessary or more efficient to set up some data for the method to use. When NDSolve uses a particular numerical algorithm for the first time, it calls an initialization function. You can define rules for the initialization that will set up appropriate data for your method.

NDSolve`InitializeMethod[AlgorithmIdentifier, stepmode, sd, f, state, algopts]
the expression that NDSolve evaluates for initialization when it first uses an algorithm for a particular integration, where stepmode is either Automatic or Fixed, depending on whether your method is expected to be called within the framework of a step controller or another method; sd is the current solution data; f is the numerical function for evaluating the system; state is the NDSolve`StateData object used by NDSolve; and algopts is a list containing any options given specifically with the specification to use the particular algorithm, for example, {opts} in Method->{AlgorithmIdentifier, opts}
AlgorithmIdentifier/:NDSolve`InitializeMethod[AlgorithmIdentifier, stepmode_, sd_, f_NumericalFunction, state_NDSolve`StateData, {opts___?OptionQ}] := initialization
definition of the initialization so that the rule is associated with the algorithm; initialization should return an algorithm object in the form AlgorithmIdentifier[data]

Initializing a method from NDSolve.

NDSolve`InitializeMethod is protected, so to attach rules to it, you would need to unprotect it first. It is better to keep the rules associated with your method. A tidy way to do this is to make the initialization definition using TagSet as shown earlier.

As an example, suppose you want to redefine the Runge-Kutta method shown earlier so that instead of using the exact coefficients 2, 1/2, and 1/6, numerical values with the appropriate precision are used, making the computation slightly faster.

This defines a method function to take a single step toward integrating an ODE using the classical fourth-order Runge-Kutta method with saved numerical values for the required coefficients:
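A sketch of the modified step function; the numerical coefficient values are taken from the data stored in the method object:

    CRK4[{two_, half_, sixth_}]["Step"[f_, t_, h_, x_]] := Module[{k0, k1, k2, k3},
        k0 = f[t, x];
        k1 = f[t + half h, x + half h k0];
        k2 = f[t + half h, x + half h k1];
        k3 = f[t + h, x + h k2];
        {h, sixth h (k0 + two (k1 + k2) + k3)}];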
This defines a rule that initializes the algorithm object with the data to be used later:
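A sketch of the rule, assuming the working precision can be read from the NDSolve`StateData object through its "WorkingPrecision" property:

    CRK4 /: NDSolve`InitializeMethod[CRK4, stepmode_, sd_, f_, state_, {opts___?OptionQ}] :=
        CRK4[N[{2, 1/2, 1/6}, state@"WorkingPrecision"]];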

Saving the numerical values of the coefficients gives between a 5 and 10 percent speedup for a longer integration using "DoubleStep".

Crank-Nicolson

This is an example of how to set up an implicit method. To solve the system of ODEs x'(t) = f(t, x(t)), the Crank-Nicolson scheme for a time step of size h is

    x_(n+1) = x_n + (h/2) (f(t_n, x_n) + f(t_(n+1), x_(n+1))),

where t_(n+1) = t_n + h and x_n approximates x(t_n). In general, for nonlinear f, the equations need to be solved with Newton iteration.

Instead of using a true Newton iteration where the Jacobian is updated at each step, the Jacobian will be computed once and its LU decomposition will be used repeatedly to solve the linear equations. As long as time steps are not too large, this will generally converge as well as the full Newton method and each iteration beyond the first is substantially cheaper.

To avoid possible problems with the Newton iteration not converging, the method will be set up to take method options that control the maximum number of iterations and the convergence tolerance.

Define options and method initialization for the CrankNicolson method:
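A sketch of the definitions; the option names "MaxIterations" and "IterationTolerance" and the default tolerance are choices made for this example:

    Options[CrankNicolson] = {"MaxIterations" -> 1, "IterationTolerance" -> Automatic};

    CrankNicolson /: NDSolve`InitializeMethod[CrankNicolson,
            stepmode_, sd_, f_, state_, {opts___?OptionQ}] :=
        Module[{maxit, tol},
            {maxit, tol} = {"MaxIterations", "IterationTolerance"} /.
                Flatten[{opts, Options[CrankNicolson]}];
            If[tol === Automatic, tol = Sqrt[$MachineEpsilon]];
            CrankNicolson[{maxit, tol}]];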
Define the step function:
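A sketch of the step function. To keep the example self-contained, the Jacobian is approximated by forward differences of f; it is formed and factored once per step, and the factorization is reused in every iteration:

    CrankNicolson[{maxit_, tol_}]["Step"[f_, t_, h_, x_]] :=
        Module[{n = Length[x], fn, jac, lu, xnew, res, norm, it = 0, eps},
            fn = f[t, x];
            eps = Sqrt[$MachineEpsilon];
            (* forward difference approximation to the Jacobian of f at (t, x) *)
            jac = Transpose[Table[
                With[{d = eps Max[1, Abs[x[[i]]]]},
                    (f[t, x + d UnitVector[n, i]] - fn)/d], {i, n}]];
            (* form and factor I - (h/2) J once; reuse the factorization below *)
            lu = LinearSolve[IdentityMatrix[n] - (h/2) jac];
            xnew = x + h fn; (* explicit Euler predictor *)
            res = x + (h/2) (fn + f[t + h, xnew]) - xnew;
            norm = Norm[res, Infinity];
            (* simplified Newton iteration with the Jacobian frozen at (t, x) *)
            While[norm > tol && it++ < maxit,
                xnew += lu[res];
                res = x + (h/2) (fn + f[t + h, xnew]) - xnew;
                norm = Norm[res, Infinity]];
            (* reject the step, suggesting a smaller one, if not converged *)
            If[norm > tol, {h/2, $Failed}, {h, xnew - x}]];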
This specifies for NDSolve the inputs, outputs, difference order, and step mode for the CrankNicolson method:
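A sketch of the specifications; the method is second order and, as set up here, uses fixed steps:

    CrankNicolson[___]["StepInput"] = {"F"["T", "X"], "T", "H", "X"};
    CrankNicolson[___]["StepOutput"] = {"H", "XI"};
    CrankNicolson[___]["DifferenceOrder"] = 2;
    CrankNicolson[___]["StepMode"] = Fixed;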
Here is how the method can be called with option settings:
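For example (the nonlinear equation is purely illustrative):

    NDSolve[{x'[t] == x[t] (1 - x[t]), x[0] == 1/2}, x, {t, 0, 4},
        Method -> {CrankNicolson, "MaxIterations" -> 5,
            "IterationTolerance" -> 10^-10},
        StartingStepSize -> 1/20]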

For a linear problem, the solution is given by the first iteration. However, for a nonlinear problem, convergence may require more than one iteration.

This fails because the nonlinear problem cannot be expected to converge in 1 iteration with such large time steps:
With more iterations the solution is successful:
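A sketch of the two calls, assuming the definitions above; the first, with the default single iteration, is expected to fail to converge, while the second succeeds:

    NDSolve[{x'[t] == Cos[t] - x[t]^3, x[0] == 0}, x, {t, 0, 10},
        Method -> CrankNicolson, StartingStepSize -> 1/4]

    NDSolve[{x'[t] == Cos[t] - x[t]^3, x[0] == 0}, x, {t, 0, 10},
        Method -> {CrankNicolson, "MaxIterations" -> 10},
        StartingStepSize -> 1/4]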

Since the Crank-Nicolson method is usually considered in the context of solving PDEs, here is an example of the method solving the wave equation. Since this is a linear equation, convergence occurs in one iteration, so the method is quite fast.

This uses the CrankNicolson method to solve the wave equation with periodic boundary conditions:
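A sketch of the computation; CrankNicolson serves as the time integrator for "MethodOfLines", and the spatial grid settings are illustrative (the dense finite-difference Jacobian in the sketch above is formed for clarity, not efficiency):

    wsol = NDSolve[{D[u[t, x], t, t] == D[u[t, x], x, x],
            u[0, x] == Exp[-x^2], Derivative[1, 0][u][0, x] == 0,
            u[t, -8] == u[t, 8]},
        u, {t, 0, 16}, {x, -8, 8},
        Method -> {"MethodOfLines", Method -> CrankNicolson,
            "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 101}},
        StartingStepSize -> 1/10]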
A density plot of the solution:
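For example:

    DensityPlot[Evaluate[First[u[t, x] /. wsol]], {t, 0, 16}, {x, -8, 8},
        PlotPoints -> 100]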

Invoking Another Method

One of the strengths of the NDSolve framework comes from the capability to nest methods in various ways.

NDSolve`InitializeSubmethod[caller, submspec, stepmode, sd, f, state]    initializes the submethod specified by Method->submspec in the initialization of the method caller; stepmode, sd, f, and state are as in NDSolve`InitializeMethod
NDSolve`InvokeMethod[submobj, f, h, sd, state]    invoke the submethod with method object submobj with function f, time step h, solution data sd, and NDSolve`StateData object state, returning a list {hnew, sdnew, submobjnew}, where hnew is the new time step, sdnew is the solution data at the step end with unchanged components set to Null, and submobjnew is the new submethod object

Using submethods.

NDSolve`InitializeSubmethod effectively calls NDSolve`InitializeMethod after separating out the method name from any suboptions that are given. If successful, it returns the method object that you can later use in NDSolve`InvokeMethod.

NDSolve`InvokeMethod effectively constructs a "Step" function for the submethod. Since the arguments for the function are specified by the "StepInput" specification for the submethod, the function f given to NDSolve`InvokeMethod should have all the arguments used by NDSolve and can be specified in the caller "StepInput" specification by "Function"[All]. Note that for built-in methods, there may not be an explicit "Step" function and NDSolve`InvokeMethod handles these using efficient internal code. The result returned by NDSolve`InvokeMethod is always constructed from the submethod's actual output to match a "StepOutput" of {"TimeStep","SolutionData","MethodObject"}.

Here is an example that calls a submethod to provide simple monitoring of the steps and difference order.

Define the method options and the initialization for the method:
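A sketch of the definitions, using MonitorMethod as a hypothetical identifier; the Method option specifies the submethod, and "MonitorFunction" is applied at each step:

    Options[MonitorMethod] = {Method -> "StiffnessSwitching", "MonitorFunction" -> Print};

    MonitorMethod /: NDSolve`InitializeMethod[MonitorMethod,
            stepmode_, sd_, f_, state_, {opts___?OptionQ}] :=
        Module[{submspec, monitor, submobj},
            {submspec, monitor} = {Method, "MonitorFunction"} /.
                Flatten[{opts, Options[MonitorMethod]}];
            submobj = NDSolve`InitializeSubmethod[MonitorMethod,
                submspec, stepmode, sd, f, state];
            MonitorMethod[{submobj, monitor}]];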
The step function for the method simply invokes the submethod and applies the monitor function to the result and the difference order:
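A sketch of the step function; the monitor receives the step size taken, the new solution data, and the submethod's difference order:

    MonitorMethod[{submobj_, monitor_}]["Step"[f_, h_, sd_, state_]] :=
        Module[{hnew, sdnew, subnew},
            {hnew, sdnew, subnew} = NDSolve`InvokeMethod[submobj, f, h, sd, state];
            monitor[hnew, sdnew, subnew["DifferenceOrder"]];
            {hnew, sdnew, MonitorMethod[{subnew, monitor}]}];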
Set the inputs and outputs:
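Following the conventions described above for NDSolve`InvokeMethod:

    MonitorMethod[___]["StepInput"] =
        {"Function"[All], "TimeStep", "SolutionData", "StateData"};
    MonitorMethod[___]["StepOutput"] = {"TimeStep", "SolutionData", "MethodObject"};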
Derive all other properties from the submethod:
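A catch-all rule delegates any other property, such as "DifferenceOrder" or "StepMode", to the submethod object (entered after the specific rules above so that those take precedence):

    MonitorMethod[{submobj_, _}][prop_] := submobj[prop];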
Use the method:
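For example, printing the step size and difference order at each step:

    NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0}, x, {t, 0, 1},
        Method -> {MonitorMethod, Method -> "ExplicitRungeKutta",
            "MonitorFunction" -> Function[{h, sd, ord}, Print[{h, ord}]]}]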

For a more complicated integration, the method can be used with Reap to collect data.

Collect the logarithm of the step size and difference order as a function of time for integrating the Van der Pol oscillator and plot the solution:
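A sketch of the computation; extracting the step-end time from the solution data is assumed to work through NDSolve`SolutionDataComponent:

    {vsol, data} = Reap[
        NDSolve[{x''[t] - 1000 (1 - x[t]^2) x'[t] + x[t] == 0,
                x[0] == 2, x'[0] == 0}, x, {t, 0, 3000},
            Method -> {MonitorMethod, "MonitorFunction" ->
                Function[{h, sd, ord},
                    Sow[{NDSolve`SolutionDataComponent[sd, "T"], Log10[h], ord}]]}]];
    Plot[Evaluate[x[t] /. First[vsol]], {t, 0, 3000}]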
Show the logarithm of the step size and the difference order for steps near the sudden change:
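For example (the plot window is illustrative; place it around one of the fast transitions):

    steps = First[data];
    ListLinePlot[{steps[[All, {1, 2}]], steps[[All, {1, 3}]]},
        PlotRange -> {{800, 820}, All}, PlotLegends -> {"Log10 h", "order"}]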

Adams Methods

In terms of the NDSolve framework, it is not really any more difficult to write an algorithm that controls steps automatically. However, the requirements for estimating error and determining an appropriate step size usually make this much more difficult from both the mathematical and programming standpoints. The following example is a partial adaptation of the Fortran DEABM code of Shampine and Watts to fit into the NDSolve framework. The algorithm adaptively chooses both step size and order based on criteria described in [SG75].

The first stage is to define the coefficients. The integration method uses variable step-size coefficients. Given a sequence of step sizes {h_(n-k+1), …, h_n, h_(n+1)}, where h_(n+1) is the current step to take, the coefficients g_j for the method with Adams-Bashforth predictor of order k and Adams-Moulton corrector of order k+1 are defined such that

    p_(n+1) = x_n + h_(n+1) Σ_(j=0)^(k-1) g_j Φ*_j(n),
    x_(n+1) = p_(n+1) + h_(n+1) g_k Φ_k(n+1),

where the Φ*_j(n) and Φ_k(n+1) are the divided differences.

This defines a function that computes the coefficients g_j and the updated divided differences, along with the auxiliary quantities used in error estimation. The formulas are from [HNW93] and use essentially the same notation:

hlist is the list of step sizes from past steps. The constant step size Adams coefficients can be computed once, and much more easily. Since the constant step size Adams-Moulton coefficients are used in error prediction for changing the method order, it makes sense to define them once with rules that save the values.

This defines a function that computes and saves the values of the constant step size AdamsMoulton coefficients:
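A minimal sketch using memoizing definitions; these give the standard constant step size Adams-Moulton coefficients 1, -1/2, -1/12, -1/24, …:

    amCoefficient[0] = 1;
    amCoefficient[m_Integer?Positive] := amCoefficient[m] =
        -Sum[amCoefficient[k]/(m + 1 - k), {k, 0, m - 1}];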

The next stage is to set up a data structure that will keep the necessary information between steps and to define how that data should be initialized. The key information that needs to be saved is the list of past step sizes, hlist, and the divided differences, Φ. Since the method does the error estimation, it needs to get the correct norm to use from the NDSolve`StateData object. Some other data, such as the working precision, is saved for optimization and convenience.

This defines a rule for initializing the AdamsBM method from NDSolve:
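A sketch of the rule, assuming the "T" and "X" solution data components can be read with NDSolve`SolutionDataComponent and the precision from the state object:

    AdamsBM /: NDSolve`InitializeMethod[AdamsBM,
            stepmode_, sd_, f_, state_, {opts___?OptionQ}] :=
        Module[{maxorder, t0, x0, phi},
            maxorder = "MaxDifferenceOrder" /. Flatten[{opts, Options[AdamsBM]}];
            t0 = NDSolve`SolutionDataComponent[sd, "T"];
            x0 = NDSolve`SolutionDataComponent[sd, "X"];
            (* the 0th divided difference is the function value; the extra
               zero row is effectively an additional divided difference
               used for order selection *)
            phi = {f[t0, x0], 0 x0};
            AdamsBM[{{}, phi, maxorder, state@"WorkingPrecision"}]];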

hlist is initialized to {} since at initialization time there have been no steps. Φ is initialized to the derivative of the solution vector at the initial condition, since the 0th divided difference is just the function value. Note that Φ is a matrix. The third element, which is initialized to a zero vector, is used for determining the best order for the next step; it is effectively an additional divided difference. The use of the other parts of the data is clarified in the definition of the stepping function.

The initialization is also set up to get the value of an option that can be used to limit the maximum order of the method to use. In the code DEABM, this is limited to 12, because this is a practical limit for machine-precision calculations. However, in the Wolfram System, computations can be done in higher precision, where higher-order methods may be of significant advantage, so there is no good reason for an absolute limit of this sort. Thus, you set the default of the option to be Infinity.

This sets the default for the MaxDifferenceOrder option of the AdamsBM method:
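Assuming the option is given as a string option on the method symbol:

    Options[AdamsBM] = {"MaxDifferenceOrder" -> Infinity};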

Before proceeding to the more complicated "Step" method functions, it makes sense to define the simple "StepMode" and "DifferenceOrder" method functions.

This defines the step mode for the AdamsBM method to always be Automatic. This means that it cannot be called from step controller methods that request fixed step sizes of possibly varying sizes:
This defines the difference order for the AdamsBM method. This varies with the number of past values saved:
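Minimal sketches of both properties, assuming the first element of the method data is hlist, so that the difference order is one more than the number of saved steps:

    AdamsBM[___]["StepMode"] = Automatic;
    AdamsBM[{hlist_, ___}]["DifferenceOrder"] := Length[hlist] + 1;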

Finally, here is the definition of the "Step" function. The actual process of taking a step is only a few lines. The rest of the code handles the automatic step size and order selection following very closely the DEABM code of Shampine and Watts.

This defines the "Step" method function for AdamsBM that returns step data according to the templates described earlier:

There are a few deviations from DEABM in the code here. The most significant is that coefficients are recomputed at each step, whereas DEABM computes only those that need updating. This modification was made to keep the code simpler, but does incur a clear performance loss, particularly for small to moderately sized systems. A second significant modification is that much of the code for limiting rejected steps is left to NDSolve, so there are no checks in this code to see if the step size is too small or the tolerances are too large. The stiffness detection heuristic has also been left out. The order and step-size determination code has been modularized into separate functions.

This defines a function that constructs error estimates for orders k, k-1, and k-2 and determines whether the order should be lowered:
This defines a function that determines the best order to use after a successful step:
This defines a function that determines the best step size to use after a successful step of size h:

Once these definitions are entered, you can access the method in NDSolve by simply using Method->AdamsBM.

This solves the harmonic oscillator equation with the Adams method defined earlier:
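With the complete set of method functions in place (including the "Step" function outlined above), the method is invoked like any other:

    asol = NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
        x, {t, 0, 2 Pi}, Method -> AdamsBM]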
This shows the error of the computed solution. It is apparent that the error is kept within reasonable bounds. Note that after the first few points, the step size has been increased:
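A sketch of the error plot, using the "Grid" property for the step-end times as before:

    With[{xf = x /. First[asol]},
        ListPlot[Map[{#, xf[#] - Cos[#]} &, Flatten[xf["Grid"]]],
            AxesLabel -> {"t", "error"}]]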

Where this method has the potential to outperform some of the built-in methods is with high-precision computations with strict tolerances. This is because the built-in methods are adapted from codes that restrict the order to at most 12.
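An illustrative high-precision run might look like the following; the precision and goal settings are assumptions, and the built-in "Adams" method is the natural baseline for comparison:

    AbsoluteTiming[NDSolve[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 0},
        x, {t, 0, 2 Pi}, Method -> AdamsBM, WorkingPrecision -> 32,
        AccuracyGoal -> 25, PrecisionGoal -> 25];]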

A lot of time is required for coefficient computation.

The method is not using as high an order as might be expected.

In any case, about half the time is spent generating coefficients, so to make the method better, you need to work out an incremental coefficient update that recomputes only the coefficients that change, as DEABM does.