Numerical solution of ordinary differential equations: Error estimation and step size control#
As always, we start by importing some important Python modules.
import numpy as np
from numpy import pi
from numpy.linalg import solve, norm
import matplotlib.pyplot as plt
# Do a pretty print of the tables using pandas
import pandas as pd
from IPython.display import display
# Use a funny plotting style
# plt.xkcd()
newparams = {'figure.figsize': (6.0, 6.0), 'axes.grid': True,
             'lines.markersize': 8, 'lines.linewidth': 2,
             'font.size': 14}
plt.rcParams.update(newparams)
The goal of this section is to develop Runge-Kutta methods with automatic adaptive time-step selection.
Adaptive time-step selection aims to dynamically adjust the step size during the numerical integration process to balance accuracy and computational efficiency. By increasing the step size when the solution varies slowly and decreasing it when the solution changes rapidly, adaptive methods ensure that the local error remains within a specified tolerance. This approach not only enhances the precision of the solution but also optimizes the computational resources, making it particularly valuable for solving complex and stiff ODEs where fixed step sizes may either fail to capture important dynamics or result in unnecessary computations.
In this notebook, we will again focus on explicit Runge-Kutta methods.
TODO
Add the solution of the three-body problem as a motivational example. This will be done after the submission of project 2, as project 2 requires implementing an adaptive RK method from scratch.
Error estimation#
Given two methods, one of order \(p\) and the other of order \(p+1\) or higher, assume that we have reached a point \((t_n,\mathbf{y}_n)\). One step forward with each of these methods can be written as

$$
\begin{aligned}
\mathbf{y}_{n+1} &= \mathbf{y}_n + \tau \boldsymbol{\Phi}(t_n, \mathbf{y}_n; \tau), && \text{order } p, \\
\widehat{\mathbf{y}}_{n+1} &= \mathbf{y}_n + \tau \widehat{\boldsymbol{\Phi}}(t_n, \mathbf{y}_n; \tau), && \text{order } p+1 \text{ or higher},
\end{aligned}
$$

where \(\boldsymbol{\Phi}\) and \(\widehat{\boldsymbol{\Phi}}\) denote the increment functions of the two methods.
Let \(\mathbf{y}(t_{n+1};t_n,\mathbf{y}_n)\) be the exact solution of the ODE through \((t_n,\mathbf{y}_n)\). We would like to find an estimate for the consistency error, or local error, \(\mathbf{l}_{n+1}\), that is, the error in one step starting from \((t_n, \mathbf{y}_n)\):

$$
\mathbf{l}_{n+1} = \mathbf{y}(t_{n+1};t_n,\mathbf{y}_n) - \mathbf{y}_{n+1}.
$$
As we have already seen, the local error is determined by comparing the power series in \(\tau\) of the exact and the numerical solution. The method is of order \(p\) if the lowest-order terms in which the two series differ are of order \(p+1\). Therefore, the local errors of the two methods are

$$
\begin{aligned}
\mathbf{y}(t_{n+1};t_n,\mathbf{y}_n) - \mathbf{y}_{n+1} &= \boldsymbol{\Psi}(t_n,\mathbf{y}_n)\tau^{p+1} + \dotsc, \\
\mathbf{y}(t_{n+1};t_n,\mathbf{y}_n) - \widehat{\mathbf{y}}_{n+1} &= \mathcal{O}(\tau^{p+2}),
\end{aligned}
$$
where \(\boldsymbol{\Psi}(t_n,\mathbf{y}_n)\) is a term consisting of method parameters and differentials of \(\mathbf{f}\), and \(\dotsc\) contains all the terms of the series of order \(p+2\) or higher. Taking the difference gives

$$
\widehat{\mathbf{y}}_{n+1} - \mathbf{y}_{n+1} = \boldsymbol{\Psi}(t_n,\mathbf{y}_n)\tau^{p+1} + \dotsc.
$$
Assume now that \(\tau\) is small, such that the principal error term \(\boldsymbol{\Psi}(t_n,\mathbf{y}_n)\tau^{p+1}\) dominates the error series. Then a reasonable approximation to the unknown local error \(\mathbf{l}_{n+1}\) is the local error estimate \(\mathbf{le}_{n+1}\):

$$
\mathbf{le}_{n+1} = \widehat{\mathbf{y}}_{n+1} - \mathbf{y}_{n+1} \approx \mathbf{l}_{n+1}.
$$
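To make this concrete, here is a minimal sketch (our own illustration, not code from this notebook) that performs one Euler step and one Heun step on the test problem \(y' = -2ty\), \(y(0) = 1\), and computes the resulting local error estimate \(\mathbf{le}_1 = \widehat{\mathbf{y}}_1 - \mathbf{y}_1\).

import numpy as np

def f(t, y):
    # Right-hand side of the test problem y' = -2*t*y (chosen only for illustration)
    return -2 * t * y

t0, y0, tau = 0.0, np.array([1.0]), 0.1

# One step with Euler (order 1)
k1 = f(t0, y0)
y_euler = y0 + tau * k1

# One step with Heun (order 2), reusing k1
k2 = f(t0 + tau, y0 + tau * k1)
y_heun = y0 + 0.5 * tau * (k1 + k2)

# Local error estimate: the difference of the two numerical solutions
le = np.linalg.norm(y_heun - y_euler)
print(f"Error estimate le = {le:.3e}")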
Stepsize control#
The next step is to control the local error, that is, choose the step size so that \(\|\mathbf{le}_{n+1}\| \leq \text{Tol}\) for some given tolerance Tol, and for some chosen norm \(\|\cdot\|\).
Essentially: Given \(t_n\), \(\mathbf{y}_n\) and a step size \(\tau_n\):

- Do one step with the method of choice, and find an error estimate \(\mathbf{le}_{n+1}\).
- if \(\|\mathbf{le}_{n+1}\| < \text{Tol}\):
  - Accept the solution \(t_{n+1}, \mathbf{y}_{n+1}\).
  - If possible, increase the step size for the next step.
- else:
  - Repeat the step from \((t_n,\mathbf{y}_n)\) with a reduced step size \(\tau_{n}\).
In both cases, the step size will change. But how? From the discussion above, we have that

$$
\|\mathbf{le}_{n+1}\| \approx D \tau_{n}^{p+1},
$$

where \(\mathbf{le}_{n+1}\) is the error estimate we can compute, and \(D\) is some unknown quantity which we assume to be almost constant from one step to the next.
What we want is a step size \(\tau_{new}\) such that

$$
D \tau_{new}^{p+1} \approx \text{Tol}.
$$
From these two approximations we get

$$
\frac{\text{Tol}}{\|\mathbf{le}_{n+1}\|} \approx \left(\frac{\tau_{new}}{\tau_n}\right)^{p+1}
\qquad \Rightarrow \qquad
\tau_{new} \approx \left( \frac{\text{Tol}}{\|\mathbf{le}_{n+1}\|} \right)^{\frac{1}{p+1}} \tau_{n}.
$$
That is, if the current step \(\tau_n\) was rejected, we try a new step \(\tau_{new}\) with this approximation. However, it is still possible that this new step will be rejected as well. To avoid too many rejected steps, it is therefore common to be a bit restrictive when choosing the new step size, so the following is used in practice:

$$
\tau_{new} = P \cdot \left( \frac{\text{Tol}}{\|\mathbf{le}_{n+1}\|} \right)^{\frac{1}{p+1}} \tau_{n},
$$

where the pessimist factor \(P<1\) is some constant, normally chosen between 0.5 and 0.95.
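As a small sketch (the function name `new_step_size` and its signature are our own choice, not from the original notebook), this update rule can be implemented as:

import numpy as np

def new_step_size(le_norm, tau, tol, p, pessimist=0.8):
    """Suggest the next step size from the error estimate.

    le_norm : norm of the local error estimate le_{n+1}
    tau     : current step size tau_n
    tol     : user-given tolerance Tol
    p       : order of the lower-order method
    """
    # Guard against division by zero when the error estimate vanishes
    le_norm = max(le_norm, 1e-14)
    return pessimist * (tol / le_norm)**(1 / (p + 1)) * tau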
Implementation#
We have all the bits and pieces for constructing an adaptive ODE solver based on Euler's and Heun's methods. There are still some practical aspects to consider:

- The combination of the two methods can be written as

  $$
  \begin{aligned}
  \mathbf{k}_1 &= \mathbf{f}(t_n, \mathbf{y}_n), \\
  \mathbf{k}_2 &= \mathbf{f}(t_n + \tau, \mathbf{y}_n + \tau \mathbf{k}_1), \\
  \mathbf{y}_{n+1} &= \mathbf{y}_n + \tau \mathbf{k}_1, && \text{(Euler)} \\
  \widehat{\mathbf{y}}_{n+1} &= \mathbf{y}_n + \frac{\tau}{2}\left(\mathbf{k}_1 + \mathbf{k}_2\right), && \text{(Heun)} \\
  \mathbf{le}_{n+1} &= \widehat{\mathbf{y}}_{n+1} - \mathbf{y}_{n+1} = \frac{\tau}{2}\left(\mathbf{k}_2 - \mathbf{k}_1\right).
  \end{aligned}
  $$

- Even if the error estimate is derived for the lower-order method, in this case Euler's method, it is common to advance the solution with the higher-order method, since the additional accuracy comes for free.
- Adjust the last step so that the solution terminates exactly at \(T\).
- To avoid infinite loops, add some stopping criterion. In the code below, there is a maximum number of allowed steps (rejected or accepted).
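The following is a minimal sketch of such an adaptive solver (our own illustration; the function name `euler_heun_adaptive` and its default parameters are not from the original notebook). It advances with the higher-order (Heun) solution, truncates the final step at \(T\), and aborts after a maximum number of steps.

import numpy as np

def euler_heun_adaptive(f, t0, y0, T, tau0, tol=1e-4, pessimist=0.8, max_steps=10_000):
    """Adaptive Euler (order 1) / Heun (order 2) solver for y' = f(t, y)."""
    ys = [np.atleast_1d(np.asarray(y0, dtype=float))]
    ts = [t0]
    t, y, tau = t0, ys[0], tau0
    p = 1  # order of the lower-order method (Euler)
    for _ in range(max_steps):
        if T - t < 1e-12:        # reached the end point
            break
        tau = min(tau, T - t)    # adjust the last step to terminate exactly at T
        # One step with both methods, reusing the stage derivative k1
        k1 = np.asarray(f(t, y))
        k2 = np.asarray(f(t + tau, y + tau * k1))
        y_low = y + tau * k1                 # Euler step
        y_high = y + 0.5 * tau * (k1 + k2)   # Heun step
        # Local error estimate and suggested new step size
        le = np.linalg.norm(y_high - y_low)
        tau_new = pessimist * (tol / max(le, 1e-14))**(1 / (p + 1)) * tau
        if le < tol:
            # Accept the step and advance with the higher-order solution
            t, y = t + tau, y_high
            ts.append(t)
            ys.append(y)
        # Whether accepted or rejected, continue with the suggested step size
        tau = tau_new
    return np.array(ts), np.array(ys)

# Try it on y' = -2*t*y, whose exact solution is y(t) = exp(-t**2)
ts, ys = euler_heun_adaptive(lambda t, y: -2 * t * y, 0.0, 1.0, 2.0, tau0=0.1)
print(f"{len(ts) - 1} accepted steps, error at T: {abs(ys[-1, 0] - np.exp(-4.0)):.2e}")

Note that the step size suggestion from the previous section is applied after both accepted and rejected steps; only accepted steps advance the solution.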
A popular class of Runge-Kutta methods with an error estimate consists of so-called embedded Runge-Kutta methods or Runge-Kutta pairs, whose coefficients can be written in a Butcher tableau as follows:

$$
\begin{array}{c|c}
\mathbf{c} & A \\
\hline
 & \mathbf{b}^T \\
\hline
 & \widehat{\mathbf{b}}^T
\end{array}
$$
A major advantage of such embedded RKMs is that we need to compute the \(s\) stage derivatives \(\mathbf{k}_i\) only once and can use them for both RKMs! Remember that stage derivatives can be expensive to compute.
The order difference between the two methods is solely determined by the use of different weights \(\{b_i\}_{i=1}^s\) and \(\{\widehat{b}_i\}_{i=1}^s\).
Since

$$
\begin{aligned}
\mathbf{y}_{n+1} &= \mathbf{y}_n + \tau_n\sum_{i=1}^s b_i \mathbf{k}_i, \\
\widehat{\mathbf{y}}_{n+1} &= \mathbf{y}_n + \tau_n\sum_{i=1}^s \widehat{b}_i \mathbf{k}_i,
\end{aligned}
$$

the error estimate is simply given by

$$
\mathbf{le}_{n+1} = \widehat{\mathbf{y}}_{n+1} - \mathbf{y}_{n+1} = \tau_n\sum_{i=1}^s \left(\widehat{b}_i - b_i\right)\mathbf{k}_i.
$$
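In code this means that, once the stage derivatives are available, the error estimate costs only one extra weighted sum and no further evaluations of \(\mathbf{f}\). A minimal sketch, with assumed array names `b`, `b_hat` and `ks` (our own, not from the text above):

import numpy as np

# Illustration with the Heun-Euler pair (s = 2 stages, scalar ODE);
# the stage derivatives below are dummy values, stacked as rows.
ks = np.array([[1.0],
               [0.8]])
b = np.array([1.0, 0.0])       # weights of the lower-order method (Euler)
b_hat = np.array([0.5, 0.5])   # weights of the higher-order method (Heun)
tau = 0.1

# le_{n+1} = tau * sum_i (bhat_i - b_i) * k_i
le = tau * (b_hat - b) @ ks
print(np.linalg.norm(le))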
Recalling Euler and Heun,

$$
\begin{array}{c|c}
0 & 0 \\
\hline
 & 1
\end{array}
\qquad \text{and} \qquad
\begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1 & 0 \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array},
$$
and the Heun-Euler pair can be written as

$$
\begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1 & 0 \\
\hline
 & 1 & 0 \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}.
$$
The classical 4-stage Runge-Kutta method from a previous notebook also deserves a particular mention; it can be written as

$$
\begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 & 0 & 0 \\
\tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
\hline
 & \tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{6}
\end{array}.
$$
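For later reference, the same tableau can be stored as NumPy arrays, a form that fits naturally into a generic Runge-Kutta implementation (the variable names are our own convention):

import numpy as np

# Butcher tableau of the classical 4-stage Runge-Kutta method
A = np.array([[0,   0,   0, 0],
              [1/2, 0,   0, 0],
              [0,   1/2, 0, 0],
              [0,   0,   1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 1/2, 1/2, 1])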
See this list of Runge-Kutta methods for more. For the last one, there also exists an embedded Runge-Kutta 4(3) variant due to Fehlberg:
Outlook. In your homework/project assignment, you will be asked to implement automatic adaptive time step selection based on embedded Runge-Kutta methods. You can either develop those from scratch or start from the ExplicitRungeKutta class we presented earlier and incorporate code for error estimation and time step selection.