Application of differential equations with a small parameter to solving nonlinear transcendental and algebraic equations. Multi-step methods (Adams methods): a detailed worked example

Adams was a 19th-century English astronomer and mathematician who worked extensively on celestial mechanics. When studying the trajectories of planets, he constantly had to integrate the equations of their motion numerically. Wanting to minimize the amount of computation, Adams developed one of the most economical methods for the numerical solution of differential equations, to which we now turn.

Let y(x) be a solution of the differential equation y' = f(x, y). The derivative of this function satisfies the equality

y'(x) = f(x, y(x)).

Integrating it between two grid points, we obtain the relation

y(xn+1) = y(xn) + ∫[xn, xn+1] f(x, y(x)) dx.

We cannot use this relation directly to pass, in the process of solving the problem, from the n-th grid point to the (n+1)-th, since the function f(x, y(x)) under the integral sign is not known to us. To take the next step, we must approximately replace this function with one that can be calculated. Let us describe how this problem is solved in the Adams method.

Let us assume that in the process of the numerical solution of the problem we have brought the calculation up to the point xn. As a result of the computations already carried out, we know the values y0, y1, ..., yn and f0, f1, ..., fn, where fi = f(xi, yi). Let us take some fixed integer k and construct the interpolation polynomial Pk(x) of degree k that takes at the points xn, xn-1, ..., xn-k the values

Pk(xn-i) = fn-i, i = 0, 1, ..., k.

It can be written using the Lagrange formula, in which the coefficients of the values fn-i are the special basis polynomials of Lagrange interpolation (each equal to one at its own node and to zero at the remaining nodes), which we have already discussed in the third chapter.

The main idea of the Adams method is to compute yn+1 from the integral relation above, approximately replacing the function f(x, y(x)) in it by the interpolation polynomial Pk(x) compiled from the results of the previous calculations. This leads to the recurrence formula

yn+1 = yn + ∫[xn, xn+1] Pk(x) dx,

n = k, k+1, ...

Let us consider this scheme for the numerical solution of the Cauchy problem in more detail in the simplest cases k = 0 and k = 1, when technical difficulties do not obscure the transparent idea of the method. For k = 0, a polynomial of degree zero, i.e. a constant, is used to approximate the function f(x, y(x)):

P0(x) = fn.

In this case the formula turns into the recurrence formula of the Euler method

yn+1 = yn + h·fn,

providing first-order accuracy. This result is in itself trivial. We have given it only to show that for the Adams method, as for the Runge-Kutta method, the starting point is the Euler scheme.
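A minimal sketch of the Euler scheme just described (the model problem y' = -y, y(0) = 1, and the step h = 0.1 are assumptions chosen purely for illustration):

```python
# Euler scheme y_{n+1} = y_n + h*f(x_n, y_n), first-order accurate.
# The model problem y' = -y, y(0) = 1 is an assumed example.
def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = euler(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
# ys[-1] is a first-order approximation to y(1) = exp(-1)
```

Halving h roughly halves the error at a fixed point of the segment, which is the practical signature of first-order accuracy.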

Let us move on to the case k = 1. Here a first-degree polynomial is used to approximate the function f(x, y(x)), constructed from the values of the function at the two points xn-1 and xn:

P1(x) = fn + (fn - fn-1)(x - xn)/h.

Substituting it into the formula and performing the integration, we get

yn+1 = yn + (h/2)(3fn - fn-1).

Let us note the following feature of this recurrence formula. To calculate the next value yn+1 of the grid function, one needs to know its values at the two previous points xn-1 and xn. Thus, the formula starts working only from the second point: y1 cannot be calculated from it. This value of the solution of the difference problem has to be calculated by some other method, for example, the Runge-Kutta method.
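A sketch of this second-order Adams scheme, with the starting value y1 obtained by one midpoint Runge-Kutta step (the model problem y' = -y and all numerical parameters are assumptions for illustration):

```python
import math

# Second-order Adams scheme y_{n+1} = y_n + h/2*(3*f_n - f_{n-1}),
# started with one second-order Runge-Kutta (midpoint) step.
# The model problem y' = -y, y(0) = 1 is an assumed example.
def adams2(f, x0, y0, h, steps):
    ys = [y0]
    # y1 from the midpoint Runge-Kutta step
    y1 = y0 + h * f(x0 + h / 2, y0 + h / 2 * f(x0, y0))
    ys.append(y1)
    fs = [f(x0, y0), f(x0 + h, y1)]
    for n in range(1, steps):
        x = x0 + n * h
        ys.append(ys[-1] + h / 2 * (3 * fs[-1] - fs[-2]))
        fs.append(f(x + h, ys[-1]))
    return ys

ys = adams2(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
# ys[-1] approximates y(1) = exp(-1) with second-order accuracy
```

Note that each Adams step costs one new evaluation of f, while the one-time Runge-Kutta starter costs two; this is exactly the economy discussed below.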

The recurrence formula can be written as the difference equation

(yn+1 - yn)/h = (3fn - fn-1)/2.

Let us calculate for it the error rn of approximation of the differential equation. Assume that the function f(x, y) has continuous second derivatives in the domain of variation of its arguments that interests us, so that the solution y(x) of the problem is three times continuously differentiable. Writing the Taylor expansions of y(xn+1) and of fn-1 = f(xn-1, y(xn-1)) about the point xn and substituting them into the difference equation, we find that the residual on the exact solution is a quantity of order h².

From here one can write the estimate

|rn| ≤ C·h²,

where C is a constant that majorizes the third derivative of the solution:

|y'''(x)| ≤ C.

We see that the difference equation of the Adams method corresponding to the case k = 1 approximates the differential equation with second-order accuracy in h. As with the Runge-Kutta method, this provides second-order accuracy for the error of the solution, provided that the value y1, which is calculated non-standardly, is calculated with second-order accuracy.

The process of constructing more accurate schemes can be continued by increasing k. For k = 2 a scheme of third-order accuracy is obtained, for k = 3 a fourth-order one, and so on. The fourth-order scheme, as in the Runge-Kutta method, is the most commonly used, so we will briefly dwell on its derivation and discussion.

If we write the interpolation polynomial of the third degree on a grid of four points xn, xn-1, xn-2, xn-3 and carry out the integration, the recurrence formula takes the form:

yn+1 = yn + (h/24)(55fn - 59fn-1 + 37fn-2 - 9fn-3).

Let us give another form of this formula, written through the so-called finite differences

∇fn = fn - fn-1, ∇²fn = fn - 2fn-1 + fn-2, ∇³fn = fn - 3fn-1 + 3fn-2 - fn-3:

yn+1 = yn + h(fn + (1/2)∇fn + (5/12)∇²fn + (3/8)∇³fn).

The first, second and third differences approximately correspond to the first, second and third derivatives of the function f(x, y(x)). The equivalence of the two formulas is easy to check directly. The difference form is sometimes more convenient for organizing the computational process and for controlling accuracy.
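The equivalence of the two forms can indeed be checked directly; here is a small symbolic sketch (the backward-difference weights 1, 1/2, 5/12, 3/8 and the difference definitions are the standard ones, stated here as an assumption):

```python
from fractions import Fraction as Fr

# Check that the backward-difference form of the fourth-order Adams
# formula, y_{n+1} = y_n + h*(f_n + 1/2*df + 5/12*d2f + 3/8*d3f),
# expands to h/24*(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}).
# Rows: coefficients of (f_n, f_{n-1}, f_{n-2}, f_{n-3}) in each difference.
diffs = [
    (1, 0, 0, 0),      # f_n
    (1, -1, 0, 0),     # first difference
    (1, -2, 1, 0),     # second difference
    (1, -3, 3, -1),    # third difference
]
weights = [Fr(1), Fr(1, 2), Fr(5, 12), Fr(3, 8)]
total = [sum(w * d[i] for w, d in zip(weights, diffs)) for i in range(4)]
assert total == [Fr(55, 24), Fr(-59, 24), Fr(37, 24), Fr(-9, 24)]
```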

The peculiarity of the Adams method shows up in this formula even more strongly than in the second-order one. Here, to calculate the next value yn+1, one needs to know the values at the four previous points xn, xn-1, xn-2, xn-3. Thus, the formula starts working only from the fourth point: y1, y2, y3 cannot be calculated from it. These values of the solution of the difference problem have to be calculated by another method, for example, the Runge-Kutta method.

Let us move on to discuss the accuracy of this scheme. If the function f(x, y) has continuous fourth derivatives with respect to its arguments in the region of their variation that interests us, so that the solution y(x) of the problem is five times continuously differentiable, then the difference equation approximates the differential equation with fourth-order accuracy in h. The proof of this statement proceeds just as for the second-order scheme, only now more terms need to be retained in the Taylor expansions. Fourth-order approximation of the equation provides fourth-order accuracy for the error of the solution, under the assumption that the starting values of the Adams method, y1, y2, y3, are calculated with the same accuracy. They are calculated independently, and it is important that the initial stage of the computational process not introduce an error that would distort all the subsequent results.

Task 5.

Construct the solution of the Cauchy problem on the given segment with the given step according to the Adams schemes of the second and fourth order. Compare the results of the calculations with each other, with the results of calculations using the Runge-Kutta scheme, and with the analytical solution of the problem.

The calculation results are shown in the fourth and fifth columns of Table 2. In accordance with the assignment, one needs to compare the fourth column with the second and sixth, and the fifth with the third and sixth. Recall that the sixth column shows the analytical solution (53) of the problem under consideration, so a comparison with it allows us to judge the accuracy of the approximate solutions obtained by the Runge-Kutta and Adams schemes.

The calculation by the Adams scheme of second-order accuracy begins from the second point, and by the fourth-order scheme from the fourth point. The starting values in the fourth and fifth columns were calculated by the Runge-Kutta scheme of the corresponding order, so in the table they coincide with the corresponding data in the second and third columns. Comparison of the results of the calculations performed by the two methods with the analytical solution of the problem shows that their accuracy is approximately the same.

Let us compare the fourth-order accuracy schemes of the Runge-Kutta and Adams methods from the point of view of organizing the computational process. To take one step of the Runge-Kutta method, one needs to evaluate the function f(x, y) four times, but in the Adams method only once: at the three previous points the function has already been evaluated in the preceding steps, and there is no need to calculate it again. This is the main advantage of the Adams method, which was especially highly valued in the pre-computer era.

We have already noted the main disadvantage of the Adams method: when applying it, the first steps have to be taken using another method, for example, using the Runge-Kutta method, and only after that can one proceed to the calculation using the Adams scheme. Thus, the program for solving the Cauchy problem using the Adams method must include as an element the program for the Runge-Kutta method for calculating the initial stage of the computational process.

There is another problem connected with this feature of the Adams method. When numerically integrating a differential equation, one often has to change the step h. In the Runge-Kutta method this presents no difficulty, since each step is made independently of the previous one. In the Adams method the situation is different. Here one must either program from the outset the rather complex calculation formulas with a variable step, or, after each step change, recompute the first three points by the Runge-Kutta method; only after this can one switch back to the standard computation by the Adams scheme. These shortcomings explain why, in computer calculations today, preference is often given to the more convenient Runge-Kutta method.

Adams method

Suppose that for the Cauchy problem three consecutive values of the desired function, y1, y2, y3, have been found by some method (for example, the Euler or Runge-Kutta method). Let us calculate the quantities fi = f(xi, yi), i = 0, 1, 2, 3.

The Adams method allows one to find the solution of the problem, the function y(x), in the form of a table of values. The continuation of the resulting table beyond the four points is carried out using the Adams extrapolation formula:

yn+1 = yn + (h/24)(55fn - 59fn-1 + 37fn-2 - 9fn-3).

Then a refinement is carried out using the Adams interpolation formula:

yn+1 = yn + (h/24)(9fn+1 + 19fn - 5fn-1 + fn-2).

The Adams method extends easily to systems of differential equations. The error of the Adams method is of the same order as that of the Runge-Kutta method.

Application of differential equations with a small parameter for solving nonlinear transcendental and algebraic equations

Let some continuously differentiable function f(x) be given. It is required to solve a nonlinear transcendental or algebraic equation of the form

f(x) = 0. (31)

Equations encountered in practice usually cannot be solved by direct methods, so iterative methods are used to solve them. All iterative methods for solving transcendental and algebraic equations of the form (31) can be divided into two groups:

discrete solution schemes;

continuous solution schemes.

Discrete solution schemes were discussed above. Note that the main disadvantages of the methods listed above are:

dependence on the initial conditions or on the interval where the root is sought;

relatively low convergence speed;

the absence of rules for moving from one root of equation (31) to another when there are several of them.

When continuous schemes are used to solve equation (31), the process of finding the roots is carried out by solving a corresponding ordinary differential equation.

Let the function be defined and monotonic in the region of interest and have a finite derivative there. The problem of finding the roots of equation (31) by the continuous analogue of the simple iteration method can then be considered as the limit, as t → ∞, of the solution of the Cauchy problem (33)

if this limit exists. Let x(t) denote the solution of the Cauchy problem (33) and let x* be the desired solution of equation (31), so that the identity f(x*) = 0 must hold. Introducing the notation z = x - x* for the deviation and subtracting the last equation from (33), we have

Expanding f in a Taylor series in the neighborhood of the root, preserving the linear terms, and substituting the resulting expression into (34), we obtain a differential equation in the deviation, whose solution has the form (35).

We see that the condition for convergence to the root is that the exponent in (35) be negative, since in this case the deviation tends to zero as t → ∞ and, therefore, the solution of (33) tends to the root. Assuming that the function is monotonic in the region considered above, this condition can be extended to the entire region. Thus, the condition for the applicability of the continuous scheme of the simple iteration method (33) is the requirement (36).

Continuous solution schemes have a higher convergence rate and higher solution accuracy than the corresponding discrete schemes. However, the problems of dependence on the initial conditions and of the absence of rules for moving from root to root, when equation (31) has more than one solution, remain open.

As can be seen from a comparison of the differential equation (33) with equation (31), the left-hand side of the latter is replaced by the derivative. This replacement makes the solution of problem (33) only a rough approximation to the solution of problem (31). It entails not only a large error in the calculations, but also a decrease in their speed.

Let us rewrite equation (31) in the form (37),

where μ is a small parameter, 0 < μ ≪ 1.

The transition from problem (31) to problem (37) is theoretically justified, since the integral curves that are solutions of the equation with the small parameter (37) pass through all the solutions of equation (31). The problem of finding the roots of this equation by the continuous singular analogue of the simple iteration method can be considered as the limit, as t → ∞, of the solution of a Cauchy problem of the form (38),

if this limit exists.

Carrying out reasoning similar to that given above, we find that the solution of equation (37) in the neighborhood of the root has the form (39).

In this case, since the sign of the exponent does not change, the convergence condition (36) remains the same.

The resulting modification of the classical solution schemes does not depend on the initial conditions and gives higher solution accuracy. To demonstrate the faster rate of convergence, note that iterative methods never give an exact solution, so we fix a required accuracy of the solution. Denote the moments at which the solution is found to this accuracy by the classical and by the modified method. Using the solutions (35) and (39), we can write inequalities relating these moments.

From these relations it is clear that the modified method reaches the required accuracy earlier: the speed of convergence when solving the problem by the modified methods is several times higher than by the classical ones.
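As a hedged illustration of the continuous approach (the concrete test equation x³ - 2 = 0, the sign convention μ·dx/dt = -f(x), and all parameter values are assumptions chosen for demonstration, not the exact scheme (38) of the text), the root can be tracked by integrating the singularly perturbed ODE with the Euler scheme:

```python
# Continuous scheme for solving f(x) = 0 via the singularly perturbed
# ODE  mu * dx/dt = -f(x). The function f(x) = x**3 - 2, the value of
# mu, the step and the starting point are all illustrative assumptions.
def continuous_root(f, x0, mu=0.1, h=1e-3, t_end=10.0):
    x, t = x0, 0.0
    while t < t_end:
        x -= (h / mu) * f(x)   # Euler step for mu*x' = -f(x)
        t += h
    return x

root = continuous_root(lambda x: x**3 - 2, x0=2.0)
# root approximates the real cube root of 2, about 1.2599
```

Decreasing μ compresses the transient: the same accuracy is reached at a smaller "time" t, which mirrors the speed-up argument above (at the price of a stricter stability restriction on the step h).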

At present, Adams methods are among the most promising numerical integration methods for solving the Cauchy problem. It has been proved that when multi-step Adams methods of up to the 12th order are applied to the Cauchy problem, the stability region decreases; with a further increase of the order, the stability region, as well as the accuracy of the method, grows. In addition, for the same accuracy, multi-step methods require fewer evaluations of the right-hand sides of the differential equations per integration step than Runge-Kutta methods. A further advantage of Adams methods is that they make it easy to change the integration step and the order of the method.

In practice, two types of Adams methods are widely used - explicit and implicit. Explicit methods are known as Adams-Bashforth methods, implicit methods are known as Adams-Moulton methods.

Let us consider the use of numerical methods to solve the Cauchy problem

When problem (2.1) is solved by one-step methods, the value yn+1 depends only on the information at the previous point xn. One may expect that greater accuracy can be achieved by using information about several previous points xn, xn-1, ..., xn-k. Multi-step methods are based on this idea.

Most multi-step methods arise from the following approach. If we substitute the exact solution y(x) into equation (2.1) and integrate the equation over the segment [xn, xn+1], we obtain:

Replacing the function f(x, y(x)) in formula (2.2) with an interpolation polynomial P(x), we obtain an approximate method

To construct the polynomial P(x), we assume that yn, yn-1, ..., yn-k are approximations to the solution at the points xn, xn-1, ..., xn-k, and that the nodes xi are spaced uniformly with step h. Then fi = f(xi, yi), i = n, n-1, ..., n-k, are approximations to f(x, y(x)) at the points xn, xn-1, ..., xn-k.

As P(x) we take the interpolation polynomial of degree k satisfying the conditions P(xi) = fi, i = n, n-1, ..., n-k.

If we integrate this polynomial explicitly, we get the following method:

When k=0, the polynomial P (x) is a constant equal to fn, and formula (2.4) turns into the usual Euler method.

For k = 1, the polynomial P(x) is the linear function passing through the points (xn-1, fn-1) and (xn, fn), i.e.

P(x) = fn + (fn - fn-1)(x - xn)/h.

Integrating this polynomial from xn to xn+1, we obtain the two-step method

yn+1 = yn + (h/2)(3fn - fn-1), (2.6)

which uses information at the two points xn-1 and xn.

If k = 2, then P(x) is the quadratic polynomial interpolating the data (xn-2, fn-2), (xn-1, fn-1) and (xn, fn). It can be shown that the corresponding method has the form

yn+1 = yn + (h/12)(23fn - 16fn-1 + 5fn-2). (2.7)

If k = 3, then the corresponding method is given by the formula

yn+1 = yn + (h/24)(55fn - 59fn-1 + 37fn-2 - 9fn-3). (2.8)

For k = 4 we have

yn+1 = yn + (h/720)(1901fn - 2774fn-1 + 2616fn-2 - 1274fn-3 + 251fn-4). (2.9)

Note that method (2.7) is a three-step method, (2.8) a four-step method, and (2.9) a five-step method. Formulas (2.6) - (2.9) are known as the Adams-Bashforth methods. Method (2.6) has second-order accuracy, so it is called the second-order Adams-Bashforth method. Similarly, methods (2.7), (2.8) and (2.9) are called the third-, fourth- and fifth-order Adams-Bashforth methods, respectively.

Continuing this process, using an increasing number of previous points, as well as an interpolating polynomial of a higher degree, we obtain Adams-Bashforth methods of arbitrarily high order.
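The consistency of these coefficient tables is easy to check mechanically: since each Adams-Bashforth formula must be exact for f ≡ const, its coefficients must sum to the common denominator. A small sketch (the coefficient rows below are the standard second- to fifth-order Adams-Bashforth tables, restated here as an assumption):

```python
# Consistency check for Adams-Bashforth coefficients: integrating
# f == 1 over one step must give exactly h, so each coefficient row
# must sum to its denominator.
ab_tables = {
    2: ([3, -1], 2),
    3: ([23, -16, 5], 12),
    4: ([55, -59, 37, -9], 24),
    5: ([1901, -2774, 2616, -1274, 251], 720),
}
for order, (coeffs, denom) in ab_tables.items():
    assert sum(coeffs) == denom, order
print("all Adams-Bashforth rows consistent")
```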

Multi-step methods introduce difficulties that do not arise with one-step methods. These difficulties become clear if, for example, we turn to the fifth-order Adams-Bashforth method (2.9).

In problem (2.1) the initial value y0 is given, but at n = 0 the calculation by formula (2.9) requires information at the points x-1, x-2, x-3, x-4, which is naturally missing. The usual way out of this situation is to use some one-step method of the same order of accuracy, such as a Runge-Kutta method, until enough values are obtained for the multi-step method to start working. Alternatively, one can use a one-step method on the first step, a two-step method on the second, and so on, until all the starting values are obtained. It is essential that these starting values be calculated with the same accuracy with which the final method will work. Since the starting methods have a lower order of accuracy, at the beginning one has to compute with a smaller step and use more intermediate points.

The derivation of methods (2.6) - (2.9) is based on replacing the function f(x, y) with an interpolation polynomial P(x). A well-known theorem establishes the existence and uniqueness of such a polynomial: if the nodes x0, x1, ..., xn are distinct, then for any f0, f1, ..., fn there exists a unique polynomial P(x) of degree at most n such that P(xi) = fi, i = 0, 1, ..., n.

Although the interpolation polynomial is unique, there are several ways of representing it. Lagrange polynomials are used most often, but they turn out to be inconvenient when a node has to be added to (or removed from) the data set. In such cases a different representation of the interpolation polynomial, Newton's representation, is useful.

The polynomial Pn+1(x) can be written as

Pn+1(x) = Pn(x) + f[x0, x1, ..., xn+1](x - x0)(x - x1)···(x - xn), (2.11)

where f[x0, ..., xn+1] is a divided difference: adding a new node adds just one new term.

The representation of the interpolation polynomial in the form (2.11) can in a number of cases be especially useful in practice.
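A sketch of the divided-difference computation behind Newton's representation (the sample data taken from f(x) = x² is an assumed example):

```python
# Newton divided-difference form of the interpolation polynomial:
# adding one more node only appends one more coefficient and term.
def divided_differences(xs, fs):
    coef = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] = f[x0, ..., xk]

def newton_eval(xs, coef, x):
    # Horner-like evaluation of the Newton form
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
fs = [x**2 for x in xs]          # data from f(x) = x^2 (assumed example)
coef = divided_differences(xs, fs)
# newton_eval(xs, coef, x) reproduces x^2 exactly for this data
```

Since the data come from a quadratic, the highest divided difference vanishes, which is exactly the behavior that makes the Newton form convenient for deciding when to stop adding nodes.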

The Adams-Bashforth methods use the already known values at the points xn, xn-1, ..., xn-k. When constructing the interpolation polynomial, one can also use the point xn+1, i.e. the set xn+1, xn, xn-1, ..., xn-k. This gives rise to a class of implicit multi-step methods known as the Adams-Moulton methods.

If k = 0, then P(x) is the linear function passing through the points (xn, fn) and (xn+1, fn+1), and the corresponding method

yn+1 = yn + (h/2)(fn + fn+1) (2.12)

is the second-order Adams-Moulton method.

For k = 1, 2, 3 we obtain the corresponding methods

yn+1 = yn + (h/12)(5fn+1 + 8fn - fn-1), (2.13)

yn+1 = yn + (h/24)(9fn+1 + 19fn - 5fn-1 + fn-2), (2.14)

yn+1 = yn + (h/720)(251fn+1 + 646fn - 264fn-1 + 106fn-2 - 19fn-3) (2.15)

of the third, fourth and fifth orders of approximation. Relations (2.12) - (2.15) contain the desired value yn+1 implicitly, so iterative methods are needed to implement them.

In practice, equations (2.12) - (2.15) are usually not solved directly; instead, the explicit and implicit forms are used together, which leads to a predictor-corrector method.

For example, for the second-order Adams method, using the notation in which r is the iteration number, we have the following calculation scheme for r = 1:

This process is called the PECE method (P means applying the predictor formula, C applying the corrector formula, E evaluating the function f). The calculation can be shortened by discarding the last evaluation; this leads to the so-called PEC method.
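One way to sketch the PECE cycle for the second-order pair, with an Adams-Bashforth predictor and a trapezoidal Adams-Moulton corrector (the model problem y' = -y and the exact starting value y1 = exp(-h) are assumptions for illustration):

```python
import math

# One PECE cycle per step for the second-order predictor-corrector:
#   P: yp = y_n + h/2*(3 f_n - f_{n-1})     (Adams-Bashforth)
#   E: fp = f(x_{n+1}, yp)
#   C: yc = y_n + h/2*(f_n + fp)            (Adams-Moulton, trapezoid)
#   E: f_{n+1} = f(x_{n+1}, yc)
# The model problem y' = -y with an exact starter is an assumed example.
def pece2(f, x0, y0, y1, h, steps):
    ys = [y0, y1]
    fs = [f(x0, y0), f(x0 + h, y1)]
    for n in range(1, steps):
        x1 = x0 + (n + 1) * h
        yp = ys[-1] + h / 2 * (3 * fs[-1] - fs[-2])   # P
        fp = f(x1, yp)                                # E
        yc = ys[-1] + h / 2 * (fs[-1] + fp)           # C
        ys.append(yc)
        fs.append(f(x1, yc))                          # E
    return ys

h = 0.1
ys = pece2(lambda x, y: -y, 0.0, 1.0, math.exp(-h), h, 10)
# ys[-1] approximates y(1) = exp(-1) with second-order accuracy
```

Dropping the final evaluation and storing fp instead would turn this into the PEC variant mentioned above, saving one evaluation of f per step.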

Let us consider a second way of solving equations (2.12) - (2.15). Formulas (2.12) - (2.15) can be rewritten as (2.17),

where gn contains the known quantities. It has been proved that if the step h is small enough (the precise condition involves the Lipschitz constant L of the function f), then equation (2.17) has a unique solution, which can be obtained by the iterative process (2.18)

with an arbitrary initial approximation.

Iterations in expression (2.18) continue until convergence is achieved. In this case the number of evaluations of the function f varies from point to point and can be rather large.

On the other hand, if the value of h is reduced, convergence can be achieved in a fixed number of iterations. This approach is called correction to convergence.

At first glance it may seem that the explicit multi-step method is the simplest from a computational point of view. However, in practice purely explicit methods are used very rarely, since the implicit Adams-Moulton method is more accurate than the explicit Adams-Bashforth method of the same order. For example, the computational scheme of the fifth-order Adams-Moulton method is as follows:

Adams methods up to the fifth order inclusive can be used to solve ordinary differential equations that do not require a high degree of accuracy.

As with the Adams-Bashforth methods, an important issue when using the Adams-Moulton methods is the choice of the optimal ratio between the integration step and the order of the method. It should be noted that, when creating effective algorithms and programs, increasing the order of the method is preferable to decreasing the integration step.

To solve more complex problems it is necessary to apply Adams methods of higher order. Table 2.1 gives the coefficient values for the Adams methods: the first line shows the order of the method; the second, the values of the coefficients Ck for the corresponding order k; the following lines, the pairs of coefficients Bkj and Mkj for the Adams-Bashforth and Adams-Moulton methods, respectively. Then, taking into account the data of Table 2.1, the coefficients in the expression

for the Adams-Bashforth method of kth order can be found from the relation

and for the Adams-Moulton method of kth order using a similar formula

The formulas of the Adams predictor-corrector methods from the 6th to the 16th order are as follows:

  • 6th order:
  • 7th order:
  • 8th order:
  • 9th order:
  • 10th order:
  • 11th order:
  • 12th order:
  • 13th order:
  • 14th order:
  • 15th order:
  • 16th order:

The formulas given above are best used for the practical solution of ordinary differential equations, or systems of first-order differential equations, with a constant integration step. If the integration step is variable in the course of solving an equation, then the Adams methods require special procedures for entering new initial data whenever the integration step changes.

We still have the same Cauchy problem.

f′(t) = F(t, f(t)),  a ≤ t ≤ b,  f(a) = fa.

In one-step methods the value f(tk+1) was determined only by the information at the previous point tk. It seems possible to increase the accuracy of the solution by using information at several previous points, if they are available. This is what is done in the so-called multi-step methods. A first glance at the statement of the problem makes it obvious that at the starting moment t = a there is only one initial condition, and if we intend to work with two, three or four previous points, there is no way to obtain the missing ones except by one-step methods. That is indeed what is done; a combined solution algorithm might look like this:

on the first step the second point is obtained by a one-step method; on the second, the third point by a two-step method; on the third, the fourth point by a three-step method, and so on, until enough previous points have been collected for the main method that is to be used.

Another option is to obtain the entire starting set of points using a one-step method, such as fourth-order Runge-Kutta. Since the multi-step methods are assumed to be more accurate, the starting one-step method usually uses a larger number of intermediate points, i.e. works with a shorter step.

Multi-step algorithms can be constructed as follows. Considering that

f(tk+1) = f(tk) + ∫[tk, tk+1] F(t, f(t)) dt,

we can numerically integrate the right-hand side of the ODE under the integral sign. If we use the rectangle method (the interpolating polynomial for the integrand is a constant), we obtain the usual Euler method. If we use two points and a first-order interpolation polynomial

p(t) = Fk + (Fk - Fk-1)(t - tk)/h,

then integrating it from tk to tk+1 gives the following algorithm:

f(tk+1) = f(tk) + 0.5h(3Fk - Fk-1).

Similarly, for three points we will have a quadratic interpolating polynomial through the data (tk-2, Fk-2), (tk-1, Fk-1), (tk, Fk), and its integration gives the algorithm:

f(tk+1) = f(tk) + (h/12)(23Fk - 16Fk-1 + 5Fk-2).

For four points the polynomial is cubic, and its integration gives:

f(tk+1) = f(tk) + (h/24)(55Fk - 59Fk-1 + 37Fk-2 - 9Fk-3).

In principle, we could continue this way indefinitely.

The algorithms given above are called the Adams-Bashforth methods of the second, third and fourth orders.

Formally, when constructing the interpolation polynomial, one may use, in addition to the already calculated points, future points tk+1, tk+2, ...; in the simplest case the set

tk+1, tk, tk-1, ..., tk-N.

This generates the class of so-called Adams-Moulton methods. In the four-step version it operates on the data (tk+1, Fk+1), (tk, Fk), (tk-1, Fk-1), (tk-2, Fk-2), and its algorithm is:

f(tk+1) = f(tk) + (h/24)(9Fk+1 + 19Fk - 5Fk-1 + Fk-2).

Of course, the calculation cannot be carried out with the still-missing value Fk+1, so the Adams algorithms are combined into a sequence of Adams-Bashforth and Adams-Moulton steps, thereby obtaining the so-called predict-and-correct methods. For example, the fourth-order predict-and-correct method looks like this: first we predict by the Adams-Bashforth algorithm using the "past" points

f(tk+1) = f(tk) + (h/24)(55Fk - 59Fk-1 + 37Fk-2 - 9Fk-3).

Then we calculate the approximate value of the right-hand side of the equation

Fk+1 = F(tk+1, f(tk+1)).

And finally, we correct f(tk+1) using this approximate value:

f(tk+1) = f(tk) + (h/24)(9Fk+1 + 19Fk - 5Fk-1 + Fk-2).
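The predict-and-correct sequence just described can be sketched as follows (the model ODE f' = -f and the exact starting values are assumptions chosen for illustration):

```python
import math

# Fourth-order predict-and-correct scheme:
#   predictor (Adams-Bashforth):
#     yp = y_k + h/24*(55 F_k - 59 F_{k-1} + 37 F_{k-2} - 9 F_{k-3})
#   corrector (Adams-Moulton):
#     yc = y_k + h/24*(9 F_{k+1} + 19 F_k - 5 F_{k-1} + F_{k-2})
# The model ODE f' = -f with exact starters is an assumed example.
def abm4(F, t0, f0, h, steps, starters):
    ys = [f0] + list(starters)          # values at t0, t1, t2, t3
    Fs = [F(t0 + i * h, y) for i, y in enumerate(ys)]
    for k in range(3, steps):
        t_next = t0 + (k + 1) * h
        yp = ys[k] + h / 24 * (55*Fs[k] - 59*Fs[k-1] + 37*Fs[k-2] - 9*Fs[k-3])
        Fp = F(t_next, yp)              # approximate right-hand side
        yc = ys[k] + h / 24 * (9*Fp + 19*Fs[k] - 5*Fs[k-1] + Fs[k-2])
        ys.append(yc)
        Fs.append(F(t_next, yc))
    return ys

h = 0.1
starters = [math.exp(-i * h) for i in (1, 2, 3)]
ys = abm4(lambda t, f: -f, 0.0, 1.0, h, 10, starters)
# ys[-1] approximates f(1) = exp(-1) with fourth-order accuracy
```

Each step costs two evaluations of F (one after the prediction, one after the correction) against four for a fourth-order Runge-Kutta step.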

The most effective programs available that allow the user to change the step size and the order of the method are based on Adams methods of high order (above 10). Experience with these programs shows that differences in their implementation can affect accuracy more significantly than differences in the intrinsic properties of the methods themselves.