Numerical integration

For a function $f(x)$, we need to compute the future state at time $t+h$ from the state at time $t$, i.e. obtain $f(t+h)$ from $f(t)$. To do this we expand $f$ in a Taylor series, $f(x+\Delta x)=f(x)+f'(x)\Delta x+\frac{1}{2}f''(x)\Delta x^2+\frac{1}{6}f'''(x)\Delta x^3+\cdots$, so all of the following methods are truncations of this series. Integration methods can be classified by the highest order of the Taylor expansion they retain: Euler integration only uses the first derivative, so it is a first-order method; its per-step error is $O(h^2)$, so halving the step size $h$ reduces the error to a quarter.

Euler integration

The classic approach: approximating a curve with straight-line segments

Explicit Euler

The simplest integration method, but it can blow up numerically (become unstable)

$\vec v(t_1)=\vec v(t_0)+M^{-1}\vec F(t_0)\Delta t $

$x(t_1)=x(t_0)+\vec v(t_0)\Delta t$
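A minimal C++ sketch of one explicit Euler step, assuming a single 1D particle of mass $m$ driven by a hypothetical spring force $F(x)=-kx$ (the force model and all names are illustrative, not from the text above):

```cpp
// One explicit Euler step for a 1D particle; the spring force F(x) = -k*x
// is an assumed example force, not something specified in the text.
void explicitEulerStep(double& x, double& v, double m, double k, double dt) {
    double a = -k * x / m;  // M^{-1} F(t0), evaluated at the current state
    x += v * dt;            // x(t1) = x(t0) + v(t0) * dt   (uses the OLD velocity)
    v += a * dt;            // v(t1) = v(t0) + M^{-1} F(t0) * dt
}
```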

Implicit Euler

Updates the current state using the future state; this requires solving a system of equations at every step and introduces numerical dissipation (damping)

$ \vec v(t_1)=\vec v(t_0)+M^{-1}\vec F(t_1)\Delta t $

$ x(t_1)=x(t_0)+\vec v(t_1)\Delta t $
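A sketch of the same assumed 1D spring example under implicit Euler. Because the force is evaluated at the unknown future state, each step is an equation in $\vec v(t_1)$; for this linear force it can be solved in closed form instead of assembling a general linear system:

```cpp
// One implicit Euler step for the assumed force F(x) = -k*x.
// From v1 = v0 - (k/m)*(x0 + v1*dt)*dt it follows that
// v1 = (v0 - (k/m)*x0*dt) / (1 + k*dt^2/m).
void implicitEulerStep(double& x, double& v, double m, double k, double dt) {
    v = (v - k / m * x * dt) / (1.0 + k * dt * dt / m);  // solve for v(t1)
    x += v * dt;                                         // x(t1) = x(t0) + v(t1) * dt
}
```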

Semi-Implicit Euler

A combination of explicit and implicit Euler that improves numerical stability

$\vec v(t_1)=\vec v(t_0)+M^{-1}\vec F(t_0)\Delta t$

$x(t_1)=x(t_0)+\vec v(t_1)\Delta t$
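The same assumed spring example as one semi-implicit step: the velocity is advanced with the current force, then the new velocity moves the position:

```cpp
// One semi-implicit (symplectic) Euler step for the assumed force F(x) = -k*x.
void semiImplicitEulerStep(double& x, double& v, double m, double k, double dt) {
    v += -k * x / m * dt;   // v(t1) = v(t0) + M^{-1} F(t0) * dt
    x += v * dt;            // x(t1) = x(t0) + v(t1) * dt   (uses the NEW velocity)
}
```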

Leapfrog

A variant of semi-implicit Euler: velocity and position are staggered by half a time step and updated with the midpoint method, which gives smaller error

$ \vec v(t_{0.5})=\vec v(t_{-0.5})+M^{-1}\vec F(t_0)\Delta t $

$ x(t_1)=x(t_0)+\vec v(t_{0.5})\Delta t $
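A leapfrog sketch with the same assumed spring force. The velocity is stored at half steps; a common way to bootstrap $\vec v(t_{-0.5})$ is half an explicit Euler step backwards from $\vec v(t_0)$ (that bootstrap is an assumption, not stated above):

```cpp
// One leapfrog step: vHalf enters as v(t_{-0.5}) and leaves as v(t_{+0.5}).
void leapfrogStep(double& x, double& vHalf, double m, double k, double dt) {
    vHalf += -k * x / m * dt;   // v(t_{0.5}) = v(t_{-0.5}) + M^{-1} F(t_0) * dt
    x += vHalf * dt;            // x(t_1) = x(t_0) + v(t_{0.5}) * dt
}
```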

Velocity Verlet

Introduces velocity to simplify Verlet integration; compared with leapfrog, the midpoint rule is replaced by the trapezoidal rule, so velocity and position stay synchronized in time

$ x(t_1)=x(t_0)+\vec v(t_0)\Delta t+\frac{1}{2}M^{-1}\vec F(t_0)\Delta t^2 $

$ \vec v(t_1)=\vec v(t_0)+\frac{1}{2}M^{-1}(\vec F(t_0)+\vec F(t_1))\Delta t $
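A velocity Verlet sketch with the same assumed spring force; the force is evaluated at both the old and the new position, and the velocity update averages the two (trapezoidal rule):

```cpp
// One velocity Verlet step for the assumed force F(x) = -k*x.
void velocityVerletStep(double& x, double& v, double m, double k, double dt) {
    double a0 = -k * x / m;              // M^{-1} F(t0)
    x += v * dt + 0.5 * a0 * dt * dt;    // position update
    double a1 = -k * x / m;              // M^{-1} F(t1), force at the new position
    v += 0.5 * (a0 + a1) * dt;           // trapezoidal velocity update
}
```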

Runge-Kutta

The explicit Euler method is in fact a first-order Runge-Kutta method. Higher-order Runge-Kutta methods evaluate the derivative at intermediate points within the step (themselves estimated from earlier derivatives) and use those values for the full update. The second-order version below (RK2, the midpoint method) first takes a half step using the initial derivatives, then uses the midpoint derivatives for the full step.

RK2

$ x(t_{0.5})=x(t_0)+\frac{1}{2}\vec v(t_0)\Delta t $

$ \vec v(t_{0.5})=\vec v(t_0)+\frac{1}{2}M^{-1}\vec F(t_0)\Delta t $

$ x(t_1) = x(t_0)+\vec v(t_{0.5})\Delta t $

$ \vec v(t_1)=\vec v(t_0)+M^{-1}\vec F(t_{0.5})\Delta t $
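An RK2 (midpoint) sketch of the equations above, again with the assumed spring force $F(x)=-kx$:

```cpp
// One RK2 (midpoint) step for the assumed force F(x) = -k*x.
void rk2Step(double& x, double& v, double m, double k, double dt) {
    double xMid = x + 0.5 * v * dt;             // x(t_{0.5})
    double vMid = v + 0.5 * (-k * x / m) * dt;  // v(t_{0.5}) from F(t_0)
    x += vMid * dt;                             // x(t_1) uses the midpoint velocity
    v += -k * xMid / m * dt;                    // v(t_1) uses the midpoint force F(t_{0.5})
}
```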

Linear system

The classical problem of solving $Ax=b$: computing $A^{-1}$ directly is too costly, so a variety of specialized methods are used instead, each playing to its own strengths

Matrix decomposition

Decompose a complex matrix into the product of several simple matrices so that the system becomes easy to solve. Taking the Eigen library as an example, the most commonly used factorizations are LU decomposition, which factors $A$ into the product of a lower triangular matrix $L$ and an upper triangular matrix $U$ and is essentially Gaussian elimination, and LLT (Cholesky) decomposition, which requires $A$ to be symmetric positive definite, factors it into a lower triangular matrix $L$ times its transpose $L^T$, and is roughly twice as fast as LU.
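A minimal Eigen sketch of both factorizations, solving $Ax=b$ without ever forming $A^{-1}$ (the 3x3 matrix is an arbitrary symmetric positive definite example):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d A;
    A << 4, 1, 0,
         1, 3, 1,
         0, 1, 2;                  // arbitrary symmetric positive definite example
    Eigen::Vector3d b(1.0, 2.0, 3.0);

    Eigen::Vector3d xLu  = A.partialPivLu().solve(b);  // LU decomposition
    Eigen::Vector3d xLlt = A.llt().solve(b);           // LLT (Cholesky), needs SPD A

    std::cout << xLu.transpose() << "\n" << xLlt.transpose() << "\n";
}
```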

Linear iteration

Iterate until a fixed number of iterations or a target accuracy is reached

Jacobi iteration

For $Ax=b$, split $A$ into $D+L+R$, where $D$ is the diagonal part, $L$ the strictly lower triangular part, and $R$ the strictly upper triangular part:
$$
A= \left[ \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right] \ b= \left[ \begin{matrix} b_1 \\ b_2 \\ b_3 \end{matrix} \right] \ D= \left[ \begin{matrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{matrix} \right]
$$

$$
L= \left[ \begin{matrix} 0 & 0 & 0 \\ a_{21} & 0& 0\\ a_{31} & a_{32} & 0 \end{matrix} \right] \ R= \left[ \begin{matrix} 0 & a_{12} & a_{13} \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{matrix} \right] \ f= \left[ \begin{matrix} \frac{b_1}{a_{11}} \\ \frac{b_2}{a_{22}} \\ \frac{b_3}{a_{33}} \end{matrix} \right]
$$
$$B=I-D^{-1}A,\quad f=D^{-1}b,\quad x_{k+1}=Bx_k+f$$
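A Jacobi iteration sketch for a 3x3 system, matching the splitting above; the stopping rule (an iteration cap plus a tolerance on the update size) is an assumption:

```cpp
#include <Eigen/Dense>

// Jacobi iteration for Ax = b: every component of x_{k+1} is computed
// from the OLD iterate x_k only, which is why the sweep parallelizes easily.
Eigen::Vector3d jacobiSolve(const Eigen::Matrix3d& A, const Eigen::Vector3d& b,
                            int maxIters = 100, double tol = 1e-9) {
    Eigen::Vector3d x = Eigen::Vector3d::Zero();
    for (int k = 0; k < maxIters; ++k) {
        Eigen::Vector3d xNew;
        for (int i = 0; i < 3; ++i) {
            double sum = b(i);
            for (int j = 0; j < 3; ++j)
                if (j != i) sum -= A(i, j) * x(j);   // reads only old values x_k
            xNew(i) = sum / A(i, i);                 // divide by the diagonal D
        }
        if ((xNew - x).norm() < tol) return xNew;    // stop once the update is small
        x = xNew;
    }
    return x;
}
```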

Gauss-Seidel iteration

In Jacobi iteration, newly computed components are only used in the next sweep, after the whole round has finished. Gauss-Seidel instead uses each updated component immediately within the current sweep, which intuitively converges faster, but Jacobi iteration is naturally suited to parallelization
$$
\text{Jacobi:}\quad \begin{cases} x_1^{k+1}=\frac{1}{a_{11}}(-a_{12}x_2^k-a_{13}x_3^k+b_1)\\ x_2^{k+1}=\frac{1}{a_{22}}(-a_{21}x_1^k-a_{23}x_3^k+b_2)\\ x_3^{k+1}=\frac{1}{a_{33}}(-a_{31}x_1^k-a_{32}x_2^k+b_3) \end{cases}
$$

$$
\text{Gauss-Seidel:}\quad \begin{cases} x_1^{k+1}=\frac{1}{a_{11}}(-a_{12}x_2^k-a_{13}x_3^k+b_1)\\ x_2^{k+1}=\frac{1}{a_{22}}(-a_{21}x_1^{k+1}-a_{23}x_3^k+b_2)\\ x_3^{k+1}=\frac{1}{a_{33}}(-a_{31}x_1^{k+1}-a_{32}x_2^{k+1}+b_3) \end{cases}
$$
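The same sketch rewritten as Gauss-Seidel: the only change is that each component is overwritten in place, so rows computed later in the sweep already see the updated values (stopping rule again assumed):

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Gauss-Seidel iteration for Ax = b: updated components are used immediately
// within the same sweep, which usually converges faster than Jacobi but
// makes the sweep inherently sequential.
Eigen::Vector3d gaussSeidelSolve(const Eigen::Matrix3d& A, const Eigen::Vector3d& b,
                                 int maxIters = 100, double tol = 1e-9) {
    Eigen::Vector3d x = Eigen::Vector3d::Zero();
    for (int k = 0; k < maxIters; ++k) {
        double maxChange = 0.0;
        for (int i = 0; i < 3; ++i) {
            double sum = b(i);
            for (int j = 0; j < 3; ++j)
                if (j != i) sum -= A(i, j) * x(j);   // j < i already updated, j > i still old
            double xNew = sum / A(i, i);
            maxChange = std::max(maxChange, std::abs(xNew - x(i)));
            x(i) = xNew;                             // overwrite in place
        }
        if (maxChange < tol) break;                  // stop once the sweep barely changes x
    }
    return x;
}
```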