Ordinary Differential Equations, Applications and Discretizations
Chris Rackauckas
October 1st, 2020
Youtube Video Link Part 1
Youtube Video Link Part 2
Now that we have a sense of parallelism, let's return to our thread on scientific machine learning and start constructing parallel algorithms for the integration of scientific models. We previously introduced discrete dynamical systems and their asymptotic behavior. However, many physical systems are not discrete but continuous. In this discussion we will understand how to numerically compute ordinary differential equations by transforming them into discrete dynamical systems, and use this to build simulation techniques for physical systems.
What is an Ordinary Differential Equation?
An ordinary differential equation is an equation defined by a relationship on the derivative. In its general form we have that
\[ u' = f(u,p,t) \]
describes the evolution of some variable $u(t)$ which we would like to solve for. In its simplest sense, the solution to the ordinary differential equation is just the integral, since by taking the integral of both sides and applying the Fundamental Theorem of Calculus we have that
\[ u(t_f) = u(t_0) + \int_{t_0}^{t_f} f(u(t),p,t)dt \]
The difficulty of this equation is that the variable $u(t)$ is unknown and dependent on $t$, meaning that the integral cannot readily be solved by simple calculus. In fact, in almost all cases there exists no analytical solution for $u$ which is readily available. However, we can understand the behavior by looking at some simple cases.
Solving Ordinary Differential Equations in Julia
To solve an ordinary differential equation in Julia, one can use the DifferentialEquations.jl package to define the differential equation you'd like to solve. Let's say we want to solve the Lorenz equations:
\[ \begin{align} \frac{dx}{dt} &= σ(y-x) \\ \frac{dy}{dt} &= x(ρ-z) - y \\ \frac{dz}{dt} &= xy - βz \\ \end{align} \]
which was the system used in our investigation of discrete dynamics. The first thing we need to do is give it this differential equation. We can either write it in an in-place form f(du,u,p,t) or an out-of-place form f(u,p,t). Let's write it in the in-place form:
function lorenz(du,u,p,t)
  du[1] = p[1]*(u[2]-u[1])
  du[2] = u[1]*(p[2]-u[3]) - u[2]
  du[3] = u[1]*u[2] - p[3]*u[3]
end
lorenz (generic function with 3 methods)
Question: How could I maybe speed this up a little?
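One possibility is to use the out-of-place form with StaticArrays.jl, so that the small, fixed-size state avoids heap allocation entirely. As a sketch (lorenz_static is an illustrative name, not the version used in the rest of this lecture):

using StaticArrays

# Sketch: a static-array, out-of-place right-hand side. The compiler can
# specialize on the fixed size 3, and no heap allocation happens per call.
function lorenz_static(u,p,t)
  dx = p[1]*(u[2]-u[1])
  dy = u[1]*(p[2]-u[3]) - u[2]
  dz = u[1]*u[2] - p[3]*u[3]
  SVector(dx,dy,dz)
end

One would then also use a static initial condition, e.g. u0 = @SVector [1.0,0.0,0.0].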
Next we give an initial condition. Here, this is a vector of equations, so our initial condition has to be a vector. Let's choose the following initial condition:
u0 = [1.0,0.0,0.0]
3-element Vector{Float64}:
 1.0
 0.0
 0.0
Notice that I made sure to use Float64 values in the initial condition. The Julia library's functions are generic and internally use the corresponding types that you give it. Integer types do not bode well for continuous problems.
Next, we have to tell it the timespan to solve on. Here, let's solve from time 0 to 100. This means that we would use:
tspan = (0.0,100.0)
(0.0, 100.0)
Now we need to define our parameters. We will use the same ones as from our discrete dynamical system investigation.
p = (10.0,28.0,8/3)
(10.0, 28.0, 2.6666666666666665)
These describe an ODEProblem. Let's bring in DifferentialEquations.jl and define the ODE:
using DifferentialEquations
prob = ODEProblem(lorenz,u0,tspan,p)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 100.0)
u0: 3-element Vector{Float64}:
 1.0
 0.0
 0.0
Now we can solve it by calling solve:
sol = solve(prob)
retcode: Success
Interpolation: specialized 4th order "free" interpolation, specialized 2nd order "free" stiffness-aware interpolation
t: 1281-element Vector{Float64}:
   0.0
   3.5678604836301404e-5
   0.0003924646531993154
   0.0032624087100077666
   0.009058076582749423
   0.016956470605311864
   0.027689959227781235
   0.04185635103821218
   0.060240410627700816
   0.0836854113984534
   ⋮
  99.4386918268877
  99.53430077629385
  99.62315909447312
  99.70299380266312
  99.7727905290976
  99.8468719463341
  99.92909223837675
  99.99676433660723
 100.0
u: 1281-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9996434557625105, 0.0009988049817849058, 1.781434788799189e-8]
 [0.9961045497425811, 0.010965399721242457, 2.1469553658389193e-6]
 [0.9693591550149778, 0.08977063252764937, 0.0001438019170127846]
 [0.924204355043198, 0.242289149116772, 0.0010461625397616113]
 [0.8800455796215916, 0.4387364900041282, 0.0034242599668253874]
 [0.8483309836977301, 0.6915629475762161, 0.008487624968655677]
 [0.8495036679850485, 1.0145426495980538, 0.018212090123888365]
 [0.9139069520070257, 1.4425599557988966, 0.03669382071284047]
 [1.0888638157267199, 2.052326562845511, 0.07402573450924263]
 ⋮
 [3.8662182925863604, 6.477675362450072, 13.7026651297458]
 [7.596364784839832, 12.931698345552782, 15.295013031866944]
 [12.999465900688321, 18.69084638217891, 25.77003643982645]
 [14.725104821648667, 12.127682492556511, 37.85997432096073]
 [10.45813774612691, 2.2355517835584817, 37.102951902956555]
 [4.858569386282909, -1.238820833292597, 30.50492838887987]
 [1.4267543691218378, -1.0945246877110832, 24.22832564643038]
 [0.2911991490440339, -0.7513530324542533, 20.18373424388144]
 [0.25815851151351493, -0.7419175275518352, 20.00966894969765]
To see what the solution looks like, we can call plot:
using Plots
plot(sol)
We can also plot phase space diagrams by telling it which vars to compare on which axis. Let's plot this in the (x,y,z) plane:
plot(sol,vars=(1,2,3))
Note that the sentinel for time is 0, so we can also do (t,y,z) with:
plot(sol,vars=(0,2,3))
The equation is continuous and therefore the solution is continuous. We can see this by evaluating it at an arbitrary time point:
sol(0.5)
3-element Vector{Float64}:
  6.503654868223262
 -8.508354686562326
 38.091997244775655
which gives the current evolution at that time point.
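A few other standard access patterns on the solution object are worth knowing (these are part of the DifferentialEquations.jl solution interface):

sol.t[5]     # the fifth saved time point
sol.u[5]     # the saved state at that time point
sol(0.5)[2]  # interpolate to t = 0.5, then take the y-coordinate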
Differential Equations from Scientific Contexts
N-Body Problems and Astronomy
There are many different contexts in which differential equations show up. In fact, it's not a stretch to say that the laws in all fields of science are encoded in differential equations. The starting point for physics is Newton's law of gravitation, which defines an N-body ordinary differential equation system by describing the force between two particles as:
\[ F = G \frac{m_1m_2}{r^2} \]
where $G$ is the gravitational constant and $r$ is the Euclidean distance between the two particles. From here, we use the fact that
\[ F = ma \]
to receive differential equations in terms of the accelerations of each particle. The differential equation is a system, where we know the change in position is due to the current velocity:
\[ x' = v \]
and the change in velocity is the acceleration:
\[ v_i' = \frac{F_i}{m_i} = \sum_{j \neq i} G \frac{m_j (x_j - x_i)}{\Vert x_j - x_i \Vert^3} \]
where the sum runs over all of the other particles. Thus we have a vector of position derivatives and a vector of velocity derivatives that evolve over time to give the evolving positions and velocities.
An example of this is the Pleiades problem, which is an approximation to a 7-star chaotic system. It can be written as:
using OrdinaryDiffEq
function pleiades(du,u,p,t)
  @inbounds begin
    x = view(u,1:7)   # x
    y = view(u,8:14)  # y
    v = view(u,15:21) # x′
    w = view(u,22:28) # y′
    du[1:7] .= v
    du[8:14].= w
    for i in 15:28
      du[i] = zero(u[1])
    end
    for i=1:7,j=1:7
      if i != j
        r = ((x[i]-x[j])^2 + (y[i] - y[j])^2)^(3/2)
        du[14+i] += j*(x[j] - x[i])/r
        du[21+i] += j*(y[j] - y[i])/r
      end
    end
  end
end
tspan = (0.0,3.0)
prob = ODEProblem(pleiades,[3.0,3.0,-1.0,-3.0,2.0,-2.0,2.0,3.0,-3.0,2.0,0,0,-4.0,4.0,0,0,0,0,0,1.75,-1.5,0,0,0,-1.25,1,0,0],tspan)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 3.0)
u0: 28-element Vector{Float64}:
  3.0
  3.0
 -1.0
 -3.0
  2.0
 -2.0
  2.0
  3.0
 -3.0
  2.0
  ⋮
  1.75
 -1.5
  0.0
  0.0
  0.0
 -1.25
  1.0
  0.0
  0.0
where we assume $m_i = i$. When we solve this equation we receive the following:
sol = solve(prob,Vern8(),abstol=1e-10,reltol=1e-10)
plot(sol)
tspan = (0.0,200.0)
prob = ODEProblem(pleiades,[3.0,3.0,-1.0,-3.0,2.0,-2.0,2.0,3.0,-3.0,2.0,0,0,-4.0,4.0,0,0,0,0,0,1.75,-1.5,0,0,0,-1.25,1,0,0],tspan)
sol = solve(prob,Vern8(),abstol=1e-10,reltol=1e-10)
plot(sol,vars=((1:7),(8:14)))
Population Ecology: Lotka-Volterra
Population ecology's starting point is the Lotka-Volterra equations, which describe the interactions between a predator and a prey. In this model, the prey grows at an exponential rate but has a term that reduces its population by being eaten by the predator. The predator's growth is dependent on the available food (the amount of prey), and the predator has a decay rate due to old age. This model is then written as follows:
function lotka(du,u,p,t)
  du[1] = p[1]*u[1] - p[2]*u[1]*u[2]
  du[2] = -p[3]*u[2] + p[4]*u[1]*u[2]
end
p = [1.5,1.0,3.0,1.0]
prob = ODEProblem(lotka,[1.0,1.0],(0.0,10.0),p)
sol = solve(prob)
plot(sol)
Biochemistry: Robertson Equations
Biochemical equations commonly display large separations of timescales, which lead to a stiffness phenomenon that will be investigated later. The classic "hard" equations for ODE integration thus tend to come from biology (not physics!) due to this property. One of the standard models is the Robertson model, which can be described as:
using Sundials, ParameterizedFunctions
function rober(du,u,p,t)
  y₁,y₂,y₃ = u
  k₁,k₂,k₃ = p
  du[1] = -k₁*y₁+k₃*y₂*y₃
  du[2] =  k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
  du[3] =  k₂*y₂^2
end
prob = ODEProblem(rober,[1.0,0.0,0.0],(0.0,1e5),(0.04,3e7,1e4))
sol = solve(prob,Rosenbrock23())
plot(sol)
plot(sol, xscale=:log10, tspan=(1e-6, 1e5), layout=(3,1))
Chemical Physics: Pollution Models
Chemical reactions in physical models are also described as differential equation systems. The following is a classic model of dynamics between different species of pollutants:
k1=.35e0
k2=.266e2
k3=.123e5
k4=.86e-3
k5=.82e-3
k6=.15e5
k7=.13e-3
k8=.24e5
k9=.165e5
k10=.9e4
k11=.22e-1
k12=.12e5
k13=.188e1
k14=.163e5
k15=.48e7
k16=.35e-3
k17=.175e-1
k18=.1e9
k19=.444e12
k20=.124e4
k21=.21e1
k22=.578e1
k23=.474e-1
k24=.178e4
k25=.312e1
p = (k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16,k17,k18,k19,k20,k21,k22,k23,k24,k25)
function f(dy,y,p,t)
  k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16,k17,k18,k19,k20,k21,k22,k23,k24,k25 = p
  r1  = k1 *y[1]
  r2  = k2 *y[2]*y[4]
  r3  = k3 *y[5]*y[2]
  r4  = k4 *y[7]
  r5  = k5 *y[7]
  r6  = k6 *y[7]*y[6]
  r7  = k7 *y[9]
  r8  = k8 *y[9]*y[6]
  r9  = k9 *y[11]*y[2]
  r10 = k10*y[11]*y[1]
  r11 = k11*y[13]
  r12 = k12*y[10]*y[2]
  r13 = k13*y[14]
  r14 = k14*y[1]*y[6]
  r15 = k15*y[3]
  r16 = k16*y[4]
  r17 = k17*y[4]
  r18 = k18*y[16]
  r19 = k19*y[16]
  r20 = k20*y[17]*y[6]
  r21 = k21*y[19]
  r22 = k22*y[19]
  r23 = k23*y[1]*y[4]
  r24 = k24*y[19]*y[1]
  r25 = k25*y[20]
  dy[1]  = -r1-r10-r14-r23-r24+r2+r3+r9+r11+r12+r22+r25
  dy[2]  = -r2-r3-r9-r12+r1+r21
  dy[3]  = -r15+r1+r17+r19+r22
  dy[4]  = -r2-r16-r17-r23+r15
  dy[5]  = -r3+r4+r4+r6+r7+r13+r20
  dy[6]  = -r6-r8-r14-r20+r3+r18+r18
  dy[7]  = -r4-r5-r6+r13
  dy[8]  = r4+r5+r6+r7
  dy[9]  = -r7-r8
  dy[10] = -r12+r7+r9
  dy[11] = -r9-r10+r8+r11
  dy[12] = r9
  dy[13] = -r11+r10
  dy[14] = -r13+r12
  dy[15] = r14
  dy[16] = -r18-r19+r16
  dy[17] = -r20
  dy[18] = r20
  dy[19] = -r21-r22-r24+r23+r25
  dy[20] = -r25+r24
end
f (generic function with 2 methods)
u0 = zeros(20)
u0[2]  = 0.2
u0[4]  = 0.04
u0[7]  = 0.1
u0[8]  = 0.3
u0[9]  = 0.01
u0[17] = 0.007
prob = ODEProblem(f,u0,(0.0,60.0),p)
sol = solve(prob,Rodas5())
retcode: Success
Interpolation: specialized 4rd order "free" stiffness-aware interpolation
t: 29-element Vector{Float64}:
  0.0
  0.0013845590497824308
  0.003242540880561935
  0.007901605227525086
  0.016011572765091416
  0.02740615678429701
  0.044612092501151696
  0.07720629370280543
  0.11607398786651008
  0.1763063703057767
  ⋮
  3.676321507769747
  5.344945477147836
  7.985769209063678
 11.981251310096694
 17.452504634746386
 24.882247321193052
 34.66462280306778
 47.27763232418448
 60.0
u: 29-element Vector{Vector{Float64}}:
 [0.0, 0.2, 0.0, 0.04, 0.0, 0.0, 0.1, 0.3, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.007, 0.0, 0.0, 0.0]
 ⋮
plot(sol)
plot(sol, xscale=:log10, tspan=(1e-6, 60), layout=(3,1))
Geometric Properties
Linear Ordinary Differential Equations
The simplest ordinary differential equation is the scalar linear ODE, which is given in the form
\[ u' = \alpha u \]
We can solve this by noticing that $(e^{\alpha t})^\prime = \alpha e^{\alpha t}$ satisfies the differential equation and thus the general solution is:
\[ u(t) = u(0)e^{\alpha t} \]
From the analytical solution we have that:
If $Re(\alpha) > 0$ then $u(t) \rightarrow \infty$ as $t \rightarrow \infty$
If $Re(\alpha) < 0$ then $u(t) \rightarrow 0$ as $t \rightarrow \infty$
If $Re(\alpha) = 0$ then $u(t)$ has a constant or periodic solution.
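As a quick numerical sanity check of this analysis, here is a sketch (α = -0.5 is an arbitrary choice, and linear_f/prob_linear are illustrative names) comparing the computed solution against $u(0)e^{\alpha t}$:

linear_f(u,p,t) = p*u  # u' = αu, with α passed as the parameter
prob_linear = ODEProblem(linear_f, 1.0, (0.0,10.0), -0.5)
sol_linear = solve(prob_linear, Tsit5(), abstol=1e-8, reltol=1e-8)
sol_linear(10.0) - exp(-0.5*10.0)  # ≈ 0, and u(t) → 0 since Re(α) < 0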
This theory can then be extended to multivariable systems in the same way as the discrete dynamics case. Let $u$ be a vector and have
\[ u' = Au \]
be a linear ordinary differential equation. Assuming $A$ is diagonalizable, we diagonalize $A = P^{-1}DP$ to get
\[ Pu' = DPu \]
and change coordinates $z = Pu$ so that we have
\[ z' = Dz \]
which decouples the equation into a system of linear ordinary differential equations which we solve individually. Thus we see that, similarly to the discrete dynamical system, we have that:
If all of the eigenvalues have negative real part, then $u(t) \rightarrow 0$ as $t \rightarrow \infty$
If any eigenvalue has positive real part, then $u(t) \rightarrow \infty$ as $t \rightarrow \infty$
Nonlinear Ordinary Differential Equations
As with discrete dynamical systems, the geometric properties extend locally to the linearization of the continuous dynamical system as defined by:
\[ u' = \frac{df}{du} u \]
where $\frac{df}{du}$ is the Jacobian of the system evaluated at the steady state. This is a consequence of the Hartman-Grobman theorem.
Numerically Solving Ordinary Differential Equations
Euler's Method
To numerically solve an ordinary differential equation, one turns the continuous equation into a discrete equation by discretizing it. The simplest discretization is the Euler method. The Euler method can be thought of as a simple approximation replacing $dt$ with a small non-infinitesimal $\Delta t$. Thus we can approximate
\[ f(u,p,t) = u' = \frac{du}{dt} \approx \frac{\Delta u}{\Delta t} \]
and now since $\Delta u = u_{n+1} - u_n$ we have that
\[ \Delta t f(u,p,t) = u_{n+1} - u_n \]
We need to make a choice as to where we evaluate $f$. The simplest choice is to evaluate it at $t_n$ with the known value $u_n$, and thus we re-arrange to get
\[ u_{n+1} = u_n + \Delta t f(u_n,p,t_n) \]
This is the Euler method.
We can interpret it more rigorously by looking at the Taylor series expansion. First write out the Taylor series for the ODE's solution in the near future:
\[ u(t+\Delta t) = u(t) + \Delta t u'(t) + \frac{\Delta t^2}{2} u''(t) + \ldots \]
Recall that $u' = f(u,p,t)$ by the definition of the ODE system, and thus we have that
\[ u(t+\Delta t) = u(t) + \Delta t f(u,p,t) + \mathcal{O}(\Delta t^2) \]
This is a first order approximation because the error in our step can be expressed as an error in the derivative, i.e.
\[ \frac{u(t + \Delta t) - u(t)}{\Delta t} = f(u,p,t) + \mathcal{O}(\Delta t) \]
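To make the method concrete, here is a minimal sketch of Euler's method as a plain loop (euler and lorenz_oop are illustrative names, not library functions):

# Fixed-step Euler integration for an out-of-place f(u,p,t).
function euler(f, u0, p, tspan, dt)
  t, tf = tspan
  u = u0
  ts, us = [t], [u0]
  while t < tf
    u = u + dt*f(u,p,t)  # u_{n+1} = u_n + Δt f(u_n,p,t_n)
    t = t + dt
    push!(ts, t); push!(us, u)
  end
  ts, us
end

# An out-of-place Lorenz right-hand side for the loop above.
lorenz_oop(u,p,t) = [p[1]*(u[2]-u[1]),
                     u[1]*(p[2]-u[3]) - u[2],
                     u[1]*u[2] - p[3]*u[3]]

ts, us = euler(lorenz_oop, [1.0,0.0,0.0], (10.0,28.0,8/3), (0.0,100.0), 0.01)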
Higher Order Methods
We can use this analysis to extend our methods to higher order approximations by simply matching the Taylor series to a higher order. Intuitively, when we developed the Euler method we had to make a choice:
\[ u_{n+1} = u_n + \Delta t f(u_n,p,t_n) \]
where do we evaluate $f$? One may think that the best derivative approximation may come from the middle of the interval, in which case we might want to evaluate it at $t + \frac{\Delta t}{2}$. To do so, we can use the Euler method to approximate the value of $u$ at $t + \frac{\Delta t}{2}$ and then use that value to approximate the derivative at $t + \frac{\Delta t}{2}$. This looks like:
\[ k_1 = f(u_n,p,t)\\ k_2 = f(u_n + \frac{\Delta t}{2} k_1,p,t + \frac{\Delta t}{2})\\ u_{n+1} = u_n + \Delta t k_2 \]
which we can also write as:
\[ u_{n+1} = u_n + \Delta t f(u_n + \frac{\Delta t}{2} f_n,p,t + \frac{\Delta t}{2}) \]
where $f_n = f(u_n,p,t)$. If we do the two-dimensional Taylor expansion we get:
\[ u_{n+1} = u_n + \Delta t f_n + \frac{\Delta t^2}{2}(f_t + f_u f)(u_n,p,t) + \frac{\Delta t^3}{8} (f_{tt} + 2f_{tu}f + f_{uu}f^2)(u_n,p,t) + \mathcal{O}(\Delta t^4) \]
which when we compare against the true Taylor series:
\[ u(t+\Delta t) = u_n + \Delta t f(u_n,p,t) + \frac{\Delta t^2}{2}(f_t + f_u f)(u_n,p,t) + \frac{\Delta t^3}{6}(f_{tt} + 2f_{tu}f + f_{uu}f^2 + f_t f_u + f_u^2 f)(u_n,p,t) + \mathcal{O}(\Delta t^4) \]
we see that the terms match through second order, and thus
\[ u(t + \Delta t) - u_{n+1} = \mathcal{O}(\Delta t^3) \]
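We can check these orders numerically. The following sketch (illustrative names, on the test problem $u' = -u$ with exact solution $e^{-t}$) shows the error shrinking by 2x per halving of $\Delta t$ for Euler and by 4x for the midpoint method:

test_f(u,p,t) = -u
function fixed_step_solve(dt; midpoint=false)
  u, t = 1.0, 0.0
  for _ in 1:round(Int, 1.0/dt)
    if midpoint
      k1 = test_f(u, nothing, t)
      u += dt*test_f(u + dt/2*k1, nothing, t + dt/2)
    else
      u += dt*test_f(u, nothing, t)  # plain Euler step
    end
    t += dt
  end
  u
end
for dt in (0.1, 0.05, 0.025)
  println((dt, abs(fixed_step_solve(dt) - exp(-1)),                  # O(Δt)
               abs(fixed_step_solve(dt; midpoint=true) - exp(-1))))  # O(Δt²)
end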
Runge-Kutta Methods
More generally, Runge-Kutta methods are of the form:
\[ k_1 = f(u_n,p,t)\\ k_2 = f(u_n + \Delta t (a_{21} k_1),p,t + \Delta t c_2)\\ k_3 = f(u_n + \Delta t (a_{31} k_1 + a_{32} k_2),p,t + \Delta t c_3)\\ \vdots \\ u_{n+1} = u_n + \Delta t (b_1 k_1 + \ldots + b_s k_s) \]
where $s$ is the number of stages. These can be expressed as a tableau:
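In the standard Butcher tableau notation, the $c$ coefficients sit in the left column, the $a_{ij}$ in the matrix, and the $b_i$ along the bottom row:
\[ \begin{array}{c|ccccc} 0 & & & & & \\ c_2 & a_{21} & & & & \\ c_3 & a_{31} & a_{32} & & & \\ \vdots & \vdots & & \ddots & & \\ c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\ \hline & b_1 & b_2 & \cdots & b_{s-1} & b_s \end{array} \]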
The order of the Runge-Kutta method is simply the number of terms in the Taylor series that ends up being matched by the resulting expansion. For example, for the 4th order you can expand out and see that the following equations need to be satisfied:
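Assuming the standard row-sum consistency condition $c_i = \sum_j a_{ij}$, the eight conditions through 4th order are:
\[ \begin{align} \sum_i b_i &= 1 \\ \sum_i b_i c_i &= \frac{1}{2} \\ \sum_i b_i c_i^2 &= \frac{1}{3} \\ \sum_{i,j} b_i a_{ij} c_j &= \frac{1}{6} \\ \sum_i b_i c_i^3 &= \frac{1}{4} \\ \sum_{i,j} b_i c_i a_{ij} c_j &= \frac{1}{8} \\ \sum_{i,j} b_i a_{ij} c_j^2 &= \frac{1}{12} \\ \sum_{i,j,k} b_i a_{ij} a_{jk} c_k &= \frac{1}{24} \end{align} \]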
The classic Runge-Kutta method is also known as RK4 and is the following 4th order method:
\[ k_1 = f(u_n,p,t)\\ k_2 = f(u_n + \frac{\Delta t}{2} k_1,p,t + \frac{\Delta t}{2})\\ k_3 = f(u_n + \frac{\Delta t}{2} k_2,p,t + \frac{\Delta t}{2})\\ k_4 = f(u_n + \Delta t k_3,p,t + \Delta t)\\ u_{n+1} = u_n + \frac{\Delta t}{6}(k_1 + 2 k_2 + 2 k_3 + k_4)\\ \]
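As a sketch, one step of RK4 is only a few lines by hand (rk4_step is an illustrative name, not library code):

# One step of the classic RK4 method for an out-of-place f(u,p,t).
function rk4_step(f, u, p, t, dt)
  k1 = f(u, p, t)
  k2 = f(u + dt/2*k1, p, t + dt/2)
  k3 = f(u + dt/2*k2, p, t + dt/2)
  k4 = f(u + dt*k3,   p, t + dt)
  u + dt/6*(k1 + 2k2 + 2k3 + k4)
end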
While it's widely known and simple to remember, it's not necessarily good. The way to judge a Runge-Kutta method is by looking at the size of the coefficients on the next (unmatched) term of the Taylor series: if they are large, then the true error of a step can be large even though the method asymptotically matches the order of another method.
What Makes a Good Method?
Leading Truncation Coefficients
For explicit Runge-Kutta methods, lower bounds on the number of f evaluations (stages) required to receive a given order are known:
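These lower bounds (the Butcher barriers) are:

Order:          1  2  3  4  5  6  7  8
Minimum stages: 1  2  3  4  6  7  9  11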
While unintuitive, the best method is not necessarily the one that reduces the leading truncation coefficient the most. The reason is that practical ODE solving works in precisely the opposite regime of the analysis: the analysis looks at the behavior as $\Delta t \rightarrow 0$, whereas when efficiently solving ODEs we want to use the largest $\Delta t$ which satisfies the error tolerances.
The most widely used method is the Dormand-Prince 5th order Runge-Kutta method, whose tableau is represented as:
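For reference, the coefficients are:
\[ \begin{array}{c|ccccccc} 0 & & & & & & & \\ \frac{1}{5} & \frac{1}{5} & & & & & & \\ \frac{3}{10} & \frac{3}{40} & \frac{9}{40} & & & & & \\ \frac{4}{5} & \frac{44}{45} & -\frac{56}{15} & \frac{32}{9} & & & & \\ \frac{8}{9} & \frac{19372}{6561} & -\frac{25360}{2187} & \frac{64448}{6561} & -\frac{212}{729} & & & \\ 1 & \frac{9017}{3168} & -\frac{355}{33} & \frac{46732}{5247} & \frac{49}{176} & -\frac{5103}{18656} & & \\ 1 & \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & -\frac{2187}{6784} & \frac{11}{84} & \\ \hline & \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & -\frac{2187}{6784} & \frac{11}{84} & 0 \\ & \frac{5179}{57600} & 0 & \frac{7571}{16695} & \frac{393}{640} & -\frac{92097}{339200} & \frac{187}{2100} & \frac{1}{40} \end{array} \]
(The second $b$ row is the embedded 4th order method used for adaptive error estimation.)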
Notice that this method takes 7 calls to f for 5th order. (Its last stage is evaluated at the same point as the first stage of the next step, the so-called FSAL property, so in practice it only costs 6 new f calls per step.) The key to this method is that it has optimized leading truncation error coefficients, under some extra assumptions which allow the analysis to be simplified.
Looking at the Effects of RK Method Choices and Code Optimizations
Pulling from the SciML Benchmarks, we can see the general effect of these different properties on a given set of Runge-Kutta methods:
Here, the order of the method is given in the name. We can see one immediate factor: as the requested error in the calculation decreases, the higher order methods become more efficient. This is because to decrease error you decrease $\Delta t$, and thus the exponent difference with respect to $\Delta t$ has more of a chance to pay off for the extra calls to f. Additionally, we can see that order is not the only determining factor for efficiency: the Vern8 method has a clear, approximately 2.5x performance advantage over the DP8 method across the whole span of the benchmark, even though both are 8th order methods. This is because of the leading truncation terms: for the same $\Delta t$, the more optimized method (Vern8) will generally have lower error in a step because the coefficients in the expansion are generally smaller.
This is a factor which is generally ignored in high level discussions of numerical differential equations, but can lead to orders of magnitude differences! This is highlighted in the following plot:
Here we see that ODEInterfaceDiffEq.jl, the wrapper of the standard Fortran dopri method (via ODEInterface.jl) into the SciML common interface, and ODE.jl, the original ODE solver library in Julia, have a performance disadvantage compared to the DifferentialEquations.jl methods, due in part to some of the coding performance pieces that we discussed in the first few lectures.
Specifically, a large part of this can be attributed to inlining of the higher order functions: ODEs are defined by a user function which then has to be called from the solver. If the solver code is compiled ahead of time as a shared library, as is commonly done in C++ or Fortran, then every step pays a function call overhead. JIT compilation can eliminate that overhead by optimizing across the function call barrier (known as interprocedural optimization). This is one way in which a JIT system can outperform an AOT (ahead-of-time) compiled system in real-world code. For completeness, two other ways are full function specialization, which is generally not possible in AOT languages given that you cannot know all types ahead of time for a fully generic function, and the C foreign function interface (FFI): calls into C can be optimized using the runtime information of the JIT compiler to outperform C itself!
The other performance difference being shown here is due to optimization of the method. While a slightly different order, we can see a clear difference in the performance of RK4 vs the coefficient optimized methods. It's about the same order of magnitude as "highly optimized code differences", showing that both the Runge-Kutta coefficients and the code implementation can have a significant impact on performance.
Taking a look at what happens when interpreted languages get involved highlights some of the coding challenges in this domain. For example, let's look at the results when simulating 3 ODE systems with the various RK methods:
We see that using interpreted languages introduces around a 50x-100x performance penalty. Recall from the previous lecture that the discrete dynamical system being simulated was the 3-dimensional Lorenz equation discretized by Euler's method, meaning that the performance of that implementation is a good proxy for understanding the performance differences in this graph. There we saw an approximately 5x performance advantage from specializing on the system function and size, and around 10x from reducing allocations: these features account for the performance differences between library implementations, which are then compounded by the use of different RK methods. (Note that R uses "call by copy", which further increases the memory usage and makes standard usage of the language incompatible with mutating function calls!)
Stability of a Method
Simply having an order on the truncation error does not imply convergence of the method. The disconnect is that the errors at a given time point may not dissipate. What also needs to be checked is the asymptotic behavior of a disturbance. To see this, one can utilize the linear test problem:
\[ u' = \alpha u \]
and ask the question, does the discrete dynamical system defined by the discretized ODE end up going to zero? You would hope that the discretized dynamical system and the continuous dynamical system have the same properties in this simple case, and this is known as linear stability analysis of the method.
As an example, take a look at the Euler method. Recall that the Euler method was given by:
\[ u_{n+1} = u_n + \Delta t f(u_n,p,t) \]
When we plug in the linear test equation, we get that
\[ u_{n+1} = u_n + \Delta t \alpha u_n \]
If we let $z = \Delta t \alpha$, then we get the following:
\[ u_{n+1} = u_n + z u_n = (1+z)u_n \]
which is stable when $|1+z| < 1$, i.e. when $z$ lies inside the unit circle centered at $-1$ in the complex plane. This means that, as a necessary condition, the step size $\Delta t$ needs to be small enough that $z$ satisfies this condition, placing a stepsize limit on the method.
If $\Delta t$ is ever too large, it will cause the equation to overshoot zero, which then causes oscillations that spiral out to infinity.
Thus the stability condition places a hard constraint on the allowed $\Delta t$ which will result in a realistic simulation.
For reference, the stability regions of the 2nd and 4th order Runge-Kutta methods that we discussed are as follows:
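These boundaries can be sketched directly from the stability polynomials, which for these particular methods are the truncated exponential series $R(z) = \sum_{k=0}^{p} z^k/k!$. A minimal plotting sketch (function names are illustrative):

using Plots

# |R(z)| = 1 bounds the stability region of each explicit method.
R2(z) = 1 + z + z^2/2                   # midpoint (2nd order)
R4(z) = 1 + z + z^2/2 + z^3/6 + z^4/24  # RK4
xs, ys = -4:0.01:2, -4:0.01:4
contour(xs, ys, (x,y) -> abs(R2(x + y*im)), levels=[1.0])
contour!(xs, ys, (x,y) -> abs(R4(x + y*im)), levels=[1.0])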
Interpretation of the Linear Stability Condition
To interpret the linear stability condition, recall that the linearization of a system interprets the dynamics as locally being due to the Jacobian of the system. Thus
\[ u' = f(u,p,t) \]
is locally equivalent to
\[ u' = \frac{df}{du}u \]
You can understand the local behavior through diagonalizing this matrix. The scalar linear stability analysis therefore applies eigenvalue-by-eigenvalue to the Jacobian: the method will be stable at a point if $\Delta t \lambda$ lies within the stability region for every eigenvalue $\lambda$ of $\frac{df}{du}$. This means that stability effects are different throughout the solution of a nonlinear equation and are generally understood locally (though different, more comprehensive stability conditions exist!).
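As a sketch of this in practice, one can inspect the Jacobian eigenvalues at a point along the trajectory, here via ForwardDiff.jl on an out-of-place Lorenz right-hand side (lorenz_u is an illustrative name with the parameters baked in):

using ForwardDiff, LinearAlgebra

lorenz_u(u) = [10.0*(u[2]-u[1]),
               u[1]*(28.0-u[3]) - u[2],
               u[1]*u[2] - (8/3)*u[3]]
J = ForwardDiff.jacobian(lorenz_u, [1.0, 0.0, 0.0])
eigvals(J)  # stability needs Δt*λ inside the stability region for each λ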
Implicit Methods
If, instead of evaluating $f$ at the current point as in the Euler method, we evaluate it at the future point, we receive the implicit (backward) Euler method:
\[ u_{n+1} = u_n + \Delta t f(u_{n+1},p,t+\Delta t) \]
in which case, for the stability calculation we would have that
\[ u_{n+1} = u_n + \Delta t \alpha u_{n+1} \]
or
\[ (1-z) u_{n+1} = u_n \]
which means that
\[ u_{n+1} = \frac{1}{1-z} u_n \]
which is stable for all $Re(z) < 0$, a property which is known as A-stability. Moreover, the amplification factor $\frac{1}{1-z} \rightarrow 0$ as $z \rightarrow \infty$, a property known as L-stability. This means that even for equations with very ill-conditioned Jacobians, this method is able to use reasonably large stepsizes and can thus be efficient.
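As a sketch of why this matters, take $u' = \alpha u$ with $\alpha = -50$ and $\Delta t = 0.1$, so that $z = -5$ is far outside the explicit Euler stability circle but inside the implicit Euler stability region (euler_comparison is an illustrative name):

# Per-step amplification: explicit Euler multiplies by (1+z),
# implicit Euler multiplies by 1/(1-z).
function euler_comparison(α, dt, nsteps)
  u_explicit, u_implicit = 1.0, 1.0
  for _ in 1:nsteps
    u_explicit *= 1 + dt*α  # |1+z| = 4 > 1: oscillating blow-up
    u_implicit /= 1 - dt*α  # |1/(1-z)| = 1/6 < 1: decays like the true solution
  end
  u_explicit, u_implicit
end
euler_comparison(-50.0, 0.1, 20)  # ≈ (1.1e12, 2.7e-16)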
Stiffness and Timescale Separation
From this we see that there is a maximal stepsize whenever the eigenvalues of the Jacobian are sufficiently large. It turns out that's not an issue if the phenomena we see are fast, since then the total integration time tends to be small. However, if we have equations with both fast modes and slow modes, like the Robertson equation, then it is very difficult: in order to resolve the slow dynamics over a long timespan, one needs to ensure that the fast dynamics do not diverge. This is a property known as stiffness. Stiffness can thus be approximated in some sense by the condition number of the Jacobian, here meaning the ratio of its largest to smallest eigenvalue in absolute value, which gives a rough measure of the local timescale separations. If this value is large and one wants to resolve the slow dynamics, then explicit integrators, like the explicit Runge-Kutta methods described before, have issues with stability. In this case implicit integrators (or other forms of stabilized stepping) are required in order to efficiently reach the end time step.
Exploiting Continuity
So far, we have looked at ordinary differential equations as a $\Delta t \rightarrow 0$ formulation of a discrete dynamical system. However, continuous dynamics and discrete dynamics have very different characteristics which can be utilized in order to arrive at simpler models and faster computations.
Geometric Properties: No Jumping and the Poincaré–Bendixson theorem
In terms of geometric properties, continuity places a large constraint on the possible dynamics. This is because of the physical constraint on "jumping", i.e. flows of differential equations cannot jump over each other. If you are ever at some point in phase space and $f$ is not explicitly time-dependent, then the direction of $u'$ is uniquely determined (given reasonable assumptions on $f$), meaning that flow lines (solutions to the differential equation) can never cross.
A result of this is the Poincaré–Bendixson theorem, which states that any (sufficiently nice) autonomous two-dimensional continuous system can only exhibit 3 behaviors:
Steady state behavior
Divergence
Periodic orbits
A simple proof by picture shows this.