The Basics of Single Node Parallel Computing

Chris Rackauckas
September 21st, 2020

Youtube Video Link

Moore's law was the observation that the number of transistors on a chip doubles at a roughly fixed cadence (about every two years), leading to exponentially more computing power over time. This was true for a very long time.

However, sometime in the last decade, computer cores have stopped getting faster.

The technology that promises to keep Moore’s Law going after 2013 is known as extreme ultraviolet (EUV) lithography. It uses light to write a pattern into a chemical layer on top of a silicon wafer, which is then chemically etched into the silicon to make chip components. EUV lithography uses very high energy ultraviolet light rays that are closer to X-rays than visible light. That’s attractive because EUV light has a short wavelength—around 13 nanometers—which allows for making smaller details than the 193-nanometer ultraviolet light used in lithography today. But EUV has proved surprisingly difficult to perfect.

-MIT Technology Review

The answer to the “end of Moore's Law” is Parallel Computing. However, programs need to be specifically designed in order to adequately use parallelism. This lecture will describe at a very high level the forms of parallelism and when they are appropriate. We will then proceed to use shared-memory multithreading to parallelize the simulation of the discrete dynamical system.

Managing Threads

Concurrency vs Parallelism and Green Threads

There is a difference between concurrency and parallelism. In a nutshell:

  • Concurrency: Interruptability

  • Parallelism: Independentability

To start thinking about concurrency, we need to distinguish between a process and a thread. A process is a discrete running instance of a computer program: it has allocated memory for the program's code, its data, a heap, etc. Each process can have many compute threads, and these threads (or tasks) are the units of execution. Each task has its own stack and a virtual CPU (virtual because the task is not necessarily ON the physical CPU at any given moment; it may be temporarily halted). The kernel of the operating system schedules tasks and runs them. To keep the computer running smoothly, context switching, i.e. changing the task that is actually running, happens all the time. This is independent of whether tasks are actually scheduled in parallel or not.

Each thread has its own stack associated with it.
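Before going further, it helps to poke at these pieces from inside a Julia session. The following is just an orientation sketch:

Sys.CPU_THREADS      # logical CPUs the machine exposes
Threads.nthreads()   # OS threads this Julia process was started with
current_task()       # the Task (green thread) currently executing this code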

This is an important distinction because many tasks may need to run concurrently but without parallelism. A prime example is input/output (I/O). For example, in a game you may want to be updating the graphics, but the moment a user clicks you want to handle that event. You do not necessarily need these running in parallel, but you do need the event-handling task to run concurrently with the processing of the game.

Data handling is the key area of scientific computing where green threads (concurrent, non-parallel threads) show up. For data handling, one may need to send a signal that causes a message to start being passed. Alternative hardware takes over at that point: a processor specific to an I/O bus, like the controller for the SSD, modem, GPU, or InfiniBand card. It will be polled, execute the command, and return the result. There are two important distinctions:

  • Non-Blocking vs Blocking: Whether the thread will periodically poll to check whether the task is complete, or whether it will wait for the task to complete before doing anything else.

  • Synchronous vs Asynchronous: Whether the operation is executed as initiated by the program or as a response to an event from the kernel.

I/O operations cause a privileged context switch, allowing the task which is handling the I/O to directly be switched to in order to continue actions.
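As a minimal sketch of the blocking vs non-blocking distinction, consider a green thread standing in for an I/O request (slow_io is a hypothetical placeholder; sleep yields just like real I/O would):

slow_io() = (sleep(2); "result")   # stand-in for an operation handled by other hardware
t = @async slow_io()               # start the operation without waiting on it

# Non-blocking style: poll occasionally and do other work in between
while !istaskdone(t)
    # ... do other useful work here ...
    sleep(0.1)
end

# Blocking style: suspend this task until the result is available
fetch(t)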

The Main Event Loop

Julia, like other languages with a runtime (JavaScript, Go, etc.), is at its core a single process running an event loop. This event loop is the main thread, and the "Julia program" or "script" that one is running is actually run in a green thread that is controlled by the main event loop. The event loop takes over to look for other work whenever the program hits a yield point. More yield points allow for more aggressive task switching, but every switch back to the event loop suspends the numerical task, i.e. makes it slower. Thus a hot numerical loop shouldn't be handing control back to the event loop any more than necessary!

This is one area where languages can differ wildly in implementation. Languages structured for lots of I/O and input handling, like JavaScript, have yield points at every line (it's an interpreted language, so the interpreter can always take control). In Julia, the yield points are minimized: the common yield points are allocations and I/O (e.g. println). This means that a tight non-allocating inner loop has no yield points and is a thread that cannot be interrupted. While this is great for numerical performance, it is something to be aware of.

Side effect: if you run a long tight loop and wish to exit it, you may try Ctrl + C and see that it doesn't work. This is because interrupts are handled by the event loop, and the event loop is never re-entered until after your tight numerical loop finishes, so you're left waiting. If you hit Ctrl + C multiple times, you escalate the interruption until the OS takes over, and this is then handled by the signal handling of the OS's event loop, which sends a higher-level interrupt that Julia handles the moment its safety locks say it's okay (these locks occur during memory allocations to ensure that memory is not corrupted).
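To make the yield-point discussion concrete, here is a sketch (the function names are purely illustrative): a non-allocating loop has no yield points and cannot be interrupted, while an explicit yield() hands control back to the event loop at some performance cost.

function tight_loop(n)
  s = 0.0
  for i in 1:n
    s += sin(i)          # no allocations, no I/O: no yield points, uninterruptible
  end
  s
end
function cooperative_loop(n)
  s = 0.0
  for i in 1:n
    s += sin(i)
    i % 1_000_000 == 0 && yield()   # an explicit yield point for the event loop
  end
  s
end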

Asynchronous Calling Example

This example will become clearer when we get to distributed computing, but for now think of remotecall_fetch as a way to run a command on a different computer (a worker process). What we want to do is start all of the commands at once and then wait for all of the results before finishing the loop. We use @async to make the call to remotecall_fetch non-blocking, i.e. it starts the job and only polls infrequently to find out when the other machine has completed it and returned the result. We then add @sync to the loop, which only continues past the loop after all of the green threads have fetched their results. Otherwise, a[idx] may not be filled yet, since a thread may not have fetched its result!

using Distributed
addprocs(4)  # spin up worker processes (needed for workers()/remotecall_fetch)

@time begin
    a = Vector{Any}(undef,nworkers())
    @sync for (idx, pid) in enumerate(workers())
        @async a[idx] = remotecall_fetch(sleep, pid, 2)
    end
end

The same can be done for writing to disk. @async is a quick shorthand for spawning a green thread which will handle that I/O operation, and the main event loop keeps switching between the green threads until they are all handled. @sync encodes that the program will not continue until all of the green threads are finished. This could be done more manually with Tasks and Channels, which is something we will touch on in the future.
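For a taste of the more manual style, here is a rough sketch of the same coordination using a Channel (assuming the same Distributed worker setup as above): each green thread pushes its fetched result into the channel, and take! blocks until a result is available.

results = Channel{Any}(nworkers())
for pid in workers()
  @async put!(results, remotecall_fetch(sleep, pid, 2))
end
a = [take!(results) for _ in 1:nworkers()]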

Examples of the Differences

Synchronous = Thread will complete an action

Blocking = Thread will wait until action is completed

  • Asynchronous + Non-Blocking: I/O

  • Asynchronous + Blocking: Threaded atomics (demonstrated next lecture)

  • Synchronous + Blocking: Standard computing, @sync

  • Synchronous + Non-Blocking: Webservers where an I/O operation can be performed, but one never checks if the operation is completed.

Multithreading

If your threads are independent, then it may make sense to run them in parallel. This is the form of parallelism known as multithreading. To understand the data that is available in a multithreaded setup, let's look at the picture of threads again:

Each thread has its own call stack, but it's the process that holds the heap. This means that dynamically-sized heap allocated objects are shared between threads with no cost, a setup known as shared-memory computing.
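As a minimal sketch of what shared memory buys us (acc is just an illustrative name): every thread reads and writes the same heap-allocated array, while each thread's local loop variables live on its own stack.

acc = zeros(Int, Threads.nthreads())
Threads.@threads for i in 1:1000
  acc[Threads.threadid()] += 1   # each thread updates its own slot of the shared array
end
sum(acc) == 1000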

Loop-Based Multithreading with @threads

Let's look back at our Lorenz dynamical system from before:

using StaticArrays, BenchmarkTools
function lorenz(u,p)
  α,σ,ρ,β = p
  @inbounds begin
    du1 = u[1] + α*(σ*(u[2]-u[1]))
    du2 = u[2] + α*(u[1]*(ρ-u[3]) - u[2])
    du3 = u[3] + α*(u[1]*u[2] - β*u[3])
  end
  @SVector [du1,du2,du3]
end
function solve_system_save!(u,f,u0,p,n)
  @inbounds u[1] = u0
  @inbounds for i in 1:length(u)-1
    u[i+1] = f(u[i],p)
  end
  u
end
p = (0.02,10.0,28.0,8/3)
u = Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000)
@btime solve_system_save!(u,lorenz,@SVector([1.0,0.0,0.0]),p,1000)
6.508 μs (0 allocations: 0 bytes)
1000-element Vector{SVector{3, Float64}}:
 [1.0, 0.0, 0.0]
 [0.8, 0.56, 0.0]
 [0.752, 0.9968000000000001, 0.008960000000000001]
 [0.80096, 1.3978492416000001, 0.023474005333333336]
 [0.92033784832, 1.8180538219817644, 0.04461448495326095]
 [1.099881043052353, 2.296260732619613, 0.07569952060880669]
 [1.339156980965805, 2.864603692722823, 0.12217448583728006]
 [1.6442463233172087, 3.5539673118971193, 0.19238159391549564]
 [2.026190521033191, 4.397339452147425, 0.2989931959555302]
 [2.5004203072560376, 5.431943011293093, 0.4612438424853632]
 ⋮
 [6.8089180814322185, 0.8987564841782779, 31.6759436385101]
 [5.6268857619814305, 0.3801973723631693, 30.108951163308078]
 [4.577548084057778, 0.13525687944525802, 28.545926978224173]
 [3.6890898431352737, 0.08257160199224252, 27.035860436772758]
 [2.9677861949066675, 0.15205611935372762, 25.600040161309696]
 [2.4046401797960795, 0.2914663505185634, 24.24373008707723]
 [1.9820054139405763, 0.46628657468365653, 22.964748583050085]
 [1.6788616460891923, 0.6565587545689172, 21.758445642263496]
 [1.4744010677851374, 0.8530017039412324, 20.62004063423844]

In order to use multithreading on this code, we need to take a look at the dependency graph and see what items can be calculated independently of each other. Notice that

σ*(u[2]-u[1])
ρ-u[3]
u[1]*u[2]
β*u[3]

are all independent operations, so in theory we could split them off to different threads, combine their results as we move up the expression graph, and so on.

Or we can have three threads:

u[1] + α*(σ*(u[2]-u[1]))
u[2] + α*(u[1]*(ρ-u[3]) - u[2])
u[3] + α*(u[1]*u[2] - β*u[3])

none of which depends on the output of another, so these three updates can be run in parallel. We can do this by using Julia's Threads.@threads macro, which puts each iteration of a loop on a different thread. However, threaded loops do not allow you to return a value, so how do you build up the values for the @SVector?

...?

...?

...?

It's not possible! To understand why, let's look at the picture again:

There is a shared heap, but the stacks are thread local. This means that a value cannot be stack allocated in one thread and magically appear when re-entering the main thread: it needs to go on the heap somewhere. But if it needs to go onto the heap, then it makes sense for us to have preallocated its location. But if we want to preallocate du[1], du[2], and du[3], then it makes sense to use the fully non-allocating update form:

function lorenz!(du,u,p)
  α,σ,ρ,β = p
  @inbounds begin
    du[1] = u[1] + α*(σ*(u[2]-u[1]))
    du[2] = u[2] + α*(u[1]*(ρ-u[3]) - u[2])
    du[3] = u[3] + α*(u[1]*u[2] - β*u[3])
  end
end
function solve_system_save_iip!(u,f,u0,p,n)
  @inbounds u[1] = u0
  @inbounds for i in 1:length(u)-1
    f(u[i+1],u[i],p)
  end
  u
end
p = (0.02,10.0,28.0,8/3)
u = [Vector{Float64}(undef,3) for i in 1:1000]
@btime solve_system_save_iip!(u,lorenz!,[1.0,0.0,0.0],p,1000)
7.301 μs (2 allocations: 80 bytes)
1000-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.8, 0.56, 0.0]
 [0.752, 0.9968000000000001, 0.008960000000000001]
 [0.80096, 1.3978492416000001, 0.023474005333333336]
 [0.92033784832, 1.8180538219817644, 0.04461448495326095]
 [1.099881043052353, 2.296260732619613, 0.07569952060880669]
 [1.339156980965805, 2.864603692722823, 0.12217448583728006]
 [1.6442463233172087, 3.5539673118971193, 0.19238159391549564]
 [2.026190521033191, 4.397339452147425, 0.2989931959555302]
 [2.5004203072560376, 5.431943011293093, 0.4612438424853632]
 ⋮
 [6.8089180814322185, 0.8987564841782779, 31.6759436385101]
 [5.6268857619814305, 0.3801973723631693, 30.108951163308078]
 [4.577548084057778, 0.13525687944525802, 28.545926978224173]
 [3.6890898431352737, 0.08257160199224252, 27.035860436772758]
 [2.9677861949066675, 0.15205611935372762, 25.600040161309696]
 [2.4046401797960795, 0.2914663505185634, 24.24373008707723]
 [1.9820054139405763, 0.46628657468365653, 22.964748583050085]
 [1.6788616460891923, 0.6565587545689172, 21.758445642263496]
 [1.4744010677851374, 0.8530017039412324, 20.62004063423844]

and now we multithread:

using Base.Threads
function lorenz_mt!(du,u,p)
  α,σ,ρ,β = p
  let du=du, u=u, p=p
    Threads.@threads for i in 1:3
      @inbounds begin
        if i == 1
          du[1] = u[1] + α*(σ*(u[2]-u[1]))
        elseif i == 2
          du[2] = u[2] + α*(u[1]*(ρ-u[3]) - u[2])
        else
          du[3] = u[3] + α*(u[1]*u[2] - β*u[3])
        end
        nothing
      end
    end
  end
  nothing
end
function solve_system_save_iip!(u,f,u0,p,n)
  @inbounds u[1] = u0
  @inbounds for i in 1:length(u)-1
    f(u[i+1],u[i],p)
  end
  u
end
p = (0.02,10.0,28.0,8/3)
u = [Vector{Float64}(undef,3) for i in 1:1000]
@btime solve_system_save_iip!(u,lorenz_mt!,[1.0,0.0,0.0],p,1000);
1.190 ms (6995 allocations: 608.84 KiB)

Parallelism doesn't always make things faster. There are two costs associated with this code. For one, we had to switch to the slower heap+mutation version, so its starting point is already slower. But secondly, and more importantly, the cost of spinning up and synchronizing tasks is non-negligible; here we can see it even needs small allocations for each task's context. In this benchmark the cost per time step went from a few nanoseconds to over a microsecond. That overhead is not huge in absolute terms, but it is enormous next to the nearly free calculations we handed to each thread, instead of just having one thread with one call stack.

The moral of the story is that you need to make sure that there's enough work per thread in order to effectively accelerate a program with parallelism.
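As a sketch of that moral (work is a hypothetical stand-in for an expensive, independent computation), thread over coarse chunks that each carry plenty of work rather than over tiny inner operations:

work(n) = sum(sin, 1:n)   # stand-in for a nontrivial, independent computation
function threaded_over_chunks(m, n)
  out = Vector{Float64}(undef, m)
  Threads.@threads for i in 1:m   # each iteration carries ~n operations of work
    out[i] = work(n)
  end
  out
end
# Compare roughly against the serial version:
# @btime map(_ -> work(100_000), 1:100);
# @btime threaded_over_chunks(100, 100_000);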

Data-Parallel Problems

So not every setup is amenable to parallelism. Dynamical systems are notorious for being quite difficult to parallelize because each time step depends on the previous one, meaning that one cannot easily "parallelize through time" (though it is possible, which we will study later).

However, one common way that these systems are parallelized is over their inputs. The following questions allow for independent simulations:

  • What steady state does the system reach for each initial condition u0 in some list/region of initial conditions?

  • How does the solution vary when I use a different p?

This problem has a few names. For one, it's called an embarrassingly parallel problem, since every simulation is completely independent and the serial code can remain largely intact: we can use the exact same solve_system_save_iip! and just change how we are calling it. Secondly, it's called a data-parallel problem, since it is parallelized by splitting up the input data (here, the possible u0s or ps) and acting on each piece independently.

Multithreaded Parameter Searches

Now let's multithread our parameter search. Let's say we wanted to compute the mean of the values in the trajectory. For a single input pair, we can compute that like:

using Statistics
function compute_trajectory_mean(u0,p)
  u = Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000)
  solve_system_save!(u,lorenz,u0,p,1000);
  mean(u)
end
@btime compute_trajectory_mean(@SVector([1.0,0.0,0.0]),p)
7.659 μs (4 allocations: 23.53 KiB)
3-element SVector{3, Float64} with indices SOneTo(3):
 -0.31149962346484683
 -0.3097490174897651
 26.024603558583014

We can make this faster by preallocating the cache vector u. For example, we can globalize it:

u = Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000)
function compute_trajectory_mean2(u0,p)
  # u is automatically captured
  solve_system_save!(u,lorenz,u0,p,1000);
  mean(u)
end
@btime compute_trajectory_mean2(@SVector([1.0,0.0,0.0]),p)
7.476 μs (3 allocations: 112 bytes)
3-element SVector{3, Float64} with indices SOneTo(3):
 -0.31149962346484683
 -0.3097490174897651
 26.024603558583014

But this is still allocating? The issue with this code is that u is a non-constant global, and accesses to non-constant globals cannot be type-inferred because their type can change at any time. Thus what we can do instead is use a constant:

const _u_cache = Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000)
function compute_trajectory_mean3(u0,p)
  # u is automatically captured
  solve_system_save!(_u_cache,lorenz,u0,p,1000);
  mean(_u_cache)
end
@btime compute_trajectory_mean3(@SVector([1.0,0.0,0.0]),p)
7.431 μs (1 allocation: 32 bytes)
3-element SVector{3, Float64} with indices SOneTo(3):
 -0.31149962346484683
 -0.3097490174897651
 26.024603558583014

Now it's just allocating the output. The other way to do this is to use a closure which encapsulates the cache data:

function _compute_trajectory_mean4(u,u0,p)
  solve_system_save!(u,lorenz,u0,p,1000);
  mean(u)
end
compute_trajectory_mean4(u0,p) = _compute_trajectory_mean4(_u_cache,u0,p)
@btime compute_trajectory_mean4(@SVector([1.0,0.0,0.0]),p)
7.431 μs (1 allocation: 32 bytes)
3-element SVector{3, Float64} with indices SOneTo(3):
 -0.31149962346484683
 -0.3097490174897651
 26.024603558583014

This is the same, but a bit more explicit. Now let's create our parameter search function. Let's take a sample of parameters:

ps = [(0.02,10.0,28.0,8/3) .* (1.0,rand(3)...) for i in 1:1000]
1000-element Vector{NTuple{4, Float64}}:
 (0.02, 4.97814554085993, 3.0509826265449287, 0.547748465209219)
 (0.02, 2.3481922710537906, 26.618941576632377, 0.48775109837594705)
 (0.02, 3.0128075972250743, 13.976181551940215, 2.1093994354690238)
 (0.02, 2.0580298654836717, 7.1842692548666225, 1.9816992530193005)
 (0.02, 5.856944435480868, 27.963336127246873, 0.6260041551315467)
 (0.02, 3.4057500470185276, 9.246506553268166, 0.07264955040584375)
 (0.02, 1.2574324910670565, 27.252675521233666, 2.0596814750889765)
 (0.02, 9.316184741278073, 3.1692082554029763, 1.6978123005561567)
 (0.02, 0.8490637033757953, 17.247281135167995, 1.3244568453161165)
 (0.02, 4.530362736444368, 15.080387400498527, 2.1020159023914085)
 ⋮
 (0.02, 9.59149505427449, 0.560719391331824, 2.400208895015942)
 (0.02, 0.2811724385140657, 21.54914421927745, 2.558744484496856)
 (0.02, 0.406215755409568, 6.115205664051439, 0.7911689737502032)
 (0.02, 1.4430379171876306, 10.619878480797642, 2.17698560056085)
 (0.02, 3.466705220125459, 3.0446904943608457, 2.5603670966057868)
 (0.02, 1.8740357855934664, 14.877767950685566, 0.6673889991104274)
 (0.02, 4.636916510192136, 21.249458507088857, 0.474467494937592)
 (0.02, 7.393829830218469, 15.074219914551499, 2.5510975303416004)
 (0.02, 8.346803792591247, 4.92185668072168, 2.0222864548580857)

And let's get the mean of the trajectory for each of the parameters.

serial_out = map(p -> compute_trajectory_mean4(@SVector([1.0,0.0,0.0]),p),ps)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]

Now let's do this with multithreading:

function tmap(f,ps)
  out = Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000)
  Threads.@threads for i in 1:1000
    # each loop part is using a different part of the data
    out[i] = f(ps[i])
  end
  out
end
threaded_out = tmap(p -> compute_trajectory_mean4(@SVector([1.0,0.0,0.0]),p),ps)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]

Let's check the output:

serial_out - threaded_out
1000-element Vector{SVector{3, Float64}}:
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 ⋮
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]

It looks fine here, but when multiple threads are actually running, this code is not guaranteed to give the same answer! What can go wrong?

The answer is the caching. Every single thread uses the same _u_cache as its cache, so while one thread is writing into it another can be reading from it, and a thread can end up computing its result from values that were written for a different parameter set!

To fix this, what we need is a separate cache for each thread:

const _u_cache_threads = [Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000) for i in 1:Threads.nthreads()]
function compute_trajectory_mean5(u0,p)
  # u is automatically captured
  solve_system_save!(_u_cache_threads[Threads.threadid()],lorenz,u0,p,1000);
  mean(_u_cache_threads[Threads.threadid()])
end
@btime compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p)
7.434 μs (1 allocation: 32 bytes)
3-element SVector{3, Float64} with indices SOneTo(3):
 -0.31149962346484683
 -0.3097490174897651
 26.024603558583014
serial_out = map(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
threaded_out = tmap(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
serial_out - threaded_out
1000-element Vector{SVector{3, Float64}}:
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 ⋮
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
 [0.0, 0.0, 0.0]
@btime serial_out = map(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
7.422 ms (4 allocations: 23.52 KiB)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]
@btime threaded_out = tmap(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
7.424 ms (10 allocations: 24.08 KiB)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]

Hierarchical Task-Based Multithreading and Dynamic Scheduling

The major change in Julia v1.3 was that Julia's Tasks, which are traditionally its green threads interface, became the basis of its multithreading infrastructure. This means that independent tasks can be run in parallel, and a new interface for multithreading exists that works by spawning tasks.

This implementation follows Go's goroutines and the classic multithreading interface of Cilk. There is a Julia-level scheduler that handles the multithreading and places different tasks on different OS (vCPU) threads. A benefit of this is hierarchical multithreading: since Julia's tasks can spawn tasks, a task can create tasks which create tasks and so on, and Julia (like Go and Cilk) sees all of them as a single pool of tasks to schedule, so it still makes sure only N run at a time (as opposed to a naive implementation where the total number of running threads multiplies at each level of nesting). This is essential for numerical performance, because running multiple compute threads on a single CPU thread requires constant context switching between them, which slows down the computation.
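As a minimal sketch of this hierarchical (fork-join) pattern, using the Threads.@spawn interface introduced just below (nested_sum is only an illustration): tasks spawn child tasks, and the scheduler maps the whole pool onto the available threads.

function nested_sum(x)
  length(x) <= 100_000 && return sum(x)
  mid = length(x) ÷ 2
  left  = Threads.@spawn nested_sum(@view x[1:mid])   # child task joins the pool
  right = nested_sum(@view x[mid+1:end])              # the parent keeps working
  fetch(left) + right
end
nested_sum(rand(1_000_000))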

To directly use the task-based interface, simply use Threads.@spawn to spawn new tasks. For example:

function tmap2(f,ps)
  tasks = [Threads.@spawn f(ps[i]) for i in 1:1000]
  out = [fetch(t) for t in tasks]
end
threaded_out = tmap2(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]

However, if we check the timing we see:

@btime tmap2(p -> compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p),ps)
7.764 ms (6008 allocations: 562.66 KiB)
1000-element Vector{SVector{3, Float64}}:
 [1.035071646185291, 1.03573598336347, 1.934266120850417]
 [-1.534259528073888, -1.5907629677840462, 25.11546333894921]
 [-4.085512561812751, -4.177443014633625, 11.420858143242054]
 [3.4146242432432508, 3.4753813091627004, 5.928601350889406]
 [-0.4907164522119372, -0.5046733896601684, 27.477917336005774]
 [0.16798939417274283, 0.1511702919798561, 8.449627095546889]
 [7.16782645134089, 7.420377924276856, 25.308574634203886]
 [1.851322563803956, 1.8562553601189185, 2.0268571968896443]
 [4.538173577353815, 4.752462392175486, 15.682918913397922]
 [-1.1168667539878763, -1.1351904799995318, 11.580503592960964]
 ⋮
 [0.011848186204016094, 0.006635310948318379, 0.0001254248087272685]
 [6.709343869566742, 7.820963084660362, 19.64272940567095]
 [1.9703232525196908, 2.0948521632799415, 4.880419591590412]
 [4.46356076121234, 4.587475662106131, 9.27201711593353]
 [2.191528211168287, 2.210105603823245, 1.9019321333975885]
 [-2.401755053453972, -2.523158374908053, 13.232913314549949]
 [0.12140703648492077, 0.08658448900088872, 20.09079005066839]
 [-0.7693999360740188, -0.7598938894040288, 11.96688724936337]
 [2.741292986828319, 2.7521726702682976, 3.742853825477588]

Threads.@threads is built on the same multithreading infrastructure, so why is this so much slower? The reason is that Threads.@threads employs static scheduling while Threads.@spawn uses dynamic scheduling. Dynamic scheduling is the model of allowing the runtime to determine the ordering and scheduling of tasks, i.e. which tasks will run where and when. Julia's task-based multithreading system has a scheduler which automatically does this for you in the background, but because this is done at runtime it has overhead. Static scheduling is the model of pre-determining where and when tasks will run, instead of allowing this to be determined at runtime. Threads.@threads is "quasi-static" in the sense that it cuts the loop so that it spawns only as many tasks as there are threads, essentially assigning each thread one even chunk of the input data.

Does this lack of runtime overhead mean that static scheduling is "better"? No, it simply has trade-offs. Static scheduling assumes that the runtime of each block is the same. For this specific case, where there is a fixed number of loop iterations for the dynamical system, every compute_trajectory_mean5 call costs exactly the same, and thus static scheduling will be more efficient. However, there are many cases where this is not efficient. For example:

function sleepmap_static()
  out = Vector{Int}(undef,24)
  Threads.@threads for i in 1:24
    sleep(i/10)
    out[i] = i
  end
  out
end
isleep(i) = (sleep(i/10);i)
function sleepmap_spawn()
  tasks = [Threads.@spawn(isleep(i)) for i in 1:24]
  out = [fetch(t) for t in tasks]
end

@btime sleepmap_static()
@btime sleepmap_spawn()
30.056 s (106 allocations: 3.73 KiB)
  2.400 s (222 allocations: 14.78 KiB)
24-element Vector{Int64}:
  1
  2
  3
  4
  5
  6
  7
  8
  9
 10
  ⋮
 16
 17
 18
 19
 20
 21
 22
 23
 24

The reason why this occurs is how the scheduling chunked my calculation. On my computer:

Threads.nthreads()
1

With a single thread, Threads.@threads creates just one chunk containing all 24 iterations, so the static version runs every sleep back to back:

sum(i/10 for i in 1:24)
30.0

30 seconds (which is precisely the result!). The Threads.@spawn version instead creates 24 separate tasks, and since sleep is a yield point they all wait concurrently, so the total time is roughly the longest single sleep, about 2.4 seconds. The same effect shows up with more threads: static scheduling only runs as fast as its most heavily loaded chunk (with 6 threads, for instance, the first chunk would sleep sum(i/10 for i in 1:4) = 1 second while the last would sleep sum(i/10 for i in 21:24) = 9 seconds), whereas dynamic scheduling lets a new task start the moment another finishes, so the shorter tasks get packed together and execution completes sooner. Thus whether dynamic or static scheduling is beneficial depends on the problem and on how evenly the static schedule distributes the work.

Possible Project

Note that this can extend to external library calls as well. FFTW.jl recently gained support for this. A possible final project would be to do a similar change to OpenBLAS.

A Teaser for Alternative Parallelism Models

Simplest Parallel Code

A = rand(10000,10000)
B = rand(10000,10000)
A*B
10000×10000 Matrix{Float64}:
 2520.43  2509.93  2530.56  2479.45  …  2525.76  2492.29  2505.22  2516.22
 2516.91  2498.48  2516.01  2442.42     2485.46  2483.21  2457.58  2506.24
 2542.6   2525.06  2541.04  2503.69     2534.01  2498.83  2502.14  2535.49
 2498.75  2501.59  2514.22  2455.37     2508.07  2473.83  2457.19  2508.37
 2480.28  2464.52  2473.04  2425.29     2473.97  2442.74  2422.0   2476.24
 2516.95  2513.74  2518.48  2471.68  …  2510.43  2481.46  2487.73  2504.29
 2517.9   2506.88  2513.57  2462.13     2525.29  2469.06  2484.98  2506.58
 2533.06  2520.76  2516.27  2469.17     2517.13  2466.79  2481.28  2515.57
 2541.67  2523.58  2534.16  2498.13     2545.4   2518.1   2497.06  2530.8
 2521.22  2504.13  2511.01  2460.54     2513.55  2467.61  2481.83  2513.73
    ⋮                                ⋱                             
 2526.14  2503.73  2496.28  2456.94     2504.17  2469.85  2478.53  2519.74
 2508.51  2503.86  2526.15  2458.95     2517.69  2488.93  2499.91  2524.48
 2481.56  2472.37  2509.8   2451.54     2486.58  2456.1   2451.67  2488.04
 2521.09  2512.75  2528.48  2466.56     2530.12  2494.71  2486.07  2520.36
 2515.54  2517.56  2529.19  2465.65  …  2524.21  2486.17  2490.23  2521.83
 2531.36  2518.57  2526.09  2472.48     2515.84  2477.56  2480.28  2513.34
 2506.8   2479.99  2509.88  2450.32     2507.76  2497.35  2476.32  2498.9
 2514.74  2503.63  2507.47  2446.72     2516.97  2475.97  2466.52  2495.77
 2516.54  2513.72  2523.2   2466.02     2521.23  2479.37  2473.83  2504.91

If you are using a computer that has N cores, then this will use N cores. Try it and look at your resource usage!
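Those threads belong to the BLAS library rather than to Julia's task scheduler, and (assuming the default OpenBLAS backend) they can be capped separately, which is useful when mixing BLAS calls with Julia-level threading. A sketch:

using LinearAlgebra
BLAS.set_num_threads(4)   # limit the threads used by BLAS calls such as A*B
A*B                       # now uses at most 4 BLAS threads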

Array-Based Parallelism

The simplest form of parallelism is array-based parallelism. The idea is that you use some construction of an array whose operations are already designed to be parallel under the hood. In Julia, some examples of this are:

  • DistributedArrays (Distributed Computing)

  • Elemental

  • MPIArrays

  • CuArrays (GPUs)

This is not a Julia specific idea either.

BLAS and Standard Libraries

The basic linear algebra calls are all handled by a set of libraries which follow the same interface known as BLAS (Basic Linear Algebra Subroutines). It's divided into 3 portions:

  • BLAS1: Element-wise operations (O(n))

  • BLAS2: Matrix-vector operations (O(n^2))

  • BLAS3: Matrix-matrix operations (O(n^3))

BLAS implementations, like OpenBLAS and Intel MKL, are highly optimized, so essentially every numerical language and library builds on similar underlying BLAS implementations. Extensions to these, known as LAPACK, include operations like factorizations and are included in these standard libraries. These are all multithreaded. The reason this is a good level to target is that the operation count is high enough for parallelism to be efficient even when applied only at this level: a matrix multiplication can take on the order of seconds, minutes, hours, or even days, and these are all highly parallel operations. This means you can get quite far just by parallelizing at this level, which happens to be the bottleneck for a lot of scientific computing codes.
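As a rough mapping of the three levels onto Julia calls (these dispatch to the underlying BLAS for dense floating-point arrays):

using LinearAlgebra
x = rand(1000); y = rand(1000)
C = rand(1000,1000); D = rand(1000,1000)
axpy!(2.0, x, y)         # BLAS1: y .= 2.0 .* x .+ y, O(n) work
mul!(similar(x), C, x)   # BLAS2: matrix-vector product, O(n^2) work
mul!(similar(C), C, D)   # BLAS3: matrix-matrix product, O(n^3) work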

This is also commonly the level at which GPU computing occurs in machine learning libraries for reasons which we will explain later.

MPI

Well, this is a big topic and we'll address this one later!

Conclusion

The easiest forms of parallelism are:

  • Embarrassingly parallel

  • Array-level parallelism (built into linear algebra)

Exploit these when possible.