
Yesterday, the Julia team announced the alpha release of v1.3.0, an early preview of Julia version 1.3.0, which is expected to be out in a couple of months. The alpha release includes a preview of a new threading interface for Julia programs called multi-threaded task parallelism.

In the task parallelism model, any piece of a program can be marked for parallel execution, and a ‘task’ is started to run that code automatically on an available thread. This works much like garbage collection (GC) for memory: users can freely spawn even millions of tasks without worrying about how the libraries they call are implemented, and the model composes portably across Julia packages.
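
As a rough sketch of what this looks like in code (not taken verbatim from the announcement), the recursive Fibonacci function often used to illustrate Cilk-style parallelism can be written with the new `Threads.@spawn` macro:

```julia
import Base.Threads: @spawn

# Classic Cilk-style example: each recursive call spawns a lightweight
# task, and the runtime schedules those tasks onto the available threads.
function fib(n::Int)
    n < 2 && return n
    t = @spawn fib(n - 2)        # runs concurrently with the other branch
    return fib(n - 1) + fetch(t) # fetch waits for the spawned task's result
end

# Start Julia with e.g. JULIA_NUM_THREADS=4 to use more than one thread.
println(fib(20))
```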

Read Also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Jeff Bezanson and Jameson Nash from Julia Computing, and Kiran Pamnany from Intel, say Julia's task parallelism is “inspired by parallel programming systems like Cilk, Intel Threading Building Blocks (TBB) and Go.”

With multi-threaded task parallelism, Julia can schedule many parallel tasks that call library functions which are themselves parallel, without flooding the CPUs with threads. This composability matters especially for a high-level language, where a large amount of work is done by calling library functions.
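
For illustration, here is a hypothetical sketch of that composability (`parallel_sum` is a name of our own, not an API from the release): a “library” function that is internally parallel is itself called from several spawned tasks, and all the nested tasks share the same fixed pool of threads.

```julia
import Base.Threads: @spawn

# A hypothetical "library" routine that is internally parallel:
# it spawns a task for one half of the work and sums the other half itself.
function parallel_sum(v)
    mid = length(v) ÷ 2
    t = @spawn sum(view(v, 1:mid))
    return fetch(t) + sum(view(v, mid+1:length(v)))
end

# Caller-level parallelism on top of the already-parallel library call.
# The nested tasks all share one fixed pool of threads, so the CPUs are
# not flooded with extra operating-system threads.
data = [rand(1_000_000) for _ in 1:8]
tasks = map(d -> (@spawn parallel_sum(d)), data)
println(sum(fetch.(tasks)))
```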

Challenges faced while implementing task parallelism

Allocating and switching task stacks

Each task requires its own execution stack, distinct from the usual process and thread stacks provided by Unix operating systems. Julia has an alternate stack-switching implementation that trades time for memory on each task switch; however, it may not be compatible with foreign code that uses cfunction. This implementation is used when stacks would otherwise consume too much address space.
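
For context, “foreign code that uses cfunction” refers to handing a Julia function to C as a callback pointer via the `@cfunction` macro, as in this sketch adapted from the qsort example in the Julia manual:

```julia
# A Julia comparison function with a C-compatible signature...
function mycompare(a, b)::Cint
    return (a < b) ? -1 : ((a > b) ? +1 : 0)
end

# ...turned into a C function pointer with @cfunction and handed to
# libc's qsort via ccall. Per the text above, the memory-saving
# stack-switching implementation may not support foreign code that
# calls back into Julia this way.
mycompare_c = @cfunction(mycompare, Cint, (Ref{Cdouble}, Ref{Cdouble}))

A = [1.3, -2.7, 4.4, 3.1]
ccall(:qsort, Cvoid, (Ptr{Cdouble}, Csize_t, Csize_t, Ptr{Cvoid}),
      A, length(A), sizeof(eltype(A)), mycompare_c)
println(A)   # the array is now sorted in place by the C library
```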

Event loop thread issues an async signal

If a thread needs the event loop thread to wake up, it issues an async signal. This may happen because another thread has scheduled new work, because a thread is about to run garbage collection, or because a thread wants to take the I/O lock to perform I/O.

Task migration across system threads

In general, a task may start running on one thread, block for a while, and then restart on another. However, Julia uses thread-local variables every time memory is allocated internally, so for now a task always runs on the thread it first started running on. To support this, Julia uses the concept of a sticky task, a task that must run on a given thread, together with per-thread queues of runnable tasks.
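
A small hypothetical sketch of how to observe this stickiness (the helper code is ours, not from the announcement): spawn a few tasks, let them block, and report which thread each one runs on with `Threads.threadid()`.

```julia
import Base.Threads: @spawn, threadid, nthreads

println("running with ", nthreads(), " threads")

# Spawn a few tasks, let each block briefly, and record the thread it
# runs on. In this release a task stays on the thread it first started
# on (a "sticky" task); migration between threads is a future goal.
tasks = map(1:4) do i
    @spawn begin
        sleep(0.1)          # block; the task is resumed later...
        (i, threadid())     # ...on the same thread it started on
    end
end

foreach(t -> println(fetch(t)), tasks)
```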

Sleeping idle threads

To avoid keeping every CPU at 100% usage all the time, idle threads are put to sleep. This is a tricky synchronization problem, because some threads might be scheduling new work while others are deciding to go to sleep.

Overhead of a dedicated scheduler task

When a task blocks, the scheduler is called to pick another task to run. But on what stack does that scheduler code run? A dedicated scheduler task is possible; however, there is less overhead if the scheduler code runs in the context of the recently-blocked task. The scheduler can then pull a task out of its queue and, in some cases, avoid switching away at all.

Classic bugs

The Julia team faced many difficult bugs while implementing the multi-threading functionality. One of them was a mysterious bug on Windows that was fixed by flipping a single bit.

Future goals for Julia's task parallelism

  • improve the performance of task switches and reduce I/O latency
  • allow task migration
  • use multiple threads in the compiler
  • improve debugging tools
  • provide alternate schedulers

Developers are impressed with the new multi-threaded parallelism functionality.

A user on Hacker News comments “Great to see this finally land – thanks for all the team’s work. Looking forward to giving it a whirl. Threading is something of a prerequisite for acceptance as a serious language among many folks. So great to not just check that box, but to stick the pen right through it. The devil is always in the details, but from the doc the interface looks pretty nice.”

Another user says, “This is huge! I was testing out the master branch a few days ago and the parallelism improvements were amazing.”

Many users are expecting Julia to challenge Python in the future.

A comment on Hacker News reads “Not only is this huge for Julia, but they’ve just thrown down the gauntlet. The status quo has been upset. I expect Julia to start eating everyone’s lunch starting with Python. Every language can use good concurrency & parallelism support and this is the biggest news for all dynamic languages.”

Another user says, “I worked in a computational biophysics department with lots of python/bash/R and I was the only one who wrote lots of high-performance code in Julia. People were curious about the language but it was still very much unknown. Hope we will see a broader adoption of Julia in the future – it’s just that it is much better for the stuff we do on a daily basis.”

To learn how to use task parallelism in Julia, head over to the Julia blog.

Read Next

Mozilla is funding a project for bringing Julia to Firefox and the general browser environment

Announcing Julia v1.1 with better exception handling and other improvements

Julia for machine learning. Will the new language pick up pace?
