2 min read

Yesterday, the team behind JuliaDiffEq released DifferentialEquations.jl v6.4.0, a suite for numerically solving differential equations in Julia. This release gives users the ability to run the ODE solvers on GPUs, along with automated tooling for faster broadcast, matrix-free Newton-Krylov, better Jacobian re-use algorithms, reduced memory use, and more.

What’s new in DifferentialEquations.jl v6.4.0?

Full GPU support in ODE solvers

With this release, the stiff ODE solvers allow expensive calculations, such as those in neural ODEs or PDE discretizations, to utilize GPU acceleration. The release also allows the initial condition to be a GPUArray, and the internal methods avoid indexing so that all computations take place on the GPU without data transfers.
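
Here is a minimal sketch of the idea, assuming a working GPU and the CUDA.jl array package (at the time of the release the GPU array type came from CuArrays.jl); the toy linear ODE is my own illustration, not the announcement's example:

```julia
using DifferentialEquations, CUDA, LinearAlgebra

# Toy linear ODE u' = A*u with both the operator and the state kept on the GPU
A  = cu(randn(Float32, 100, 100)) ./ 100f0
u0 = cu(rand(Float32, 100))

f!(du, u, p, t) = mul!(du, A, u)      # matrix-vector product, stays on the GPU

prob = ODEProblem(f!, u0, (0f0, 1f0))
sol  = solve(prob, Tsit5())           # no indexing internally, no host transfers
```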

Fast DiffEq-Specific Broadcast

This release comes with a broadcast wrapper that passes extra information to the compiler in the differential equation solver’s internals, encoding no-aliasing and sizing assumptions that are normally not possible to make. The internals now use a special @.. broadcast macro, which also turns out to be faster than standard loops.
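
As a rough sketch of what the macro looks like at the call site, assuming it is reachable as DiffEqBase’s @.. (the implementation has since moved packages), a fused update step might be written as:

```julia
using DiffEqBase: @..   # the solver-internal fused broadcast macro

u, uprev, k = rand(1000), rand(1000), rand(1000)
dt = 0.1

# Equivalent to u .= uprev .+ dt .* k, but carrying the extra no-alias
# and sizing assumptions that the solver internals can exploit
@.. u = uprev + dt * k
```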

Smart linsolve defaults

This release comes with smarter linsolve defaults, which automatically detect the BLAS installation and utilize RecursiveFactorization.jl to speed up the linear solves inside the stiff ODE solvers.

The linear solver also automatically switches to a form that works for sparse Jacobians, and even banded matrices and Jacobians stored on the GPU are now handled automatically.
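
A minimal sketch of how the choice is exposed, using the linsolve keyword API of that era (LinSolveGMRES comes from DiffEqBase; newer releases route this through LinearSolve.jl), on the classic Robertson stiff problem:

```julia
using OrdinaryDiffEq

# Robertson's stiff chemical kinetics problem
function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] =  3e7 * y2^2
end
prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))

# Default: the smart linsolve choice picks the factorization for you
sol_default = solve(prob, KenCarp4())

# Explicitly request GMRES instead of a dense factorization
sol_gmres = solve(prob, KenCarp4(linsolve = LinSolveGMRES()))
```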

Automated J*v Products via Autodifferentiation

Users can now easily use GMRES without constructing the full Jacobian matrix: J*v is computed directly via automatic differentiation as the directional derivative in the direction of v.
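
A sketch of the workflow, assuming the JacVecOperator constructor from that era’s DiffEqOperators.jl (this functionality has since been reorganized) and a made-up right-hand side:

```julia
using OrdinaryDiffEq, DiffEqOperators

# Hypothetical stiff-ish system; only the action of the Jacobian is ever needed
f!(du, u, p, t) = (du .= -2.0 .* u .+ u .^ 2 ./ 100)
u0 = rand(100)

# Lazy J*v operator: directional derivatives via autodiff, no full Jacobian stored
jv   = JacVecOperator{Float64}(f!, u0)
ff   = ODEFunction(f!, jac_prototype = jv)
prob = ODEProblem(ff, u0, (0.0, 1.0))

# Newton-Krylov: GMRES only ever asks for J*v products
sol = solve(prob, TRBDF2(linsolve = LinSolveGMRES()))
```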

Performance improvement

With this release, the performance of implicit methods such as KenCarp4 has been improved. DiffEqBiological.jl can now handle large reaction networks, parsing them much faster and building Jacobians that utilize sparse matrices, though there is still plenty of room for improvement.

Partial Neural ODEs

This release also gives a glimpse of working examples of partial neural differential equations, i.e., differential equations in which only part of the model is a neural network while the rest is pre-specified. These examples allow for batched data and GPU acceleration.
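
To illustrate the "partial" structure only (a hypothetical model of my own, with training via DiffEqFlux adjoints, batching, and GPU transfer omitted), the right-hand side mixes pre-specified dynamics with a Flux network:

```julia
using DifferentialEquations, Flux

nn = Chain(Dense(2, 16, tanh), Dense(16, 2))   # the learned portion of the model

# du/dt = known linear decay (pre-specified physics) + neural-network correction
function partial_node!(du, u, p, t)
    du .= -0.5f0 .* u .+ nn(u)
end

u0   = Float32[1.0, 0.0]
prob = ODEProblem(partial_node!, u0, (0f0, 1f0))
sol  = solve(prob, Tsit5())
```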

Memory optimization 

This release optimizes the low-memory Runge-Kutta methods for hyperbolic or advection-dominated PDEs. These methods now use only the minimal number of registers required by the method, so large PDE discretizations can make use of DifferentialEquations.jl without a loss of memory efficiency.
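
A minimal sketch with a hypothetical periodic upwind discretization of 1D advection, solved with one of the low-storage Runge-Kutta methods shipped in OrdinaryDiffEq:

```julia
using OrdinaryDiffEq

# Hypothetical periodic upwind discretization of u_t + u_x = 0, unit grid spacing
function advection!(du, u, p, t)
    N = length(u)
    du[1] = -(u[1] - u[N])          # periodic boundary
    @inbounds for i in 2:N
        du[i] = -(u[i] - u[i-1])
    end
end

u0   = sin.(range(0, 2π, length = 1024))
prob = ODEProblem(advection!, u0, (0.0, 1.0))

# Low-storage Runge-Kutta method: only a few registers are kept in memory
sol = solve(prob, CarpenterKennedy2N54(), save_everystep = false)
```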

Robust callbacks

The ContinuousCallback implementation has been made more robust in this release, with improved handling of double event detection.
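
For context, a ContinuousCallback triggers when a condition function crosses zero; the classic bouncing-ball example (my own illustration, not from the announcement) looks like this:

```julia
using OrdinaryDiffEq

# Bouncing ball: the event fires whenever the height crosses zero
function ball!(du, u, p, t)
    du[1] = u[2]        # height changes with velocity
    du[2] = -9.81       # constant gravitational acceleration
end

condition(u, t, integrator) = u[1]                             # rootfind on the height
affect!(integrator) = (integrator.u[2] = -0.9integrator.u[2])  # damped rebound
cb = ContinuousCallback(condition, affect!)

prob = ODEProblem(ball!, [10.0, 0.0], (0.0, 15.0))
sol  = solve(prob, Tsit5(), callback = cb)
```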

To learn more, check out the official announcement.
