JuliaCon 2020 | Auto-Optimization and Parallelism in DifferentialEquations.jl | Chris Rackauckas


You might not know all of the latest methods in differential equations, all of the best knobs to tweak, how to properly handle sparsity, or how to parallelize your code. Or you might just write bad code. Don't you wish someone would just fix that for you automatically? It turns out that the latest feature of DifferentialEquations.jl, auto-optimize, can do just that. This talk is both a demo of this cool new feature and a description of how it was created, so that other package authors can copy the approach.

A general compiler can only have so much knowledge, but when we know that someone is solving a differential equation, there is a great deal we can exploit. We know that differential equations of different sizes do better or worse with different solver methods, that the sparsity of the Jacobian has a large impact on the speed of the computation, that the user's f function describing the ODE can be considered independently from the rest of the program, and so on. In DifferentialEquations.jl, we have codified these ideas into a toolchain that automatically optimizes a user's f function and spits out a more optimized DEProblem.
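As a concrete starting point, here is the kind of plain user code this toolchain takes as input: a minimal sketch using the standard DifferentialEquations.jl API (the Lorenz system is an illustrative choice, not an example from the talk):

```julia
using DifferentialEquations

# A typical hand-written ODE right-hand side: the user's `f` function.
function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0    = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p     = [10.0, 28.0, 8 / 3]
prob  = ODEProblem(lorenz!, u0, tspan, p)
sol   = solve(prob)  # solver defaults are chosen from problem traits
```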

This works by first tracing into a symbolic sublanguage, ModelingToolkit.jl. Using tasks with a timeout, we attempt an auto-trace which, if successful, gives us a complete symbolic mathematical description of the user's numerical code. We can then symbolically analyze the function to derive an analytical expression for the user's Jacobian, and even symbolically factorize that Jacobian if it is doable in the allotted time. From the symbolic world we can then auto-parallelize the generated Julia code, chunking the output into tasks for multithreading, or use a cost model to determine that the ODE is large enough to automatically distribute (with auto-GPU support coming soon).
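A minimal sketch of that symbolic step, reusing `prob` from the snippet above and assuming the ModelingToolkit.jl API from around the time of this talk (`modelingtoolkitize` is the tracing entry point; keyword names may vary across versions):

```julia
using ModelingToolkit

# Trace the numerical f function into a symbolic ODESystem.
sys = modelingtoolkitize(prob)

# Rebuild the problem with a symbolically derived, analytical Jacobian.
# (`sparse = true` would additionally emit a sparse Jacobian where that pays off;
# the same symbolic form is what the auto-parallelization works from.)
prob_opt = ODEProblem(sys, prob.u0, prob.tspan, prob.p; jac = true)
sol_opt  = solve(prob_opt)
```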

If the system is not symbolically traceable (e.g., it contains a while loop whose behavior depends on an input value, which is quite uncommon), then we fall back to IR-based and adaptive analysis. We will demonstrate how SparsityDetection.jl can automatically identify the sparsity pattern of the Jacobian of a Julia code, and how SparseDiffTools.jl then accelerates the solution of stiff equations by performing a matrix coloring and optimizing the Jacobian construction for the problem. We will then discuss how DifferentialEquations.jl automatically picks the solver algorithm, defaulting to methods that can switch between stiff and non-stiff integrators, determining stiffness on the fly with heuristics.
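The sketch below follows the SparsityDetection.jl / SparseDiffTools.jl workflow from that era on a made-up tridiagonal system (not code from the talk); `jacobian_sparsity`, `matrix_colors`, and the `jac_prototype`/`colorvec` keywords are the pieces the paragraph refers to:

```julia
using DifferentialEquations, SparsityDetection, SparseDiffTools, SparseArrays

const N = 32

# A stiff, nonlinear RHS whose Jacobian happens to be tridiagonal.
function rhs!(du, u, p, t)
    du[1] = u[2] - 2u[1] - u[1]^3
    for i in 2:N-1
        du[i] = u[i-1] - 2u[i] + u[i+1] - u[i]^3
    end
    du[N] = u[N-1] - 2u[N] - u[N]^3
end

# 1. Detect the Jacobian's sparsity pattern directly from the Julia code.
input   = rand(N)
output  = similar(input)
pattern = jacobian_sparsity(rhs!, output, input, nothing, 0.0)
jac_sparsity = Float64.(sparse(pattern))

# 2. Color the columns so a few differentiation sweeps fill the whole Jacobian.
colors = matrix_colors(jac_sparsity)

# 3. Hand both to the solver; stiff methods now build the Jacobian at the colored cost.
f = ODEFunction(rhs!; jac_prototype = jac_sparsity, colorvec = colors)
prob_stiff = ODEProblem(f, ones(N), (0.0, 10.0))
sol = solve(prob_stiff, TRBDF2())

# An explicit stiffness-switching composite, in the spirit of the auto-selecting defaults:
sol_auto = solve(prob_stiff, AutoTsit5(Rosenbrock23()))
```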

Together, these auto-optimizations have been shown to speed up the code of even experienced Julia programmers by over 100x, by enabling sparsity-coloring optimizations they may not have known about and by parallelizing code that is either difficult to parallelize by hand or is automatically generated and thus hard to modify.

https://figshare.com/articles/present...
https://sciml.ai/

Timestamps:

00:00 Welcome!
00:10 Help us add time stamps or captions to this video! See the description for details.

Want to help add timestamps to our YouTube videos to help with discoverability? Find out more here: https://github.com/JuliaCommunity/You...

Interested in improving the auto generated captions? Get involved here: https://github.com/JuliaCommunity/You...
