Parallel distributed-memory version of Gridap.jl.
GridapDistributed.jl provides fully-parallel distributed-memory data structures for the Finite Element (FE) numerical solution of Partial Differential Equations (PDEs) on parallel computers, from multi-core CPU desktops to HPC clusters and supercomputers. These distributed data structures are designed to mirror as far as possible their counterparts in the Gridap.jl software package, while implementing/leveraging most of their abstract interfaces. As a result, sequential Julia scripts written in the high-level API of Gridap.jl can be used almost verbatim, up to minor adjustments, in a parallel context using GridapDistributed.jl. This is indeed one of the main advantages of GridapDistributed.jl and a major design goal that we pursue.
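To make the "almost verbatim" claim concrete, below is a minimal sketch of a distributed Poisson driver in the style of the package's documented examples. The exact entry points (e.g., `with_mpi` and the `ranks`-aware `CartesianDiscreteModel` constructor) may vary across GridapDistributed.jl/PartitionedArrays.jl versions, so treat it as illustrative rather than normative:

```julia
using Gridap
using GridapDistributed
using PartitionedArrays

# Solve -Δu = 0 on the unit square with manufactured solution u(x) = x₁ + x₂.
# The body of `main` is plain Gridap.jl code; only the `ranks`-aware model
# constructor and the outer `with_mpi` wrapper are parallel-specific.
function main(ranks)
  domain = (0, 1, 0, 1)
  parts  = (2, 2)   # 2x2 processor layout, i.e., 4 MPI tasks
  cells  = (8, 8)   # global number of mesh cells per direction
  model = CartesianDiscreteModel(ranks, parts, domain, cells)
  u(x) = x[1] + x[2]                        # manufactured exact solution
  reffe = ReferenceFE(lagrangian, Float64, 1)
  V = TestFESpace(model, reffe, dirichlet_tags="boundary")
  U = TrialFESpace(u, V)
  Ω  = Triangulation(model)
  dΩ = Measure(Ω, 2)
  a(u, v) = ∫(∇(v) ⋅ ∇(u))dΩ
  l(v) = ∫(0.0 * v)dΩ                       # f = -Δu = 0 for this u
  op = AffineFEOperator(a, l, U, V)
  uh = solve(op)
  writevtk(Ω, "results", cellfields=["uh" => uh])
end

with_mpi() do distribute
  ranks = distribute(LinearIndices((4,)))
  main(ranks)
end
```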
At present, GridapDistributed.jl provides scalable parallel data structures for grid handling, finite element space setup, and distributed linear system assembly. For the latter, i.e., global distributed sparse matrices and vectors, GridapDistributed.jl relies on PartitionedArrays.jl as its distributed linear algebra backend. Among other benefits, this implies that all GridapDistributed.jl driver programs can be run either in sequential execution mode (very useful for developing and debugging parallel programs), see the test/sequential/ folder for examples, or in message-passing (MPI) execution mode (when you want to deploy the code on an actual parallel computer and perform fast simulations), see the test/mpi/ folder for examples.
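As a hedged sketch of how the two execution modes are selected, assuming the `with_debug`/`with_mpi` entry points of recent PartitionedArrays.jl releases and the `main` driver function from the example above:

```julia
using PartitionedArrays

# Sequential (debug) execution mode: the 4 "ranks" are emulated in a single
# Julia process, which makes stepping through the code with a debugger easy.
with_debug() do distribute
  ranks = distribute(LinearIndices((4,)))
  main(ranks)   # the driver function sketched above
end

# MPI execution mode: launch with, e.g., `mpiexecjl -n 4 julia driver.jl`;
# each MPI task then owns exactly one of the 4 ranks.
with_mpi() do distribute
  ranks = distribute(LinearIndices((4,)))
  main(ranks)
end
```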
GridapDistributed.jl is not a parallel mesh generator. Grid handling currently available within GridapDistributed.jl is restricted to Cartesian-like meshes of arbitrary-dimensional, topologically n-cube domains. See GridapP4est.jl for peta-scale handling of meshes that can be decomposed as forests of quadtrees/octrees of the computational domain, and GridapGmsh.jl for unstructured mesh generation.

GridapDistributed.jl is not a library of parallel linear solvers at this moment. The linear solver kernel within GridapDistributed.jl, leveraged, e.g., via the backslash operator \, is just a sparse LU solver applied to the global system gathered on a master task (and thus obviously not scalable, but very useful for testing and debugging purposes). It is in our future plans to provide highly scalable linear and nonlinear solvers tailored for the FE discretization of PDEs. For the moment, see GridapPETSc.jl to use the full set of scalable linear and nonlinear solvers in the PETSc numerical software package.
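For illustration, here is a hedged sketch of swapping the default gathered LU solve for a scalable PETSc Krylov solver via GridapPETSc.jl. The options string is a hypothetical choice and `op` is the AffineFEOperator from the earlier driver sketch; consult the GridapPETSc.jl documentation for the authoritative API:

```julia
using Gridap
using GridapPETSc

# Hypothetical PETSc options: conjugate gradients preconditioned with
# algebraic multigrid, with a relative tolerance of 1e-8.
options = "-ksp_type cg -pc_type gamg -ksp_rtol 1.0e-8"

GridapPETSc.with(args=split(options)) do
  ls = PETScLinearSolver()      # wraps a PETSc KSP solver
  fesolver = LinearFESolver(ls)
  uh = solve(fesolver, op)      # `op` is the AffineFEOperator built above
end
```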
Before using the GridapDistributed.jl package, one needs to build the MPI.jl package. We refer to the main documentation of MPI.jl for configuration instructions.
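As a rough sketch, and assuming a recent MPI.jl release (which delegates MPI library selection to MPIPreferences.jl), the setup might look as follows; check the MPI.jl documentation for the procedure that applies to your version:

```julia
using Pkg
Pkg.add(["MPI", "MPIPreferences"])

# Optional: bind MPI.jl to a system-provided MPI library instead of the
# bundled one (usually what you want on an HPC cluster).
using MPIPreferences
MPIPreferences.use_system_binary()

# Install the `mpiexecjl` wrapper script used below to launch drivers.
using MPI
MPI.install_mpiexecjl()
```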
In order to execute an MPI-parallel GridapDistributed.jl driver, we can leverage the mpiexecjl script provided by MPI.jl (installed above via MPI.install_mpiexecjl; see the MPI.jl documentation for details). As an example, assuming that we are located in the root directory of GridapDistributed.jl, a hypothetical MPI-parallel GridapDistributed.jl driver named driver.jl can be executed on 4 MPI tasks as:
```
mpiexecjl --project=. -n 4 julia -J sys-image.so driver.jl
```
where -J sys-image.so is optional, but highly recommended in order to reduce JIT compilation times. Here, sys-image.so is assumed to be a Julia system image pre-generated for the driver at hand using the PackageCompiler.jl package. See the test/TestApp/compile folder for example scripts with system image generation, along with a test application whose source is available at test/TestApp/. These scripts are triggered from the .github/workflows/ci.yml file by GitHub CI actions.
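A minimal sketch of generating such a system image with PackageCompiler.jl (the package list, output file name, and warm-up script are assumptions to be adapted to the driver at hand):

```julia
using PackageCompiler

create_sysimage(
  [:Gridap, :GridapDistributed, :PartitionedArrays, :MPI];
  sysimage_path = "sys-image.so",
  # Hypothetical warm-up script; its execution is traced to decide which
  # methods get compiled into the system image.
  precompile_execution_file = "driver.jl",
)
```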
A warning when executing MPI-parallel drivers: data race conditions may occur in the generation of precompiled modules in the cache when several MPI tasks trigger precompilation simultaneously.
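A common workaround (an assumption on our part, not an officially prescribed procedure) is to precompile the project in a single Julia process before launching the MPI run, so that all tasks find a warm cache:

```
julia --project=. -e 'using Pkg; Pkg.precompile()'
mpiexecjl --project=. -n 4 julia --project=. driver.jl
```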