
Unified long-range electrostatics and dynamic protonation for realistic biomolecular simulations on the Exascale
In this DFG-supported project we target a flexible, portable and scalable solver for potentials and forces, which is a prerequisite for exascale applications in particle-based simulations with long-range interactions in general. As a particularly challenging example that will prove and demonstrate the capability of our concepts, we use the popular molecular dynamics (MD) simulation software GROMACS. MD simulation has become a crucial tool for the scientific community, especially as it gives access to time and length scales that are difficult or impossible to probe experimentally. Moreover, it is a prototypic example of a general class of complex multiparticle systems with long-range interactions.
MD simulations elucidate the detailed, time-resolved behaviour of biology’s nanomachines. From a computational point of view, they are extremely challenging for two main reasons. First, to properly describe the functional motions of biomolecules, the long-range effects of the electrostatic interactions must be explicitly accounted for. Therefore, techniques like the particle-mesh Ewald (PME) method were adopted, which, however, severely limit scaling to large numbers of cores due to their global communication requirements. The second challenge is to realistically describe the time-dependent location of (partial) charges, as e.g. the protonation states of the molecules depend on their time-dependent electrostatic environment. Here we address both tightly interlinked challenges through the development, implementation, and optimization of a unified algorithm for long-range interactions that will account for realistic, dynamic protonation states and at the same time overcome current scaling limitations.
Download and test our GPU-FMM for GROMACS
If you want to give our GPU-FMM a test drive, please download the tar archive below, unpack it with tar -xvzf, and install it just like a regular GROMACS 2019. Our CUDA FMM can be used as a PME replacement by choosing coulombtype = FMM in the .mdp input parameter list. The tree depth d and the multipole order p are set with the fmm-override-tree-depth and fmm-override-multipole-order input parameters, respectively. On request (provide your ssh key), the code can also be checked out from our git repository git@fmsolvr.fz-juelich.de:gromacs.
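As a quick orientation, a minimal sketch of these steps could look as follows; the archive and directory names, the install prefix, and the tree depth and multipole order values are illustrative assumptions, not part of the distributed package:

# Unpack and build like a regular GROMACS 2019 (names and paths below are assumptions)
tar -xvzf gromacs-2019-fmm.tar.gz
mkdir build && cd build
cmake ../gromacs-2019-fmm -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-fmm
make -j 8 && make install

# Select the FMM instead of PME in the .mdp parameter file, for example:
cat >> benchmark.mdp << 'EOF'
coulombtype                  = FMM
fmm-override-tree-depth      = 3     ; example tree depth d
fmm-override-multipole-order = 8     ; example multipole order p
EOF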
GROMACS with GPU-FMM including benchmark systems
- GROMACS 2019 with CUDA FMM source code (66.64 MB)
- GROMACS input files for salt water system (1.09 MB)
- GROMACS input files for multi-droplet (aerosol) system (1.34 MB)
- Multi-droplet (aerosol) benchmark with FMM electrostatics, .tpr (3.05 MB)
- Multi-droplet (aerosol) benchmark with PME electrostatics, .tpr (1.97 MB)
- runfmm.py (1.85 kB)
For running the GPU FMM benchmarks, you need to set the following environment variable:
export GMX_USE_GPU_BUFFER_OPS=1
For sparse systems such as the aerosol system, you should additionally set
export FMM_SPARSE=1
for optimum FMM performance.
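A hedged example of how the aerosol benchmark might be launched, assuming the downloaded .tpr has been renamed to aerosol_fmm.tpr and using arbitrary step and thread counts:

# Environment variables for the GPU FMM, as described above
export GMX_USE_GPU_BUFFER_OPS=1
export FMM_SPARSE=1   # only needed for sparse systems such as the aerosol benchmark

# Run the FMM benchmark (file name, step and thread counts are assumptions)
gmx mdrun -s aerosol_fmm.tpr -nsteps 10000 -ntmpi 1 -ntomp 8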
Running FMM in standalone mode
You can also compile and run the GPU-FMM without GROMACS integration. The relevant code is in the ./src/gromacs/fmm/fmsolvr-gpu subdirectory of the above tar archive after unpacking. Compile it with a script like this:
# in bash: use the system's gcc/g++ as host compilers for the CUDA build
export CC=$( which gcc )
export CXX=$( which g++ )
# configure the standalone FMM (source tree via -H, build directory via -B)
cmake -H../git-gromacs-gmxbenchmarking/src/gromacs/fmm/fmsolvr-gpu -B. -DFMM_STANDALONE=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.0
make
The python script runfmm.py can be used to benchmark the standalone version of the GPU-FMM.
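The command-line options of runfmm.py are defined in the script itself; a bare invocation from the standalone build directory, assuming Python 3 is available, would simply be:

# Assumption: run from the standalone build directory; see runfmm.py for its actual options
python3 runfmm.py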