Best Bang for Your Buck!

Cost-efficient MD simulations with GROMACS

Atomic-detail simulations of large biomolecular systems can easily occupy a compute cluster for weeks or even months. We therefore make continuous efforts to ensure that our computing power is used as efficiently as possible, including network fine-tuning and code optimizations to reach the best possible parallel scaling.

More bang for your buck: Improved use of GPU nodes for GROMACS 2018

We identify hardware that is optimal for producing molecular dynamics trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark GROMACS performance on a diverse set of compute nodes and relate it to the cost of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes.

However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even further towards the GPU. Hence, nodes optimized for GROMACS 2018 and later versions offer a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift towards GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware.
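As an illustration of this offloading scheme (a minimal sketch, not the exact benchmark protocol of the paper), a GROMACS 2018 run on a single-GPU node with both the short-range non-bonded interactions and PME moved to the GPU could be launched as follows; benchmark.tpr stands for a prepared run input file, and the thread counts are placeholders that depend on the node:

    # GROMACS 2018+: offload short-range non-bonded and PME work to the GPU
    gmx mdrun -s benchmark.tpr -nb gpu -pme gpu -ntmpi 1 -ntomp 8 -nsteps 10000 -resethway

The -resethway flag resets the performance counters halfway through the run, so that the reported ns/day is not skewed by the initial load-balancing phase.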

GROMACS 2018 Developer / Power User Workshop
Please follow this link to the external workshop website for more information.
GROMACS 2016 Developer Workshop
On May 19–20, a group of more than 50 GROMACS developers and users gathered at the Max Planck Institute for Biophysical Chemistry in Göttingen to discuss various aspects of software development and future directions for GROMACS.

Past contributions that enhance the parallel scaling include:

  • Parallelization of the Essential Dynamics + Flooding module, making use of the new domain decomposition features of GROMACS 4
  • A patch [GPL license] for GROMACS 3.3.1 that optimizes the all-to-all communication for better PME performance on Ethernet clusters
  • Multiple-Process, Multiple-Data (MPMD) PME: This type of PME treatment is available in GROMACS from version 4 on. PME efficiency is enhanced by dedicating a subset of the processes to the calculation of the reciprocal-space part of the Ewald sum (see the command sketch after this list)
  • For GROMACS 4.0.7 there is a GPL tool, g_tune_pme, that finds the settings for optimal PME performance on a given number of processors [download g_tune_pme] (unpack with tar -xvzf); a usage example is sketched after this list. From version 4.5 on, g_tune_pme is part of the official GROMACS package. There is also a poster describing g_tune_pme. [PDF]
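The following command sketch illustrates the last two items; the MPI launcher, the mdrun_mpi binary name, the rank counts, and topol.tpr are placeholders that depend on the installation and the simulated system:

    # MPMD PME (GROMACS >= 4): dedicate 4 of 16 MPI ranks to the reciprocal-space part
    mpirun -np 16 mdrun_mpi -s topol.tpr -npme 4

    # let g_tune_pme determine the optimal number of PME-only ranks automatically
    g_tune_pme -np 16 -s topol.tpr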

Publications

Kutzner, C.; Kniep, C.; Cherian, A.; Nordstrom, L.; Grubmüller, H.; de Groot, B. L.; Gapsys, V.: GROMACS in the cloud: A global supercomputer to speed up alchemical drug design. Journal of Chemical Information and Modeling 62 (7), pp. 1691 - 1711 (2022)
Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, B. L.; Grubmüller, H.: More bang for your buck: Improved use of GPU nodes for GROMACS 2018. Journal of Computational Chemistry 40 (27), pp. 2418 - 2431 (2019)
Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, B.; Grubmüller, H.: Best bang for your buck: GPU nodes for GROMACS biomolecular simulations. Journal of Computational Chemistry 36 (26), pp. 1990 - 2008 (2015)
Páll, S.; Abraham, M. J.; Kutzner, C.; Hess, B.; Lindahl, E.: Tackling exascale software challenges in molecular dynamics simulations with GROMACS. In: Solving Software Challenges for Exascale: International Conference on Exascale Applications and Software, EASC 2014, Stockholm, Sweden, April 2-3, 2014, Revised Selected Papers, pp. 3 - 27 (Eds. Markidis, S.; Laure, E.). Springer, Cham (2015)
Kutzner, C.; Apostolov, R.; Hess, B.; Grubmüller, H.: Scaling of the GROMACS 4.6 molecular dynamics code on SuperMUC. In: Parallel Computing: Accelerating Computational Science and Engineering (CSE), pp. 722 - 730 (Eds. Bader, M.; Bode, A.; Bungartz, H. J.). IOS Press, Amsterdam (2014)
Hess, B.; Kutzner, C.; van der Spoel, D.; Lindahl, E.: GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. Journal of Chemical Theory and Computation 4 (3), pp. 435 - 447 (2008)
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.; de Groot, B. L.; Grubmüller, H.: Speeding up parallel GROMACS on high-latency networks. Journal of Computational Chemistry 28 (12), pp. 2075 - 2084 (2007)
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.; de Groot, B. L.; Grubmüller, H.: Improved GROMACS scaling on ethernet switched clusters. In: Recent advances in parallel virtual machine and message passing interface. 13th European PVM/MPI User's Group meeting, Bonn, Germany, September 17-20, 2006, pp. 404 - 405 (Eds. Mohr, B.; Larsson, T. J.; Worringen, J.; Dongarra, J.). Springer, Berlin (2006)