
GROMACS (with ML)

4.6 (15 votes)

Tags

Molecular Dynamics, GMX_ML, Machine Learning Potentials, HPC, Active Learning

Integrations

  • LibTorch
  • DeepMD-kit
  • TensorFlow C++ API
  • NVIDIA CUDA
  • MPI

Pricing Details

  • Distributed under the GNU Lesser General Public License (LGPL) v2.1 or later.
  • No licensing fees for ML-interface modules.

Features

  • Native GMX_ML NNP interface
  • DeepMD-kit Active Learning integration
  • Hybrid ML/MM force evaluation
  • Path Integral MD (PIMD) acceleration
  • CUDA Graph-optimized ML inference

Description

GROMACS 2026: NNP Interface & Hybrid ML Dynamics Review

The GROMACS 2026 release cycle marks a transition from experimental offloading to a production-ready GMX_ML NNP interface. This framework allows direct embedding of Neural Network Potentials (NNPs) into the MD integration step, supporting architectures such as DeepPot-SE, Allegro, and MACE. The implementation leverages the LibTorch and TensorFlow C++ APIs to treat ML models as native force providers.
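
As an illustration of the LibTorch pathway, the sketch below wraps a toy potential and exports it to TorchScript so a C++ host can load it via torch::jit::load. The positions-in, energy-and-forces-out contract is an assumption for illustration, not the documented GMX_ML model specification.

    # Hedged sketch: export an NNP to TorchScript for a LibTorch-based host.
    # The I/O contract (positions -> energy, forces) is assumed here, not
    # taken from the official GMX_ML specification.
    import torch

    class NNPWrapper(torch.nn.Module):
        def __init__(self, model: torch.nn.Module):
            super().__init__()
            self.model = model  # trained potential (toy stand-in below)

        def forward(self, positions: torch.Tensor):
            positions.requires_grad_(True)
            energy = self.model(positions).sum()
            # Forces are the negative gradient of the potential energy.
            grads = torch.autograd.grad([energy], [positions])
            g = grads[0]
            assert g is not None
            return energy, -g

    toy = torch.nn.Sequential(
        torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    scripted = torch.jit.script(NNPWrapper(toy))
    scripted.save("nnp_model.pt")  # loadable from C++ with torch::jit::load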

Integration & Active Learning Workflows

GROMACS has standardized the Active Learning (AL) loop, particularly through tight integration with the DeepMD-kit ecosystem. This enables closed-loop model refinement in which predictive deviations trigger autonomous data collection and retraining cycles; a minimal deviation-based selection sketch follows the list below.

  • Hybrid Force Mixing: Supports the concurrent application of classical force fields and ML potentials (ML/MM), facilitating multi-scale modeling with energy conservation guarantees.
  • PIMD Support: Path Integral Molecular Dynamics is now accelerated via ML potentials, allowing for the inclusion of nuclear quantum effects at a fraction of the traditional cost.
  • Inference Latency: Benchmarks on NVIDIA H100/B200 hardware demonstrate that the GMX_ML interface adds less than 5% overhead to the total step time for optimized tensor models.
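
The frame-selection step of the AL loop can be sketched as follows, assuming DeepMD-kit's Python inference API (deepmd.infer.DeepPot); model file names, thresholds, and the selection window are illustrative placeholders, and a production DP-GEN-style workflow adds the labeling and retraining stages.

    # Hedged sketch of deviation-based frame selection for active learning.
    import numpy as np
    from deepmd.infer import DeepPot  # DeepMD-kit Python inference API

    models = [DeepPot(f) for f in ("model_0.pb", "model_1.pb", "model_2.pb")]

    def max_force_deviation(coords, cell, atom_types):
        """Max over atoms of the ensemble std. dev. of predicted forces."""
        forces = []
        for dp in models:
            _, f, _ = dp.eval(coords, cell, atom_types)
            forces.append(np.asarray(f).reshape(-1, 3))
        stacked = np.stack(forces)                  # (n_models, n_atoms, 3)
        dev = stacked.std(axis=0)                   # per-atom component std
        return float(np.linalg.norm(dev, axis=1).max())

    # DP-GEN-style selection window (values illustrative, in eV/Angstrom):
    LO, HI = 0.05, 0.15
    # A frame with LO < max_force_deviation(...) < HI is a "candidate":
    # it is sent for ab initio labeling, then the ensemble is retrained.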


Numerical Integrity & Performance Scaling

The GROMACS 2026 core maintains strict adherence to physical constraints while scaling across thousands of GPU nodes using a unified MPI/OpenMP domain decomposition strategy.

  • Virial Stress Accuracy: Accurate calculation of the virial tensor within the NNP interface enables stable NPT ensemble simulations, though accuracy remains dependent on the quality of the model's derivatives.
  • CUDA Graph Optimization: Implementation of CUDA Graphs for ML inference calls reduces CPU-side launch overhead, a critical factor for small and medium-sized systems.
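
The GROMACS implementation itself is internal C++/CUDA, but the capture-and-replay technique can be illustrated with PyTorch's public CUDA Graphs API; the toy model and tensor shapes below are placeholders.

    # Python analogue of CUDA Graph-captured ML inference (illustration
    # only; not the GROMACS-internal implementation).
    import torch

    assert torch.cuda.is_available()
    model = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.SiLU(),
        torch.nn.Linear(64, 1)).cuda().eval()

    static_input = torch.zeros(1024, 3, device="cuda")  # e.g. positions

    # Warm-up on a side stream is required before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s), torch.no_grad():
        for _ in range(3):
            static_output = model(static_input)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one inference call; replays skip CPU-side launch overhead.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph), torch.no_grad():
        static_output = model(static_input)

    # Per MD step: refresh the static input in place, then replay.
    static_input.copy_(torch.randn(1024, 3, device="cuda"))
    graph.replay()  # static_output now holds the new predictions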

Evaluation Guidance

Technical evaluators should prioritize validation of the virial stress tensor, as this is the primary failure point for ML-driven NPT simulations. Enabling CUDA Graph optimizations is recommended to mitigate kernel launch latency. For long trajectories, monitoring energy drift is essential to verify the numerical stability of the specific NNP architecture deployed.
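
A small sketch of the drift check follows, assuming the total-energy series has already been extracted (e.g. with gmx energy) into NumPy arrays; the threshold quoted in the comment is common practice, not a GROMACS-documented limit.

    # Hedged sketch of an NVE energy-drift check on an extracted series.
    import numpy as np

    def energy_drift(time_ps: np.ndarray, etot_kj: np.ndarray,
                     n_atoms: int) -> float:
        """Drift in kJ/mol per atom per ns (least-squares slope)."""
        slope_per_ps = np.polyfit(time_ps, etot_kj, 1)[0]
        return slope_per_ps * 1000.0 / n_atoms  # ps -> ns, per atom

    # Common practice (not a GROMACS-documented threshold): drifts well
    # below ~0.01 kJ/mol/atom/ns indicate a numerically stable NNP setup.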

Release History

2025.1 "Universal NNPot" (2025-05)

New portable NNPot format. Enhanced support for complex reaction mechanisms.

2023.1 "Smart Sampling" (2023-06)

Active learning strategies for potential training. Enhanced visualization for ML data.

2019.2 "ML Genesis" (2019-07)

First experimental support for Neural Network Potentials (NNPot).

2016.3 "GPU Boost" (2016-12)

Native CUDA support for non-bonded interactions. Shift to annual release cycle.

Tool Pros and Cons

Pros

  • High performance
  • NNPot acceleration
  • Near ab initio accuracy
  • Ab initio training
  • Versatile modeling

Cons

  • ML expertise needed
  • Slow training times
  • Potential quality critical