PME only for LJ and not Coulomb does not work
Summary
Using a long-range (PME) treatment only for LJ and not for Coulomb fails in multiple different ways.
GROMACS version
2021.6-dev and 2022-rc2
Steps to reproduce
Run gmx mdrun with any build config, using either the default command line or non-default options (e.g. turning off the use of GPUs).
Input file: coul-RF_ljpme.tpr
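For context, the file name suggests reaction-field electrostatics combined with LJ-PME. Below is a minimal, hypothetical mdp sketch of such a setup (the actual settings and cutoffs in coul-RF_ljpme.tpr may differ):

```
; hypothetical mdp fragment: reaction-field Coulomb + LJ-PME
cutoff-scheme    = Verlet
coulombtype      = Reaction-Field   ; no long-range (PME) electrostatics
rcoulomb         = 0.9
vdwtype          = PME              ; LJ-PME: PME used for Lennard-Jones only
rvdw             = 0.9
lj-pme-comb-rule = Geometric
```

A comparable tpr can then be produced with gmx grompp -f <mdp> -c <structure> -p <topology> -o coul-RF_ljpme.tpr.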
What is the current bug behavior?
The run aborts with errors:
- double-precision CPU-only build:
$ /nethome/pszilard-projects/gromacs/gromacs-21/build_AVX512_gcc102_double/bin/gmx_d mdrun -quiet -v -s coul-RF_ljpme.tpr -nsteps 0 -ntmpi 24 -npme 1
Back Off! I just backed up md.log to ./#md.log.10#
Reading file coul-RF_ljpme.tpr, VERSION 2021.5 (single precision)
Overriding nsteps with value passed on the command line: 0 steps, 0 ps
Changing nstlist from 10 to 20, rlist from 0.966 to 1.069
Using 24 MPI threads
Using 1 OpenMP thread per tMPI thread
-------------------------------------------------------
Program: gmx mdrun, version 2021.6-dev-20220117-b3ef828c4b
Source file: src/gromacs/ewald/pme.cpp (line 892)
Function: gmx_pme_t* gmx_pme_init(const t_commrec*, const NumPmeDomains&, const t_inputrec*, gmx_bool, gmx_bool, gmx_bool, real, real, int, PmeRunMode, PmeGpu*, const DeviceContext*, const DeviceStream*, const PmeGpuProgram*, const gmx::MDLogger&)
MPI rank: 14 (out of 24)
Feature not implemented:
PME GPU does not support: PME decomposition; Lennard-Jones PME; double
precision; non-GPU build of GROMACS.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
- mixed-precision GPU build with default arguments:
$ gmx mdrun -quiet -ntmpi 1 -ntomp 32 -nsteps 10000 -s coul-RF_ljpme.tpr
Back Off! I just backed up md.log to ./#md.log.16#
Reading file coul-RF_ljpme.tpr, VERSION 2021.5 (single precision)
Note: file tpx version 122, software tpx version 127
Overriding nsteps with value passed on the command line: 10000 steps, 20 ps
Changing nstlist from 10 to 25, rlist from 0.966 to 1.119
1 GPU selected for this run.
Mapping of GPU IDs to the 1 GPU task in the 1 rank on this node:
PP:0
PP tasks will do (non-perturbed) short-ranged and most bonded interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 1 MPI thread
Using 32 OpenMP threads
NOTE: The number of threads is not equal to the number of (logical) cores
and the -pin option is set to auto: will not pin threads to cores.
This can lead to significant performance degradation.
Consider using -pin on (and -pinoffset in case you run multiple jobs).
-------------------------------------------------------
Program: gmx mdrun, version 2022-rc1-dev-20220127-ff39dbf588-dirty
Source file: src/gromacs/ewald/pme.cpp (line 991)
Function: gmx_pme_t* gmx_pme_init(const t_commrec*, const NumPmeDomains&, const t_inputrec*, const real (*)[3], real, gmx_bool, gmx_bool, gmx_bool, real, real, int, PmeRunMode, PmeGpu*, const DeviceContext*, const DeviceStream*, const PmeGpuProgram*, const gmx::MDLogger&)
Feature not implemented:
PME GPU does not support:
Lennard-Jones PME.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
What did you expect the correct behavior to be?
The runs should not exit with an error. GPU runs should fall back to CPU-only mode, assuming this is a supported configuration (otherwise they should fail with a relevant error message).
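For illustration, a hypothetical C++ sketch of the kind of fallback decision expected here; none of these names are taken from the GROMACS sources, and the real task-assignment logic is considerably more involved:

```c++
#include <stdexcept>

// Hypothetical sketch of the expected behavior: when the input requests
// LJ-PME, which the PME GPU path does not support, PME should be assigned
// to the CPU instead of aborting the run -- unless the user explicitly
// forced PME onto the GPU, in which case a clear error message is the
// right outcome.
enum class PmeTarget { Cpu, Gpu };

struct PmeRequest
{
    bool usesLjPme;        // vdwtype = PME in the mdp input
    bool userForcedGpuPme; // e.g. mdrun -pme gpu on the command line
};

PmeTarget choosePmeTarget(const PmeRequest& request)
{
    if (request.usesLjPme)
    {
        if (request.userForcedGpuPme)
        {
            throw std::runtime_error(
                    "LJ-PME is not supported on GPUs; remove -pme gpu or switch off LJ-PME");
        }
        return PmeTarget::Cpu; // graceful fallback instead of a fatal error
    }
    return PmeTarget::Gpu;
}
```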
Edited by Szilárd Páll