Segmentation fault during energy minimization in GROMACS 2023.2
Summary
Segmentation fault encountered during energy minimization. GROMACS was built locally and passes all tests.
GROMACS version
:-) GROMACS - gmx, 2023.2 (-:
Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /home/gusten/Documents/cnfbuilder/MDConf/mdp/simulations/NA-CL-0.01/tempo-L8
Command line:
gmx -quiet --version
GROMACS version: 2023.2
Precision: mixed
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: disabled
SIMD instructions: AVX2_256
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128
GPU FFT library: none
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /usr/bin/cc GNU 13.2.1
C compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx2 -mfma -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /usr/bin/c++ GNU 13.2.1
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx2 -mfma -Wno-missing-field-initializers -Wno-cast-function-type-strict SHELL:-fopenmp -O3 -DNDEBUG
BLAS library: External - detected on the system
LAPACK library: External - detected on the system
Steps to reproduce
Running the attached tpr file with gmx mdrun -s em0.tpr results in a segmentation fault.
The tpr was generated with gmx grompp -f em0.mdp -c neu.gro -o em0.tpr -n groups.ndx -maxwarn 1, using the attached files in ../../../forcefield.
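For convenience, here is the reproduction as one copy-paste sequence. It assumes the attached em0.mdp, neu.gro and groups.ndx are in the current directory and that the force-field files are reachable from ../../../forcefield, as described above:

gmx grompp -f em0.mdp -c neu.gro -o em0.tpr -n groups.ndx -maxwarn 1
gmx mdrun -s em0.tpr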
What is the current bug behavior?
On my machine I get the output:
Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /home/gusten/Documents/cnfbuilder/MDConf/mdp/simulations/NA-CL-0.01/tempo-L8
Command line:
gmx mdrun -s em0.tpr
Reading file em0.tpr, VERSION 2023.2 (single precision)
Update groups can not be used for this system because there are three or more consecutively coupled constraints
NOTE: Periodic molecules are present in this system. Because of this, the domain decomposition algorithm cannot easily determine the minimum cell size that it requires for treating bonded interactions. Instead, domain decomposition will assume that half the non-bonded cut-off will be a suitable lower bound.
Using 80 MPI threads
Using 1 OpenMP thread per tMPI thread
There were 2 inconsistent shifts. Check your topology
[the line above is printed 18 times in total]
Steepest Descents:
Tolerance (Fmax) = 1.00000e+01
Number of steps = 10000
Segmentation fault (core dumped)
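If it helps with triage: the crash above happens with the default thread setup (80 thread-MPI ranks). The same run can also be forced onto a single thread-MPI rank and a single OpenMP thread with the standard mdrun options, which should show whether domain decomposition is involved (this is only a sketch of the command, not a result I have verified):

gmx mdrun -s em0.tpr -ntmpi 1 -ntomp 1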
What did you expect the correct behavior to be?
I am not sure what the inconsistent-shifts warnings mean, but I certainly would not expect a segmentation fault. Unfortunately, I have no idea what could be causing it.
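In case the topology turns out to be relevant to those warnings, the run input can be dumped to readable text with gmx dump for inspection (the output file name below is just an example):

gmx dump -s em0.tpr > em0_dump.txt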