gmx distance gives incorrect averages
Summary
Running gmx distance with GROMACS 2022.1 gives incorrect results: the pair distances at all timesteps, and the average distances, are the same for several different atom pairs. GROMACS 2020.4 gives the correct results.
GROMACS version
:-) GROMACS - gmx, 2022.1 (-:
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/peter/MALOAM/OPLSAA/BugReport
Command line:
  gmx -quiet --version

GROMACS version:    2022.1
Precision:          mixed
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support:        CUDA
SIMD instructions:  AVX2_256
CPU FFT library:    fftw-3.3.8-sse2-avx-avx2-avx2_128
GPU FFT library:    cuFFT
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /usr/bin/cc GNU 10.3.0
C compiler flags:   -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler:       /usr/bin/c++ GNU 10.3.0
C++ compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler:      /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver; Copyright (c) 2005-2021 NVIDIA Corporation; Built on Fri_Dec_17_18:16:03_PST_2021; Cuda compilation tools, release 11.6, V11.6.55; Build cuda_11.6.r11.6/compiler.30794723_0
CUDA compiler flags: -std=c++17;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-Wno-deprecated-gpu-targets;-gencode;arch=compute_53,code=sm_53;-gencode;arch=compute_80,code=sm_80;-use_fast_math;-D_FORCE_INLINES;-mavx2 -mfma -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver:        11.60
CUDA runtime:       11.60
Steps to reproduce
The bug can be reproduced by running:

  gmx distance -f "$NAME"_nvt.trr -s "$NAME"_nvt.tpr -n "$NAME"_dist.ndx -oav -oall -oallstat -select -sf sf.dat &> avgdist.out &
With GROMACS 2022.1, this command gives incorrect results, as shown in the attached files dist.xvg, diststat.xvg, and avgdist.out: the program reports the same distances for different pairs of atoms. With GROMACS 2020.4, the same command and input files give the correct results, as shown in dist.xvg.2020.4, diststat.xvg.2020.4, and avgdist.out.2020.4.
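The discrepancy can be seen directly by running the same analysis with both installations and comparing the per-pair output. A rough sketch (the paths to the two gmx binaries and the output file names are assumptions and should be adjusted to the local setup):

  /usr/local/gromacs-2020.4/bin/gmx distance -f "$NAME"_nvt.trr -s "$NAME"_nvt.tpr -n "$NAME"_dist.ndx -oall dist_2020.xvg -select -sf sf.dat
  /usr/local/gromacs/bin/gmx distance -f "$NAME"_nvt.trr -s "$NAME"_nvt.tpr -n "$NAME"_dist.ndx -oall dist_2022.xvg -select -sf sf.dat
  diff dist_2020.xvg dist_2022.xvg

In the 2022.1 output the distance columns for the different pairs are identical, while the 2020.4 output shows a distinct trace for each pair.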
What is the current bug behavior?
With GROMACS 2022.1, gmx distance reports the same pair distance at every timestep, and the same average distance, for several different atom pairs.
What did you expect the correct behavior to be?
Each atom pair should have its own distance trace and its own average, as GROMACS 2020.4 produces for the identical input files.