Wrong CUDA version in NVIDIA_REQUIRE_CUDA in cuda/10.0/base/Dockerfile
The last update changed the environment variable NVIDIA_REQUIRE_CUDA so that it now requires cuda>=10.1 rather than cuda>=10.0. I think this is what causes the error below.
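If that is the case, the fix would presumably be to set the requirement back to cuda>=10.0 in the 16.04 Dockerfile. A sketch of the expected line (hypothetical; the driver constraints are copied from the requirement string visible in the error output below):

```dockerfile
# Hypothetical corrected line for cuda/10.0/base/Dockerfile (ubuntu16.04).
# The CUDA constraint should match the image's CUDA version (10.0), not 10.1:
ENV NVIDIA_REQUIRE_CUDA "cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411"
```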
This currently seems to affect only the 16.04 images.
The following test works:
docker run --rm -it --runtime=nvidia nvidia/cuda:10.0-base-ubuntu18.04 nvidia-smi
while the following fails:
docker run --rm -it --runtime=nvidia nvidia/cuda:10.0-base-ubuntu16.04 nvidia-smi
My host driver version is 410.79.
With the *-ubuntu16.04 image I get the following error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:385: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=31632 /var/lib/docker/overlay2/7a609a86e508b802040692163f273a112aceed339ebcfbdfddbc8fdd103bc854/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
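To see why this requirement fails on a driver that only supports CUDA 10.0, here is a minimal sketch of how such a requirement string is evaluated, assuming (per the nvidia-container-runtime documentation) that space-separated expressions are ORed and comma-separated constraints within an expression are ANDed. The function names and the naive float-based version comparison are illustrative only, not the actual nvidia-container-cli code:

```python
import re

def satisfies(constraint: str, props: dict) -> bool:
    """Check one constraint like 'cuda>=10.1' or 'brand=tesla'."""
    m = re.match(r"(\w+)(>=|<=|<|>|=)(.+)", constraint)
    key, op, value = m.groups()
    have = props.get(key)
    if have is None:
        return False
    if op == "=":
        return str(have) == value
    # Simplistic numeric comparison for versions like 410.79 vs 410;
    # good enough for a sketch, not for real version ordering.
    a, b = float(have), float(value)
    return {"<": a < b, ">": a > b, "<=": a <= b, ">=": a >= b}[op]

def requirement_met(require: str, props: dict) -> bool:
    # Any one space-separated expression may match; all of its
    # comma-separated constraints must hold.
    return any(
        all(satisfies(c, props) for c in expr.split(","))
        for expr in require.split()
    )

require = ("cuda>=10.1 "
           "brand=tesla,driver>=384,driver<385 "
           "brand=tesla,driver>=410,driver<411")

# A non-Tesla card with driver 410.79 (which supports CUDA 10.0):
# cuda>=10.1 fails, and both remaining expressions require brand=tesla,
# hence the "unsatisfied condition: brand = tesla" error.
host = {"cuda": "10.0", "driver": "410.79", "brand": "geforce"}
print(requirement_met(require, host))

# With the first term corrected to cuda>=10.0, the check would pass:
print(requirement_met("cuda>=10.0", host))
```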