Job not finished with Kubernetes executor when a sub shell is still running
We recently switched our runner from a VM using the Docker executor to Kubernetes. Our jobs and scripts ran fine on the Docker VM runner, but after switching to Kubernetes we noticed that jobs were no longer finishing.
Essentially, the job script would complete, but the job itself would hang until it ultimately timed out and failed.
While debugging this, we noticed that it only happened on jobs that launched a long-running command in a subshell using `command &`. If we added a step to kill that command at the end of the script, everything worked as expected.
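For reference, here is a minimal sketch of the pattern, assuming a `.gitlab-ci.yml` job; `long-running-daemon` and `run-tests` are hypothetical placeholders for our actual commands:

```yaml
test-job:
  script:
    - long-running-daemon &      # backgrounded process outlives the script
    - BG_PID=$!                  # remember its PID
    - run-tests                  # main work of the job
    - kill "$BG_PID" || true     # workaround: kill the subshell so the job can finish
```

Our working assumption is that the backgrounded process inherits the job shell's stdout/stderr, and the Kubernetes executor keeps waiting on those streams, whereas the Docker executor apparently does not. If that is right, redirecting the background command's output away from the job log (e.g. `long-running-daemon > /dev/null 2>&1 &`) might also work, though we have only verified the explicit `kill`.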
What is the reason for the difference in behavior between the executors? Should they be standardized on one behavior? The Docker VM behavior seems the most friendly.