Kubernetes executor fails in group projects
Summary
The Kubernetes executor fails to launch pods when the runner is used in a project that belongs to a group: every job fails while trying to start its pod.
Moving the project to a user's namespace and starting the pipeline works flawlessly.
It's worth noting that the Docker executor does not have this problem.
Steps to reproduce
1. Set up GitLab CI with the Kubernetes executor in a project under a personal namespace.
2. Run the pipeline and verify that it successfully launches Kubernetes pods.
3. Move the project to a group's namespace and run the pipeline again; it fails.
4. Move the project back to the user's namespace; the pipeline works again.
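For reference, the runner registration used here looks roughly like the following minimal sketch (host and token are placeholders, not values from this setup; the namespace matches the one in the job log):

```toml
# /etc/gitlab-runner/config.toml -- minimal sketch, placeholder values
concurrent = 1

[[runners]]
  name = "Kubernetes Runner"
  url = "https://gitlab.example.com/"   # placeholder GitLab host
  token = "REDACTED"                    # placeholder registration token
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"                # matches the namespace in the job log
```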
Example .gitlab-ci.yml:
stages:
  - test

ansible check:
  image: williamyeh/ansible:alpine3
  stage: test
  script:
    - echo "Hello"
Actual behavior
The pods go straight from the initial Pending state to a Failed state.
Expected behavior
Pods should successfully start.
Relevant logs and/or screenshots
Running with gitlab-ci-multi-runner 9.3.0 (3df822b)
on Kubernetes Runner (d5f1890a)
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image williamyeh/ansible:alpine3 ...
Waiting for pod gitlab/runner-d5f1890a-project-49-concurrent-0d2r0w to be running, status is Pending
ERROR: Job failed (system failure): pod status is failed
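To narrow down why the pod fails, the following kubectl commands (run against the cluster, pod name taken from the log above) may surface the underlying error; the `build` container name is an assumption based on how the Kubernetes executor typically names its containers:

```shell
# Inspect the failed pod's events and container statuses
kubectl -n gitlab describe pod runner-d5f1890a-project-49-concurrent-0d2r0w

# Recent events in the runner namespace, newest last
kubectl -n gitlab get events --sort-by=.metadata.creationTimestamp

# Logs from the build container, if it got far enough to start
kubectl -n gitlab logs runner-d5f1890a-project-49-concurrent-0d2r0w -c build
```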
Environment description
Self-hosted GitLab CE, using the Kubernetes executor on a Kubernetes cluster.