Kubernetes executor uses the default service account and does not have cluster-wide access
I'm running GitLab Runner, version 11.5.1, in our Kubernetes cluster (1.11.5). I installed it using the Helm chart, version 0.1.42. My jobs are able to build our Docker images, but when I attempt to deploy them using Helm from within the runner, the pod is unable to access resources outside of the gitlab
namespace I installed the runner into, even though I set clusterWideAccess
to true. I could give cluster access to the default service account in that namespace, but I'd rather not, since that would then be managed outside of Helm and I can't easily replicate it. Note that this is running in our dev environment, so the runner having cluster access isn't an issue for us.
When the deploy job runs, this is the output we get:
Running with gitlab-runner 11.5.1 (7f00c780)
on gitlab-runner-gitlab-runner-66df89f6f7-h7zsh MsnKU7Nr
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image navistone/helm-kubectl:2.11.0_1.10.3 ...
Waiting for pod gitlab/runner-msnku7nr-project-9423279-concurrent-05sxqn to be running, status is Pending
Running on runner-msnku7nr-project-9423279-concurrent-05sxqn via gitlab-runner-gitlab-runner-66df89f6f7-h7zsh...
Cloning repository...
Cloning into '/navistone/artemis/artemis'...
Checking out 7034f7f4 as develop...
Skipping Git submodules setup
$ echo ${kube_config_dev} | base64 -d > ${KUBECONFIG}
$ helm init --client-only
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
$ export NAME=${RELEASE_NAME}-dev
$ export VALUES=chart/values-dev.yaml
$ export TAG=develop
$ export NAMESPACE=${DEVELOPMENT_NAMESPACE}
$ export DEPLOYS=$(helm ls | grep ${NAME} | wc -l)
Error: pods is forbidden: User "system:serviceaccount:gitlab:default" cannot list pods in the namespace "kube-system"
$ if [ ${DEPLOYS} -eq 0 ]; then helm install --set image.tag=${TAG} --name=${NAME} -f ${VALUES} helm --namespace=${NAMESPACE}; else helm upgrade ${NAME} --set image.tag=${TAG} -f ${VALUES} helm --namespace=${NAMESPACE}; fi
Error: pods is forbidden: User "system:serviceaccount:gitlab:default" cannot list pods in the namespace "kube-system"
ERROR: Job failed: command terminated with exit code 1
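To confirm this isn't something specific to the job environment, the same denial should be reproducible from outside CI with an impersonated permission check, something like:

kubectl auth can-i list pods --namespace kube-system --as=system:serviceaccount:gitlab:default
# prints "no" if the service account really lacks the permission, matching the error above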
Here are the values I used for Helm to deploy gitlab-runner:
image: gitlab/gitlab-runner:alpine-v11.5.1
imagePullPolicy: IfNotPresent

init:
  image: busybox
  tag: latest

gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "xxxxxxxxxxxxxxx"
unregisterRunners: true
concurrent: 10
checkInterval: 30

rbac:
  create: true
  clusterWideAccess: true

metrics:
  enabled: true

runners:
  image: navistone/docker-awscli
  locked: false
  tags: "develop"
  privileged: true
  cache: {}
  builds: {}
  services: {}
  helpers: {}

resources:
  limits:
    memory: 256Mi
    cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
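In case it matters, this is roughly how I'd check which RBAC objects the chart actually created for the release (the grep pattern is a guess based on the release name):

kubectl get serviceaccounts -n gitlab
kubectl get clusterrolebindings | grep gitlab-runner
kubectl get rolebindings -n gitlab | grep gitlab-runner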
And here is our gitlab-ci.yml for building and deploying:
image: navistone/docker-awscli

services:
  - docker:dind

variables:
  IMAGE_NAME: xxxxxx.dkr.ecr.us-east-1.amazonaws.com/artemis
  IMAGE_NAME_DEV: xxxxxy.dkr.ecr.us-east-1.amazonaws.com/artemis
  KUBECONFIG: /etc/kube_config
  PRODUCTION_NAMESPACE: navistone
  DEVELOPMENT_NAMESPACE: navistone-dev
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2
  RELEASE_NAME: artemis
  SRC_DIR: src
  DOCKER_FILE: Artemis.Api/Dockerfile
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: gitlab-runner-gitlab-runner

stages:
  - build
  - deploy

build-develop:
  stage: build
  tags:
    - develop
  script:
    - cd $SRC_DIR
    - export TAG=develop
    - docker build -t $IMAGE_NAME_DEV:${TAG} -f ${DOCKER_FILE} .
    - eval `aws ecr get-login --no-include-email --region us-east-1`
    - docker push $IMAGE_NAME_DEV:${TAG}
  only:
    - develop

deploy-develop:
  stage: deploy
  image: navistone/helm-kubectl:2.11.0_1.10.3
  tags:
    - develop
  before_script:
    - echo ${kube_config_dev} | base64 -d > ${KUBECONFIG}
    - helm init --client-only
  script:
    - export NAME=${RELEASE_NAME}-dev
    - export VALUES=chart/values-dev.yaml
    - export TAG=develop
    - export NAMESPACE=${DEVELOPMENT_NAMESPACE}
    - export DEPLOYS=$(helm ls | grep ${NAME} | wc -l)
    - if [ ${DEPLOYS} -eq 0 ]; then helm install --set image.tag=${TAG} --name=${NAME} -f ${VALUES} helm --namespace=${NAMESPACE}; else helm upgrade ${NAME} --set image.tag=${TAG} -f ${VALUES} helm --namespace=${NAMESPACE}; fi
  only:
    - develop
I tried setting the KUBERNETES_SERVICE_ACCOUNT_OVERWRITE variable to the service account created by Helm, but it doesn't seem to make a difference.
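If I'm reading the Kubernetes executor docs correctly, that variable is validated against service_account_overwrite_allowed in the runner's config.toml, and an empty value disables the overwrite feature entirely, so I suspect something like the following would have to be present (a sketch only; the pattern is just an example and is not what is currently configured):

[runners.kubernetes]
  # hypothetical: allow jobs to switch to service accounts matching this pattern
  # via KUBERNETES_SERVICE_ACCOUNT_OVERWRITE
  service_account_overwrite_allowed = "gitlab-runner-.*"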
Not sure if this will help, but here is the config.toml from inside the GitLab Runner pod:
listen_address = "[::]:9252"
concurrent = 10
check_interval = 30
log_level = "info"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab-runner-gitlab-runner-66df89f6f7-h7zsh"
  url = "https://gitlab.com/"
  token = "xxxxxxxx"
  executor = "kubernetes"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "navistone/docker-awscli"
    namespace = "gitlab"
    namespace_overwrite_allowed = ""
    privileged = true
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    [runners.kubernetes.volumes]
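If it helps to verify which identity a build pod is actually running as, the service account can be checked while a job is in flight (pod name taken from the log above):

kubectl get pod runner-msnku7nr-project-9423279-concurrent-05sxqn -n gitlab -o jsonpath='{.spec.serviceAccountName}'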