Kubernetes integration broke for existing projects without external intervention
Summary
I have several projects that have had GKE integration for at least 6 months, plus a scheduled run of the Staging environment every Sunday. Yesterday everything ran smoothly across all projects; today it doesn't. Neither the Kubernetes environment nor the CI files have changed since then. Debugging, I narrowed it down to the steps below.
Steps to reproduce
- Create a new project
- Add an existing Kubernetes Cluster without RBAC integration
- Put the following script in `.gitlab-ci.yml`:
```yaml
image: asimonf/docker-gitversion-compose:latest

variables:
  KUBERNETES_VERSION: 1.10.3
  HELM_VERSION: 2.10.0

staging:
  stage: deploy
  script:
    - apk add -U openssl curl tar gzip bash ca-certificates git
    - wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
    - wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
    - apk add glibc-2.28-r0.apk
    - rm glibc-2.28-r0.apk
    - curl "https://kubernetes-helm.storage.googleapis.com/helm-v${HELM_VERSION}-linux-amd64.tar.gz" | tar zx
    - mv linux-amd64/helm /usr/bin/
    - helm version --client
    - curl -L -o /usr/bin/kubectl "https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
    - chmod +x /usr/bin/kubectl
    - kubectl version --client
    - kubectl describe namespace
  environment:
    name: staging
    url: http://kubernetes-test.fake-domain.com
  only:
    refs:
      - master
    kubernetes: active
```
The final `kubectl describe namespace` command should print the namespace details instead of failing, and on a newly created project it does. Running the same CI script on an existing project that has had the integration working, with the same Kubernetes cluster details, fails.
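For context on how that kubectl call authenticates: the integration is supposed to expose the cluster credentials to the job as deployment variables, and kubectl can be pointed at the cluster with them explicitly. A minimal sketch, assuming the documented `KUBE_URL`, `KUBE_CA_PEM_FILE` and `KUBE_TOKEN` variables are populated (the cluster/user/context names are illustrative):

```yaml
staging_explicit_auth:
  stage: deploy
  environment:
    name: staging
  script:
    # Build a kubeconfig by hand from the integration's deployment variables,
    # instead of relying on whatever config the runner generates.
    - kubectl config set-cluster ci-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
    - kubectl config set-credentials ci-user --token="$KUBE_TOKEN"
    - kubectl config set-context ci --cluster=ci-cluster --user=ci-user
    - kubectl config use-context ci
    - kubectl describe namespace
```

With `KUBE_TOKEN` empty, the `set-credentials` step yields a user with no token, which is consistent with the user/password prompt described below.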
I tried re-entering the Kubernetes integration details, but that didn't help on existing projects: the pipeline still fails there, even though the same pipeline works on new projects.
Example Project
I created an example project, but I'm not sure whether leaving it public would reveal details of the Kubernetes environment.
What is the current bug behavior?
On existing projects that use the same Kubernetes cluster details as above, the pipeline fails. On further inspection of the pipeline runs, the env variable KUBE_TOKEN is empty when I believe it shouldn't be. As a result, the kubectl command above asks for a user and password instead of describing the namespace.
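As a sanity check, a throwaway job along these lines can report which of the integration's variables actually reach the job (the job name is illustrative; the `environment` block is kept because, as far as I know, deployment variables are only injected into jobs that define an environment):

```yaml
check_kube_vars:
  stage: deploy
  environment:
    name: staging
  script:
    # Report presence (not values) of each variable the integration should inject.
    - 'test -n "$KUBE_URL" && echo "KUBE_URL set" || echo "KUBE_URL EMPTY"'
    - 'test -n "$KUBE_TOKEN" && echo "KUBE_TOKEN set" || echo "KUBE_TOKEN EMPTY"'
    - 'test -n "$KUBE_CA_PEM_FILE" && echo "KUBE_CA_PEM_FILE set" || echo "KUBE_CA_PEM_FILE EMPTY"'
    - 'test -n "$KUBE_NAMESPACE" && echo "KUBE_NAMESPACE set" || echo "KUBE_NAMESPACE EMPTY"'
```

Checking presence rather than echoing values avoids leaking the token into job logs.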
What is the expected correct behavior?
The pipeline above should keep working on previously configured projects as long as the Kubernetes integration is set up correctly.
Relevant logs and/or screenshots
Nothing relevant as of right now.
Output of checks
This happens on GitLab.com.
Possible fixes
I tried forcing the token by setting a KUBE_TOKEN variable through the CI/CD variables menu, but that didn't fix it. I'm not sure the empty env variable is the culprit, but it does raise a red flag.
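For anyone attempting the same workaround: one way to obtain a token value to paste into that variable is to read it off the cluster directly (a sketch for Kubernetes versions of this era, where a service account's token is stored in a bound Secret; the `default` namespace and service-account name are placeholders for whatever the integration was configured with):

```bash
# Find the token Secret bound to the service account and decode it (names are placeholders).
SECRET=$(kubectl -n default get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
```

Even with the variable set manually, the pipeline still failed, which suggests the problem may lie in how the integration passes credentials to the job rather than in the token value itself.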