Auto DevOps - Broken cluster state when adding a private cluster that is inaccessible
Summary
If you add a cluster whose master API is not accessible, it saves and looks like it worked, but deployments will never work because these three environment variables never get added, even after rectifying the connection (a quick check is sketched below the list):
- KUBE_TOKEN
- KUBE_NAMESPACE
- KUBECONFIG
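For what it's worth, the quickest way to confirm this from a failing pipeline is to check the job environment directly. This is a minimal sketch in plain shell that could be pasted into the deploy job's script; the variable names are the ones from this report, and on an affected cluster all three come back missing.

```shell
# Minimal check: report which of the Kubernetes deployment variables
# GitLab actually injected into this job (prints names only, never values).
for var in KUBE_TOKEN KUBE_NAMESPACE KUBECONFIG; do
  if printenv "$var" >/dev/null; then
    echo "$var is set"
  else
    echo "$var is MISSING"
  fi
done
```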
Steps to reproduce
- Create a private cluster in GKE and whitelist only your own personal IP address for master access (see the gcloud sketch after this list)
- Add that cluster to GitLab
- Try to install Helm (this fails with an error)
- Remove the IP whitelist (so GitLab can now connect) and install Helm
- Try to deploy any Auto DevOps project; the deploy job fails with a kubectl error that it cannot connect to localhost:8080 (because the kubeconfig is empty)
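For steps 1 and 4, something along these lines reproduces the setup (cluster name, zone, CIDR, and IP are placeholders, and the exact flags may vary with your gcloud version):

```shell
# Sketch: create a private GKE cluster whose master API only accepts
# traffic from a single personal IP (placeholder values throughout).
gcloud container clusters create private-test \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32   # your own IP only

# Later (step 4): drop the whitelist so GitLab can reach the master API.
gcloud container clusters update private-test \
  --zone us-central1-a \
  --no-enable-master-authorized-networks
```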
What is the current bug behavior?
On deployment you get an error that kubectl cannot connect to a cluster on localhost:8080 (missing kubeconfig; see the demonstration after the list). These three environment variables are missing:
- KUBE_TOKEN
- KUBE_NAMESPACE
- KUBECONFIG
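The localhost:8080 part is just kubectl's fallback behaviour: with no kubeconfig and no server configured, it assumes a local unsecured API server. That is easy to demonstrate on any machine:

```shell
# With an empty kubeconfig, kubectl falls back to localhost:8080, which is
# exactly the error the deploy job prints.
KUBECONFIG=/dev/null kubectl cluster-info
# => The connection to the server localhost:8080 was refused - did you specify the right host or port?
```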
What is the expected correct behavior?
Upon fixing the connection to your cluster, GitLab should set those three environment variables and use them on deployments.
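Concretely, once the variables are present the deploy job should be able to build a working kubeconfig and talk to the real cluster, along these lines (KUBE_URL and KUBE_CA_PEM_FILE are the other deployment variables GitLab normally provides alongside the three above; this is a sketch, not the actual Auto DevOps code):

```shell
# Sketch: what a deploy job can do once the cluster variables are injected.
kubectl config set-cluster gitlab-cluster \
  --server="$KUBE_URL" \
  --certificate-authority="$KUBE_CA_PEM_FILE"
kubectl config set-credentials gitlab-deploy --token="$KUBE_TOKEN"
kubectl config set-context gitlab-deploy \
  --cluster=gitlab-cluster --user=gitlab-deploy --namespace="$KUBE_NAMESPACE"
kubectl config use-context gitlab-deploy
kubectl get pods   # now reaches the cluster instead of localhost:8080
```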
Relevant logs and/or screenshots
```shell
$ ensure_namespace
The connection to the server localhost:8080 was refused - did you specify the right host or port?
ERROR: Job failed: exit code 1
```
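For context, ensure_namespace in the Auto DevOps template is roughly the following (an approximation, not the exact source), so the very first kubectl call in the job already fails when no kubeconfig has been written:

```shell
# Approximation of the Auto DevOps helper: make sure the project's
# namespace exists before deploying into it.
ensure_namespace() {
  kubectl describe namespace "$KUBE_NAMESPACE" || kubectl create namespace "$KUBE_NAMESPACE"
}
```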
Possible fixes
I think that when a cluster is added, GitLab should first check that it can actually connect to the master API, and show an error (or at least a warning) instead of saving the cluster in a silently broken state.
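Even a trivial reachability check against the master API at save time (and perhaps again before injecting the deployment variables) would surface the problem immediately. In shell terms the check could look like this; the endpoint and token are placeholders and the real implementation would live in the GitLab backend:

```shell
# Sketch of a reachability check GitLab could run when a cluster is added:
# hit the Kubernetes /version endpoint with the provided service account token.
API_URL="https://<master-endpoint>"        # placeholder
TOKEN="<service-account-token>"            # placeholder

# --insecure is only for the sketch; a real check should verify the cluster CA.
if curl --silent --fail --insecure --max-time 10 \
     -H "Authorization: Bearer $TOKEN" "$API_URL/version" >/dev/null; then
  echo "cluster reachable - safe to save and enable deployments"
else
  echo "cluster unreachable - warn the user instead of saving a broken cluster"
fi
```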