Can't use Auto DevOps with a deleted namespace

Using gitlab-ce=12.2.4-ce.0

I have a GitLab CE instance, a Rancher-built Kubernetes cluster, and a lot of ripped-out hair from dealing with firewall issues, certificate issues, and learning Auto DevOps, Helm, Herokuish, Rancher, and VMware vSphere from scratch.

I finally have a cluster my GitLab can integrate with successfully: I can install runners and Prometheus on it, and Prometheus can even get a PVC from VMware. I got a .NET Core project configured with Auto DevOps to build (with a Dockerfile, because I haven't quite figured out Herokuish buildpacks yet). I bypassed "test" (because that needs Herokuish regardless). I got it to deploy to staging, but it failed the first time because my project was listening on a port other than 5000, so the health check failed. And then Helm refused to redeploy on top of the failed deployment after I fixed the port.
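For context on the port issue: the auto-deploy-app chart that Auto DevOps uses defaults to port 5000 for the container and its readiness/liveness probes. A sketch of overriding that by hand, assuming the `service.internalPort`/`service.externalPort` value names from the chart shipped around GitLab 12.x (verify against your chart version's values.yaml before relying on them):

```shell
# Hedged sketch: point the auto-deploy-app chart (and its health probes)
# at the port the app actually listens on. Value names are assumptions
# taken from the 12.x-era chart; release/namespace names match this job.
helm upgrade --install staging chart/ \
  --namespace dotnetcore-1-staging \
  --set service.internalPort=8080 \
  --set service.externalPort=8080
```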

So I just deleted the entire namespace "dotnetcore-1-staging" from my cluster, expecting that to work.

But the deployment now fails at "ensure_namespace", ironically. Surely wiping a namespace is the one tool we should be able to use to clean up a failed deployment and start over?

Running with gitlab-runner 12.1.0 (de7731dd)
  on runner-gitlab-runner-6585f468f8-mrdch UZyG21Lj
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ...
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-uzyg21lj-project-1-concurrent-0bfzmr to be running, status is Pending
Running on runner-uzyg21lj-project-1-concurrent-0bfzmr via runner-gitlab-runner-6585f468f8-mrdch...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/chrisb/dotnetcore/.git/
Created fresh repository.
From http://gitlab.example.com/chrisb/dotnetcore
 * [new branch]      master     -> origin/master
Checking out 8f61eea1 as master...

Skipping Git submodules setup
$ auto-deploy check_kube_domain
+ export RELEASE_NAME=staging
+ RELEASE_NAME=staging
+ auto_database_url=postgres://user:testing-password@staging-postgres:5432/staging
+ export DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ export TILLER_NAMESPACE=dotnetcore-1-staging
+ TILLER_NAMESPACE=dotnetcore-1-staging
+ export HELM_HOST=localhost:44134
+ HELM_HOST=localhost:44134
+ option=check_kube_domain
+ case $option in
+ check_kube_domain
+ [[ -z k8.example.com ]]
+ true
+ export RELEASE_NAME=staging
+ RELEASE_NAME=staging
+ auto_database_url=postgres://user:testing-password@staging-postgres:5432/staging
+ export DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ export TILLER_NAMESPACE=dotnetcore-1-staging
+ TILLER_NAMESPACE=dotnetcore-1-staging
+ export HELM_HOST=localhost:44134
+ HELM_HOST=localhost:44134
+ option=download_chart
+ case $option in
+ download_chart
+ [[ ! -d chart ]]
+ auto_chart=gitlab/auto-deploy-app
++ basename gitlab/auto-deploy-app
+ auto_chart_name=auto-deploy-app
+ auto_chart_name=auto-deploy-app
+ auto_chart_name=auto-deploy-app
+ helm init --client-only
$ auto-deploy download_chart
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
+ helm repo add gitlab https://charts.gitlab.io
"gitlab" has been added to your repositories
+ [[ ! -d gitlab/auto-deploy-app ]]
+ helm fetch gitlab/auto-deploy-app --untar
+ '[' auto-deploy-app '!=' chart ']'
+ mv auto-deploy-app chart
+ helm dependency update chart/
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts
+ helm dependency build chart/
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts
$ auto-deploy ensure_namespace
+ export RELEASE_NAME=staging
+ RELEASE_NAME=staging
+ auto_database_url=postgres://user:testing-password@staging-postgres:5432/staging
+ export DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ DATABASE_URL=postgres://user:testing-password@staging-postgres:5432/staging
+ export TILLER_NAMESPACE=dotnetcore-1-staging
+ TILLER_NAMESPACE=dotnetcore-1-staging
+ export HELM_HOST=localhost:44134
+ HELM_HOST=localhost:44134
+ option=ensure_namespace
+ case $option in
+ ensure_namespace
+ kubectl get namespace dotnetcore-1-staging
error: the server doesn't have a resource type "namespace"
+ kubectl create namespace dotnetcore-1-staging
error: You must be logged in to the server (Unauthorized)
ERROR: Job failed: command terminated with exit code 1
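My reading of the log (an assumption, not something the log states outright) is that deleting the namespace also deleted the service account and token secret GitLab had created inside it, so the credentials GitLab caches and injects into the job are now rejected, hence the "Unauthorized". A sketch of inspecting and recreating those resources by hand; the service account name is a guess based on GitLab's usual per-environment naming:

```shell
# Assumption: GitLab manages a per-environment service account inside the
# namespace it creates; deleting the namespace deleted that account along
# with its token secret. All resource names below are illustrative guesses.
kubectl create namespace dotnetcore-1-staging

# Recreate a service account for GitLab to use (name is a guess).
kubectl -n dotnetcore-1-staging create serviceaccount \
  dotnetcore-1-staging-service-account

# Check whether a token secret was generated for it.
kubectl -n dotnetcore-1-staging get secrets
```

If your GitLab version has it, the cluster page (Operations > Kubernetes > your cluster > Advanced settings) also offers a "Clear cluster cache" action, which should make GitLab forget the deleted namespace and recreate it with fresh credentials on the next deploy.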
Edited Sep 12, 2019 by Chris