@mayra-cabrera The way I understand it, the name of the service account, $KUBE_SERVICE_ACCOUNT, is needed by helm so it can be passed to the pod where a remote Tiller is installed. The remote Tiller pod, usually called tiller-deploy-XXXX, will then start with the permissions of that service account.
With a local tiller binary, it will just use the KUBECONFIG credentials which we already have locally instead.
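In other words, the service account only matters for the remote case. A hedged sketch of the contrast (assuming Helm 2.x and that the account already exists in $TILLER_NAMESPACE):

```shell
# Remote Tiller: the service account name is passed at init time, and the
# tiller-deploy-XXXX pod then acts in the cluster with that account's permissions.
helm init --upgrade \
  --tiller-namespace "$TILLER_NAMESPACE" \
  --service-account "$KUBE_SERVICE_ACCOUNT"

# Local Tiller: the binary runs next to the helm client and authenticates to
# the API server with whatever credentials KUBECONFIG already provides, so
# there is no service account name to pass to helm.
tiller &                    # listens on :44134 by default
export HELM_HOST=":44134"
```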
NOTE: the KUBECONFIG on CI currently uses the all-powerful GitLab token, which we should still replace with a less-privileged service account's token.
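Not blocking for this issue, but as a strawman, a less-privileged account could be created along these lines (a sketch only; the deploy-bot name and the edit role are hypothetical placeholders, not a decision on the actual permissions, and $KUBE_NAMESPACE is the GitLab-provided namespace variable):

```shell
# Namespace-scoped service account instead of the cluster-wide GitLab token.
kubectl create serviceaccount deploy-bot --namespace "$KUBE_NAMESPACE"
kubectl create rolebinding deploy-bot-edit \
  --clusterrole=edit \
  --serviceaccount="$KUBE_NAMESPACE:deploy-bot" \
  --namespace "$KUBE_NAMESPACE"

# Pull out its token and register it as a kubectl user, so the CI KUBECONFIG
# can reference that user instead of the all-powerful token.
SECRET=$(kubectl get serviceaccount deploy-bot --namespace "$KUBE_NAMESPACE" \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" --namespace "$KUBE_NAMESPACE" \
  -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials deploy-bot --token="$TOKEN"
```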
@tkuah @mayra-cabrera if this works I'm inclined to do things this way. It also solves the problem that this Tiller is missing mutual SSL, so I recommend we incorporate Thong's approach, which seems to be something like:
```diff
@@ -251,6 +252,7 @@
     GIT_STRATEGY: none
   script:
     - install_dependencies
+    - install_tiller
     - delete
   environment:
     name: review/$CI_COMMIT_REF_NAME
@@ -636,7 +638,12 @@
     curl "https://kubernetes-helm.storage.googleapis.com/helm-v${HELM_VERSION}-linux-amd64.tar.gz" | tar zx
     mv linux-amd64/helm /usr/bin/
+    mv linux-amd64/tiller /usr/bin/
     helm version --client
+    tiller -version
+
+    helm init --client-only
+    helm plugin install https://github.com/adamreese/helm-local

     curl -L -o /usr/bin/kubectl "https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
     chmod +x /usr/bin/kubectl
@@ -749,8 +756,13 @@
   function install_tiller() {
     echo "Checking Tiller..."
-    helm init --upgrade
-    kubectl rollout status -n "$TILLER_NAMESPACE" -w "deployment/tiller-deploy"
+    #helm init --upgrade
+    #kubectl rollout status -n "$TILLER_NAMESPACE" -w "deployment/tiller-deploy"
+
+    helm local start
+    helm local status
+    export HELM_HOST=":44134"
+
     if ! helm version --debug; then
       echo "Failed to init Tiller."
       return 1
```
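For anyone unfamiliar with the plugin: `helm local start` runs the tiller binary inside the CI job itself, and HELM_HOST points the helm client at it over localhost, so no tiller-deploy pod (and no mutual SSL between client and Tiller over the network) is needed. If we would rather not depend on the plugin, a rough equivalent would be something like this sketch, assuming the tiller binary is already on PATH as in the diff above:

```shell
export HELM_HOST="localhost:44134"

# Run Tiller in the background of the CI job. It talks to the API server with
# the job's KUBECONFIG credentials, so no in-cluster service account is needed.
tiller -listen "$HELM_HOST" -alsologtostderr > /dev/null 2>&1 &

helm init --client-only
helm version --debug
```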