GitLab Chart issues · https://gitlab.com/gitlab-org/charts/gitlab/-/issues · 2023-11-29T16:25:41Z

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/727
# How to use nginx-ingress on self-hosted Kubernetes
Author: Pierre-Pascal Lapensee · Updated: 2023-11-29T16:25:41Z

Kubernetes: v1.11.2 (Rancher v2.0.8)
Linux: Ubuntu 16.04
Docker: v1.11.2
Gitlab: 11.2.1
I used this for the install:
```
helm upgrade --install gitlab gitlab/gitlab --timeout 900 \
--set global.hosts.domain=domain.com \
--set certmanager-issuer.email=mail@domain.com \
--set gitlab.migrations.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce \
--set gitlab.sidekiq.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce \
--set gitlab.unicorn.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce \
--set gitlab.unicorn.workhorse.image=registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce \
--set gitlab.task-runner.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce \
--namespace gitlab
```
No domain resolves to any service. When I deploy an external nginx-ingress as a DaemonSet it works great, but I would like to use the one that GitLab provides. Is there any option I can use to make it work?
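If the cluster has no cloud load-balancer provider (common on self-hosted Rancher clusters), one option is to run the bundled controller as a DaemonSet bound to host ports instead of relying on a LoadBalancer Service; a hedged values sketch, assuming the upstream nginx-ingress subchart keys (they may differ between chart versions):

```yaml
# Hedged sketch; key names follow the upstream nginx-ingress subchart
# and may differ between chart versions.
nginx-ingress:
  controller:
    kind: DaemonSet
    daemonset:
      useHostPort: true
    service:
      type: NodePort   # or keep LoadBalancer and set externalIPs to the node addresses
```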
Also, I can't figure out how to use SSH with GitLab, because the node already uses port 22 for its own SSH server.

Milestone: 0.1.4 · Assignee: Jason Plum

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/646
# Cannot load existing uploads, artifacts after restoring
Author: Mohit Choudhary · Updated: 2023-08-23T22:49:07Z

Reproduction steps:
- Install new gitlab helm chart
- Restore the backup taken from previous omnibus installation which has local uploads (instead of object storage)
- Old uploads (avatars etc) & artifacts fail to download
- New uploads (after restore) are shown correctly
Furthermore, I checked that, in the case of old uploads, it loads the data from local storage (instead of object storage).
However, for new uploads, it loads correctly from object storage.
Is there any migration missing to change existing uploads to object storage?

Milestone: 0.1.3

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/639
# Add installation guide for installing to Openshift
Author: Marin Jankovski · Updated: 2019-11-22T16:26:23Z

As the title says: install using the charts on Openshift and write the step-by-step docs.

Milestone: 0.1.10 · Assignee: Balasankar 'Balu' C

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/606
# Improve document for EKS deployment guide
Author: JB Vasseur · Updated: 2019-11-01T00:26:59Z

There is a guide for EKS deployment of the GitLab chart, but it is quite generic and, to be honest, hard for Kubernetes newcomers.
* https://gitlab.com/charts/gitlab/blob/master/doc/cloud/eks.md
What I am trying to do here is to describe a complete guide, step by step, that anyone could reproduce.
---
## Cluster build
### Create the cluster
Using the [eksctl](https://qiita.com/jb-vasseur/items/6f7d5f8eef1913da211b) tool.
```shell-session
> eksctl create cluster \
--cluster-name $NAME \
--nodes 3 \
--nodes-min 3 \
--nodes-max 5 \
--node-type t2.medium \
--region us-west-2
2018-07-18T13:33:01+09:00 [ℹ] importing SSH public key "/Users/jb/.ssh/id_rsa.pub" as "EKS-bakeneco"
2018-07-18T13:33:02+09:00 [ℹ] creating EKS cluster "bakeneco" in "us-west-2" region
2018-07-18T13:33:02+09:00 [ℹ] creating VPC stack "EKS-bakeneco-VPC"
2018-07-18T13:33:02+09:00 [ℹ] creating ServiceRole stack "EKS-bakeneco-ServiceRole"
2018-07-18T13:33:22+09:00 [✔] created ServiceRole stack "EKS-bakeneco-ServiceRole"
2018-07-18T13:34:03+09:00 [✔] created VPC stack "EKS-bakeneco-VPC"
2018-07-18T13:34:03+09:00 [ℹ] creating control plane "bakeneco"
2018-07-18T13:46:05+09:00 [✔] created control plane "bakeneco"
2018-07-18T13:46:05+09:00 [ℹ] creating DefaultNodeGroup stack "EKS-bakeneco-DefaultNodeGroup"
2018-07-18T13:49:47+09:00 [✔] created DefaultNodeGroup stack "EKS-bakeneco-DefaultNodeGroup"
2018-07-18T13:49:47+09:00 [✔] all EKS cluster "bakeneco" resources has been created
2018-07-18T13:49:47+09:00 [ℹ] wrote "kubeconfig"
2018-07-18T13:49:48+09:00 [ℹ] the cluster has 0 nodes
2018-07-18T13:49:48+09:00 [ℹ] waiting for at least 3 nodes to become ready
2018-07-18T13:50:15+09:00 [ℹ] the cluster has 4 nodes
2018-07-18T13:50:15+09:00 [ℹ] node "ip-192-168-100-194.us-west-2.compute.internal" is ready
2018-07-18T13:50:15+09:00 [ℹ] node "ip-192-168-128-38.us-west-2.compute.internal" is ready
2018-07-18T13:50:15+09:00 [ℹ] node "ip-192-168-177-212.us-west-2.compute.internal" is ready
2018-07-18T13:50:15+09:00 [ℹ] node "ip-192-168-209-209.us-west-2.compute.internal" is ready
2018-07-18T13:50:17+09:00 [ℹ] all command should work, try '/usr/local/bin/kubectl --kubeconfig kubeconfig get nodes'
2018-07-18T13:50:17+09:00 [ℹ] EKS cluster "bakeneco" in "us-west-2" region is ready
```
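eksctl wrote a local `kubeconfig` file (see the output above), so point kubectl at it before the next step:

```shell-session
> export KUBECONFIG=$PWD/kubeconfig
```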
### Connect to the cluster with kubectl
```shell-session
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-100-194.us-west-2.compute.internal Ready <none> 14m v1.10.3
ip-192-168-106-191.us-west-2.compute.internal Ready <none> 13m v1.10.3
ip-192-168-128-38.us-west-2.compute.internal Ready <none> 14m v1.10.3
ip-192-168-177-212.us-west-2.compute.internal Ready <none> 14m v1.10.3
ip-192-168-209-209.us-west-2.compute.internal Ready <none> 14m v1.10.3
> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
> kubectl get deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system kube-dns 1 1 1 1 24m
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-d2v4m 1/1 Running 1 14m
kube-system aws-node-fbcr6 1/1 Running 0 13m
kube-system aws-node-ggnps 1/1 Running 1 14m
kube-system aws-node-s2mnr 1/1 Running 0 14m
kube-system aws-node-z274r 1/1 Running 0 14m
kube-system kube-dns-7cc87d595-kdkcv 3/3 Running 0 19m
kube-system kube-proxy-5m9gf 1/1 Running 0 13m
kube-system kube-proxy-8fh9v 1/1 Running 0 14m
kube-system kube-proxy-ggk7j 1/1 Running 0 14m
kube-system kube-proxy-qngg2 1/1 Running 0 14m
kube-system kube-proxy-rxhjn 1/1 Running 0 14m
```
### [Network prerequisites](https://docs.gitlab.com/ee/install/kubernetes/preparation/networking.html)
> Amazon EKS utilizes Elastic Load Balancers, which are addressed by DNS name and cannot be known ahead of time. Skip this section.
Really?? Okay...
### [Persistent storage](https://gitlab.com/charts/gitlab/blob/master/doc/installation/storage.md)
EKS requires persistent volumes to be created ahead of time... Using the dynamic volume provisioning solution locks our storage to a specific zone. Anyway, I did not understand how to use manual provisioning, so I am going with the dynamic one.
```yaml:gp2-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-west-2a
reclaimPolicy: Retain
mountOptions:
  - debug
```
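For comparison, manual provisioning would mean creating an EBS volume yourself and declaring a matching PersistentVolume; a hedged sketch (the volume ID and size are hypothetical):

```yaml
# Hedged sketch of manual provisioning; the EBS volume ID is hypothetical
# and the volume must exist in the same zone as the nodes that mount it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-manual-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0  # hypothetical
    fsType: ext4
```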
Making this storage default, just to be sure.
```shell-session
> kubectl create -f gp2-storage-class.yaml
storageclass.storage.k8s.io "gp2" created
> kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io "gp2" patched
> kubectl get storageclass
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 37s
```
### [Tiller](https://docs.gitlab.com/ee/install/kubernetes/preparation/tiller.html)
> Some clusters require authentication to use kubectl to create the Tiller roles.
> For clusters like Amazon EKS, you can directly upload the RBAC configuration.
```yaml:rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
```shell-session
jb@JBs-MacBook-Pro ~/D/g/bakeneco> kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
```
Then install and initialize Tiller.
```shell-session
jb@JBs-MacBook-Pro ~/D/g/bakeneco> helm init --service-account tiller
$HELM_HOME has been configured at /Users/jb/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```
Helm Tiller requires a flag to be enabled to work properly on EKS:
```shell-session
> kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'
deployment.extensions "tiller-deploy" not patched
```
Hmm, `not patched`?? OK, let's pretend that [this is a bug](https://github.com/kubernetes/kubernetes/issues/42494) and that the patch worked.
## GitLab Helm Chart install
Update dependencies, make sure Tiller is okay with us.
```shell-session
> helm dependencies update
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "gitlab" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 4 charts
Downloading cert-manager from repo https://kubernetes-charts.storage.googleapis.com/
Downloading prometheus from repo https://kubernetes-charts.storage.googleapis.com/
Downloading postgresql from repo https://kubernetes-charts.storage.googleapis.com/
Downloading gitlab-runner from repo https://charts.gitlab.io/
Deleting outdated charts
```
Tiller was not okay with us, so we most likely need to do the trick for Tiller to work with RBAC.
```yaml:cluster-admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: cluster-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - '*'
  - nonResourceURLs:
      - '*'
    verbs:
      - '*'
```
I am feeling like doing dirty things now.
```shell-session
> kubectl --namespace kube-system apply -f cluster-admin-role.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io "cluster-admin" configured
> helm init --upgrade --service-account tiller
$HELM_HOME has been configured at /Users/jb/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
```
Then hit the chart install command and pray...
```shell-session
> helm upgrade --install gitlab gitlab/gitlab \
--timeout 600 \
--set global.hosts.domain=bakeneco.io \
--set gitlab.migrations.initialRootPassword="XXX" \
--set certmanager-issuer.email=XX@YYY.com
Release "gitlab" does not exist. Installing it now.
NAME: gitlab
LAST DEPLOYED: Wed Jul 18 17:50:08 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
gitlab-certmanager-issuer 1 11s
certmanager-gitlab 1 11s
gitlab-gitlab-runner 1 11s
gitlab-nginx-ingress 1 11s
gitlab-prometheus-alertmanager 1 11s
gitlab-prometheus-kube-state-metrics 1 11s
gitlab-prometheus-node-exporter 1 11s
gitlab-prometheus-server 1 11s
==> v1/RoleBinding
NAME AGE
gitlab-certmanager-issuer 3s
gitlab-nginx-ingress 3s
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
gitlab-unicorn gitlab.bakeneco.io 80, 443 2s
gitlab-minio minio.bakeneco.io 80, 443 2s
gitlab-registry registry.bakeneco.io 80, 443 2s
==> v2beta1/HorizontalPodAutoscaler
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
gitlab-gitlab-shell Deployment/gitlab-gitlab-shell <unknown>/75% 2 10 0 2s
gitlab-sidekiq-all-in-1 Deployment/gitlab-sidekiq-all-in-1 <unknown>/75% 1 10 0 2s
gitlab-unicorn Deployment/gitlab-unicorn <unknown>/75% 2 10 0 2s
gitlab-registry Deployment/gitlab-registry <unknown>/75% 2 10 0 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
certmanager-gitlab-54467869c4-c7q6t 0/2 ContainerCreating 0 2s
gitlab-gitlab-runner-845c5b46d8-9vwjm 0/1 Init:0/1 0 2s
gitlab-gitlab-shell-7d8cd44948-lccrx 0/1 Init:0/1 0 2s
gitlab-sidekiq-all-in-1-58c996c9fb-mrcd5 0/1 Init:0/2 0 2s
gitlab-task-runner-866bc87864-tgg4p 0/1 Init:0/1 0 2s
gitlab-unicorn-5c69b9487c-v4pnq 0/1 Init:0/2 0 2s
gitlab-minio-99bff897b-6cplp 0/1 Pending 0 2s
gitlab-nginx-ingress-controller-65d58cbf4d-5zxlm 0/1 ContainerCreating 0 2s
gitlab-nginx-ingress-controller-65d58cbf4d-kp9fg 0/1 ContainerCreating 0 2s
gitlab-nginx-ingress-controller-65d58cbf4d-xttvr 0/1 ContainerCreating 0 2s
gitlab-nginx-ingress-default-backend-699b9476dd-5kjjt 0/1 ContainerCreating 0 1s
gitlab-nginx-ingress-default-backend-699b9476dd-jx86w 0/1 Pending 0 1s
gitlab-postgresql-5578b89f58-4jp9k 0/2 ContainerCreating 0 1s
gitlab-prometheus-server-847c8bb76-9jqm6 0/2 Pending 0 1s
gitlab-redis-6b8b6dbfd9-bs24z 0/2 Init:0/1 0 1s
gitlab-registry-7f4b9ccfc8-k4zsj 0/1 Pending 0 1s
gitlab-gitaly-0 0/1 Pending 0 2s
gitlab-issuer.1-2xqhq 0/1 ContainerCreating 0 2s
gitlab-migrations.1-zs7tj 0/1 Init:0/1 0 2s
gitlab-minio-create-buckets.1-97wvg 0/1 ContainerCreating 0 2s
==> v1beta1/CustomResourceDefinition
NAME AGE
certificates.certmanager.k8s.io 7s
clusterissuers.certmanager.k8s.io 3s
issuers.certmanager.k8s.io 3s
==> v1beta1/ClusterRole
certmanager-gitlab 3s
gitlab-prometheus-kube-state-metrics 3s
gitlab-prometheus-server 3s
==> v1beta1/ClusterRoleBinding
NAME AGE
certmanager-gitlab 3s
gitlab-prometheus-alertmanager 3s
gitlab-prometheus-kube-state-metrics 3s
gitlab-prometheus-node-exporter 3s
gitlab-prometheus-server 3s
==> v1/Role
NAME AGE
gitlab-certmanager-issuer 3s
gitlab-nginx-ingress 3s
==> v1beta2/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
gitlab-gitlab-shell 1 1 1 0 2s
gitlab-sidekiq-all-in-1 1 1 1 0 2s
gitlab-task-runner 1 1 1 0 2s
gitlab-unicorn 1 1 1 0 2s
gitlab-minio 1 1 1 0 2s
gitlab-nginx-ingress-controller 3 0 0 0 2s
gitlab-nginx-ingress-default-backend 2 0 0 0 2s
gitlab-redis 1 0 0 0 2s
gitlab-registry 1 0 0 0 2s
==> v1beta2/StatefulSet
NAME DESIRED CURRENT AGE
gitlab-gitaly 1 1 2s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
gitlab-gitaly N/A 1 0 2s
gitlab-gitlab-shell N/A 1 0 2s
gitlab-sidekiq N/A 1 0 2s
gitlab-unicorn N/A 1 0 2s
gitlab-minio-v1 N/A 1 0 2s
gitlab-nginx-ingress-controller 2 N/A 0 2s
gitlab-nginx-ingress-default-backend 1 N/A 0 2s
gitlab-redis-v1 N/A 1 0 2s
gitlab-registry-v1 N/A 1 0 2s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gitlab-gitaly ClusterIP None <none> 8075/TCP,9236/TCP 3s
gitlab-gitlab-shell ClusterIP 10.100.96.36 <none> 22/TCP 3s
gitlab-unicorn ClusterIP 10.100.170.239 <none> 8080/TCP,8181/TCP 3s
gitlab-minio-svc ClusterIP 10.100.14.232 <none> 9000/TCP 3s
gitlab-nginx-ingress-controller LoadBalancer 10.100.234.221 <pending> 80:30364/TCP,443:30849/TCP,22:31508/TCP 3s
gitlab-nginx-ingress-default-backend ClusterIP 10.100.243.113 <none> 80/TCP 3s
gitlab-postgresql ClusterIP 10.100.10.190 <none> 5432/TCP 3s
gitlab-prometheus-server ClusterIP 10.100.191.234 <none> 80/TCP 3s
gitlab-redis ClusterIP 10.100.110.213 <none> 6379/TCP,9121/TCP 3s
gitlab-registry ClusterIP 10.100.51.212 <none> 5000/TCP 2s
==> v1/Job
NAME DESIRED SUCCESSFUL AGE
gitlab-issuer.1 1 0 2s
gitlab-migrations.1 1 0 2s
gitlab-minio-create-buckets.1 1 0 2s
==> v1/ConfigMap
NAME DATA AGE
gitlab-certmanager-issuer-certmanager 2 11s
gitlab-gitlab-runner 3 11s
gitlab-gitaly 3 11s
gitlab-gitlab-shell 2 11s
gitlab-nginx-ingress-tcp 1 11s
gitlab-migrations 4 11s
gitlab-sidekiq-all-in-1 1 11s
gitlab-sidekiq 6 11s
gitlab-task-runner 4 11s
gitlab-unicorn 8 11s
gitlab-unicorn-tests 1 11s
gitlab-minio-config-cm 3 11s
gitlab-nginx-ingress-controller 7 11s
gitlab-postgresql 0 11s
gitlab-prometheus-server 3 11s
gitlab-redis 2 11s
gitlab-registry 2 11s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gitlab-minio Pending gp2 11s
gitlab-postgresql Bound pvc-9b3e012b-8a67-11e8-8b92-02b5389a29ae 8Gi RWO gp2 11s
gitlab-prometheus-server Bound pvc-9b3f2af5-8a67-11e8-8b92-02b5389a29ae 8Gi RWO gp2 11s
gitlab-redis Bound pvc-9b400dee-8a67-11e8-8b92-02b5389a29ae 5Gi RWO gp2 11s
==> v1beta1/Role
NAME AGE
gitlab-gitlab-runner 3s
==> v1beta1/RoleBinding
NAME AGE
gitlab-gitlab-runner 3s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
certmanager-gitlab 1 1 1 0 2s
gitlab-gitlab-runner 1 1 1 0 2s
gitlab-postgresql 1 1 1 0 2s
gitlab-prometheus-server 1 1 1 0 2s
```
Confirming later how it went:
```shell-session
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default certmanager-gitlab-54467869c4-c7q6t 2/2 Running 0 16m
default cm-gitlab-gitlab-tls-yyrgl 1/1 Running 0 45s
default cm-gitlab-minio-tls-gucwn 1/1 Running 0 45s
default cm-gitlab-registry-tls-epyat 1/1 Running 0 46s
default gitlab-gitaly-0 1/1 Running 0 16m
default gitlab-gitlab-runner-845c5b46d8-9vwjm 0/1 Error 8 16m
default gitlab-gitlab-shell-7d8cd44948-fxxls 1/1 Running 0 15m
default gitlab-gitlab-shell-7d8cd44948-lccrx 1/1 Running 0 16m
default gitlab-issuer.1-2xqhq 0/1 Completed 0 16m
default gitlab-migrations.1-zs7tj 0/1 Completed 0 16m
default gitlab-minio-99bff897b-6cplp 1/1 Running 0 16m
default gitlab-minio-create-buckets.1-97wvg 0/1 Completed 0 16m
default gitlab-nginx-ingress-controller-65d58cbf4d-5zxlm 1/1 Running 0 16m
default gitlab-nginx-ingress-controller-65d58cbf4d-kp9fg 1/1 Running 0 16m
default gitlab-nginx-ingress-controller-65d58cbf4d-xttvr 1/1 Running 0 16m
default gitlab-nginx-ingress-default-backend-699b9476dd-5kjjt 1/1 Running 0 16m
default gitlab-nginx-ingress-default-backend-699b9476dd-jx86w 1/1 Running 0 16m
default gitlab-postgresql-5578b89f58-4jp9k 2/2 Running 0 16m
default gitlab-prometheus-server-847c8bb76-9jqm6 2/2 Running 0 16m
default gitlab-redis-6b8b6dbfd9-bs24z 2/2 Running 0 16m
default gitlab-registry-7f4b9ccfc8-k4zsj 1/1 Running 0 16m
default gitlab-registry-7f4b9ccfc8-kf6sm 1/1 Running 0 15m
default gitlab-sidekiq-all-in-1-58c996c9fb-mrcd5 1/1 Running 0 16m
default gitlab-task-runner-866bc87864-tgg4p 1/1 Running 0 16m
default gitlab-unicorn-5c69b9487c-j5klw 1/1 Running 0 15m
default gitlab-unicorn-5c69b9487c-v4pnq 1/1 Running 0 16m
kube-system aws-node-d2v4m 1/1 Running 1 4h
kube-system aws-node-fbcr6 1/1 Running 0 4h
kube-system aws-node-ggnps 1/1 Running 1 4h
kube-system aws-node-s2mnr 1/1 Running 0 4h
kube-system aws-node-z274r 1/1 Running 0 4h
kube-system kube-dns-7cc87d595-kdkcv 3/3 Running 0 4h
kube-system kube-proxy-5m9gf 1/1 Running 0 4h
kube-system kube-proxy-8fh9v 1/1 Running 0 4h
kube-system kube-proxy-ggk7j 1/1 Running 0 4h
kube-system kube-proxy-qngg2 1/1 Running 0 4h
kube-system kube-proxy-rxhjn 1/1 Running 0 4h
kube-system tiller-deploy-f5597467b-b2c5c 1/1 Running 0 3h
```
So, GitLab Runner did not make it.
```shell-session
> kubectl logs gitlab-gitlab-runner-845c5b46d8-9vwjm
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
ERROR: Registering runner... failed runner=3clGLmm7 status=couldn't execute POST against https://gitlab.bakeneco.io/api/v4/runners: Post https://gitlab.bakeneco.io/api/v4/runners: dial tcp: lookup gitlab.bakeneco.io on 10.100.0.10:53: no such host
PANIC: Failed to register this runner. Perhaps you are having network problems
```
Ok, so I would need to set the DNS record for it to succeed.
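As an aside, the runner could instead be pointed at the in-cluster service rather than the public hostname; a hedged values sketch (the `gitlabUrl` key is from the gitlab-runner subchart; the service name and port are taken from the `kubectl get services` output above):

```yaml
# Hedged sketch; service name and port per the earlier service listing.
gitlab-runner:
  gitlabUrl: http://gitlab-unicorn:8181/
```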
```shell-session
> kubectl describe service gitlab-nginx-ingress-controller | grep Ingress
LoadBalancer Ingress: aa03b57e68a6711e88b9202b5389a29a-974622501.us-west-2.elb.amazonaws.com
```
Then, go to my favorite Route53 config and set the record name.
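For reference, the same record can be expressed as a Route53 change batch (the hosted zone ID is omitted; the wildcard name and ELB target follow the values above); a hedged sketch:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.bakeneco.io",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "aa03b57e68a6711e88b9202b5389a29a-974622501.us-west-2.elb.amazonaws.com"}
        ]
      }
    }
  ]
}
```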
Wait for some time, and now we get GitLab Runner up and running :)
```shell-session
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default certmanager-gitlab-54467869c4-c7q6t 2/2 Running 0 44m
default gitlab-gitaly-0 1/1 Running 0 44m
default gitlab-gitlab-runner-845c5b46d8-9vwjm 1/1 Running 13 44m
default gitlab-gitlab-shell-7d8cd44948-fxxls 1/1 Running 0 44m
default gitlab-gitlab-shell-7d8cd44948-lccrx 1/1 Running 0 44m
default gitlab-issuer.1-2xqhq 0/1 Completed 0 44m
default gitlab-migrations.1-zs7tj 0/1 Completed 0 44m
default gitlab-minio-99bff897b-6cplp 1/1 Running 0 44m
default gitlab-minio-create-buckets.1-97wvg 0/1 Completed 0 44m
default gitlab-nginx-ingress-controller-65d58cbf4d-5zxlm 1/1 Running 0 44m
default gitlab-nginx-ingress-controller-65d58cbf4d-kp9fg 1/1 Running 0 44m
default gitlab-nginx-ingress-controller-65d58cbf4d-xttvr 1/1 Running 0 44m
default gitlab-nginx-ingress-default-backend-699b9476dd-5kjjt 1/1 Running 0 44m
default gitlab-nginx-ingress-default-backend-699b9476dd-jx86w 1/1 Running 0 44m
default gitlab-postgresql-5578b89f58-4jp9k 2/2 Running 0 44m
default gitlab-prometheus-server-847c8bb76-9jqm6 2/2 Running 0 44m
default gitlab-redis-6b8b6dbfd9-bs24z 2/2 Running 0 44m
default gitlab-registry-7f4b9ccfc8-k4zsj 1/1 Running 0 44m
default gitlab-registry-7f4b9ccfc8-kf6sm 1/1 Running 0 44m
default gitlab-sidekiq-all-in-1-58c996c9fb-mrcd5 1/1 Running 0 44m
default gitlab-task-runner-866bc87864-tgg4p 1/1 Running 0 44m
default gitlab-unicorn-5c69b9487c-j5klw 1/1 Running 0 44m
default gitlab-unicorn-5c69b9487c-v4pnq 1/1 Running 0 44m
kube-system aws-node-d2v4m 1/1 Running 1 4h
kube-system aws-node-fbcr6 1/1 Running 0 4h
kube-system aws-node-ggnps 1/1 Running 1 4h
kube-system aws-node-s2mnr 1/1 Running 0 4h
kube-system aws-node-z274r 1/1 Running 0 4h
kube-system kube-dns-7cc87d595-kdkcv 3/3 Running 0 4h
kube-system kube-proxy-5m9gf 1/1 Running 0 4h
kube-system kube-proxy-8fh9v 1/1 Running 0 4h
kube-system kube-proxy-ggk7j 1/1 Running 0 4h
kube-system kube-proxy-qngg2 1/1 Running 0 4h
kube-system kube-proxy-rxhjn 1/1 Running 0 4h
kube-system tiller-deploy-f5597467b-b2c5c 1/1 Running 0 4h
```

Milestone: 0.1.7 · Assignee: Robert Marshall

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/515
# Improve building and caching process for CNG images
Author: Marin Jankovski · Updated: 2019-09-03T12:04:53Z

During https://gitlab.com/charts/gitlab/issues/490 I've noticed that we have a lot of duplication in images, as well as added layers that can easily be cleaned up.
We can also remove a lot of wait time during image builds if we have more base images that are changing less frequently than the GitLab components.
For example, we should have a `gitlab-go` base image on top of `gitlab-ruby` that can then be reused for images that require Go. Similarly for `git`.
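As a rough illustration of the layering idea, a hedged Dockerfile sketch for such a `gitlab-go` base image (the registry path, tag, and Go version are hypothetical):

```dockerfile
# Hedged sketch; base image path, tag, and Go version are hypothetical.
FROM registry.gitlab.com/gitlab-org/build/cng/gitlab-ruby:latest
ARG GO_VERSION=1.11.5
RUN curl -fsSL "https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz" \
      | tar -xz -C /usr/local
ENV PATH=/usr/local/go/bin:$PATH
```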
We can also go a step further in dividing runtime from build-time dependencies; there are still a number of packages that could be cleaned up.
cc @WarheadsSE @twk3 @Ahmadposten

Milestone: 0.1.9 · Assignee: Ian Baum

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/238
# Zero-downtime upgrade procedure
Author: DJ Mountney · Updated: 2019-09-03T11:57:45Z

We are missing documentation on how to update.
But we also are missing deployment features for rolling out upgrades without downtime.
This will need some investigation into what we should be setting in the kubernetes deploys.
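As a starting point for that investigation, the relevant knobs are the Deployment update-strategy fields; a hedged sketch (values are illustrative, not a recommendation):

```yaml
# Hedged sketch of the kind of rollout settings to evaluate.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0  # never drop below current capacity during the rollout
    maxSurge: 1        # bring up one replacement pod at a time
```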
And we will likely have to re-introduce the separation between pre-migrations and post-migrations.

Milestone: 0.1.6 · Assignees: DJ Mountney, Ahmad Hassan

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/735
# Refresh GKE Marketplace Images/Chart
Author: Joshua Lambert · Updated: 2019-09-03T11:37:08Z

Currently our GKE Marketplace offering is based off a fork of the Helm chart. I believe we have incorporated all of the changes, outside of a few values.yaml settings (for example, to reduce the replica counts to 1).
We should refresh the GKE Marketplace offering with the current images, as well as update the charts to the now-current version.
# Decisions
## Track / Version ID
We discussed how the track/version id will be used in https://gitlab.com/charts/gitlab/issues/913. The end decision is that releases will be tagged `Major.Minor`.
Thus, in the current iteration: Track 11 is the latest of 11.0; Track 11.9 is the latest of any 11.9.x release.
# Challenge Summary [ ADDED: 2019-02-26 ]
## Use a Wrapper Chart or Fork Main Cloud Native GitLab Chart
We opted to use a wrapper chart and that decision is documented in charts/gitlab#914.
TL;DR - a wrapper chart allows us to set configuration values in a Marketplace specific values.yaml without having to port extra changes into the upstream Cloud Native GitLab chart.
## Upstream NGINX Issue
A check for the semantic version in the upstream NGINX chart broke the deployment to the Marketplace. We fixed it in charts/gitlab#910.
## Marketplace Changed Base Deployer Image, No Documentation
There is now a deployer image with Tiller; it wasn't in the documentation. This was discovered and documented, along with a newly required field (APPLICATION_UID), in charts/gitlab#1042.
## Schema Automation
The `schema.yaml` required to deploy on the Marketplace also has to track all container tags/versions deployed by the Cloud Native GitLab chart. Issue #912 covers our automation of that process.
## Role Based Access Control
The Google Marketplace deployer changed how it handles access control. We cannot create ServiceAccounts from the charts; they must be pre-populated into the `schema.yaml` before chart deployment.
Issue #1040 tracks the automation work exporting ServiceAccount resources from our existing charts into the `schema.yaml` required by the Google Marketplace.
## Tooling
Repetitive work slowed down testing, so #1041 tracks the work to create the basic tooling allowing fast creation of new deployer images and testing/teardown of GKE clusters.
Many comments on its related merge request charts/deploy-image-helm-base!48 relate to discoveries made while using the tooling to test the releases.
## Deployer Validation Doesn't Understand Resources Aren't Used
The validation tool that gates deployment to the public Google Marketplace doesn't know that ClusterRole resources are not being used even if the Application Cluster Resource Definition says they are allowable. #1176 documents work to eliminate that problem.
## Application Custom Resource Definition Outdated
The validation gateway script from Google attempts to install the application and then tear it down. It continually failed in teardown because we were using the original version of the Application Custom Resource Definition, which links components using the `apiVersion`. We updated to the modern version, resolving this issue, in https://gitlab.com/charts/gitlab/merge_requests/740
## Additional Feature Request: External LoadBalancer
We were asked to support adding an optional flag for an external load balancer.

Milestone: 11.10 · Assignees: Joshua Lambert, Robert Marshall

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/717
# Implement PodSecurityPolicy support
Author: Bart Verwilst · Updated: 2019-09-03T11:31:10Z

Currently the GitLab Helm chart has no notion of PSP support. Trying to Helm install the chart on a PSP-enabled cluster fails with
```
NAME READY STATUS RESTARTS AGE
pod/gitlab-shared-secrets.1-7yg-7dmwm 0/1 CreateContainerConfigError 0 8m
```
and
```
Error: container has runAsNonRoot and image will run as root
```
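For illustration, a hedged sketch of a PodSecurityPolicy that would admit these pods (the name is hypothetical, and `RunAsAny` is deliberately permissive because some chart jobs currently run as root; tighten before real use):

```yaml
# Hedged sketch; deliberately permissive so the chart's root-running jobs
# are admitted. Tighten for production use.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: gitlab
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
```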
Creating a PSP (or, in this case, maybe not running the image as root) will allow GitLab to be installed on the more secure clusters out there. ;-)

Milestone: 0.1.6 · Assignee: Jason Plum

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/744
# EE CNG images have been replaced with CE versions
Author: DJ Mountney · Updated: 2019-09-03T11:31:04Z

Someone accidentally pushed the CE tags to EE just now in upstream GitLab. Unfortunately, because we drop the `-ee` part of the tag for our CNG images (https://gitlab.com/gitlab-org/build/CNG) and we weren't defensive against this, all our EE images have now been overwritten with their CE counterparts.

Milestone: 0.1.3 · Assignees: DJ Mountney, Robert Marshall

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/772
# Initial tag of 1.1.0 is wrong
Author: DJ Mountney · Updated: 2019-09-03T11:31:03Z

## Summary
The tag that was automatically created for 1.1.0 was created against master instead of the 1-1-stable branch.
This is likely a bug introduced with the additional changelog item handling I added to the release tools this past release.
The 1-1-stable branch looks correct, just the tag was created against the wrong branch.
I will delete the 1.1.0 tag and manually create it against the correct branch for now, and push a proper fix upstream to the release tools.

Milestone: 0.1.3 · Assignee: DJ Mountney

---
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/753
# Moving shared-secrets creation to the gitlab-operator
Author: DJ Mountney · Updated: 2019-09-03T11:14:08Z

## Proposed next step for the GitLab Operator
@Ahmadposten, @marin and @twk3 had a meeting on this earlier today, and we believe we should look at moving the shared-secrets job into the operator as the next feature (once the current operator MVP is stable enough to ship).
- Shared-secrets creation currently happens in a pre-hook, and moving it to the operator can help us reduce our Helm pre-hooks to just the operator.
- We currently 'hack' the shared-secrets job for the operator: when the operator is enabled, Helm creates the shared-secrets job but with no parallelism (meaning it won't run). The operator is therefore already completely controlling when this job is run, making it a good candidate to be completely owned by the operator.

Milestone: 0.1.6 · Ahmad Hassan

**Make it easier to switch to CE version of our official images**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/724 · 2019-09-03 · DJ Mountney
Currently, in order to use the CE images you need to swap several repositories, documented here: https://gitlab.com/charts/gitlab/blob/3d4ad75e4cc0cd43f6142640a17aecf87dc3a41b/doc/installation/deployment.md#deploy-the-community-edition
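For context, that documented swap amounts to overriding each image repository individually. A values.yaml sketch (repository names taken from the install docs of the time; treat the exact list as illustrative, since it shifts between releases):

```yaml
# Per-image CE overrides, one key per component (illustrative, not exhaustive)
gitlab:
  unicorn:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce
  sidekiq:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce
  migrations:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce
  task-runner:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce
```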
This causes issues when we update the list (like when we split workhorse out), because users end up with a mix of EE and CE images until they find and update the EE ones. During this past release we saw several responses from the community regarding this issue (https://gitlab.com/charts/gitlab/issues/715, https://gitlab.com/charts/gitlab/issues/711, https://gitlab.com/charts/gitlab/issues/694).
It would be a nicer user experience if we made the chart aware of both sets of images, CE and EE, and allowed the user to swap to CE with just a single setting.

Milestone: 0.1.3

**Deploying gitlab with an external load balancer fails in gitlab/templates/NOTES.txt**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/615 · 2019-09-03 · Cong Nguyen
From https://gitlab.com/charts/gitlab/commit/b1d475fb76f0a5595848777e6f0b44432fe46773 onward, GitLab is required to be configured with automatic TLS or a self-signed certificate.
We run GitLab without TLS and have an external load balancer set up with Let's Encrypt. GitLab Runner still runs fine with this setup, but deploying GitLab now fails:
```
Error: UPGRADE FAILED: render error in "gitlab/templates/NOTES.txt": template: gitlab/templates/NOTES.txt:6:4: executing "gitlab/templates/NOTES.txt" at <fail "Automatic TLS ...>: error calling fail: Automatic TLS certificate generation with cert-manager is disabled and no TLS certificates were provided. Self-signed certificates would be generated that do not work with gitlab-runner. Please either disable gitlab-runner by setting `gitlab-runner.install=false` or provide valid certificates.
```

Milestone: 0.1.3 · Jason Plum

**Install GitLab with existing cert-manager and ingress-shim**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/682 · 2019-09-03 · Michael

I want to install GitLab using an existing setup of Rancher 2 with existing `cert-manager` using [ingress-shim](https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html), so according to the [docs](https://github.com/rancher/charts/blob/master/charts/cert-manager/v0.3.2/app-readme.md):
> Cert-manager will create Certificate resources that reference the `ClusterIssuer` for all Ingresses that have a `kubernetes.io/tls-acme: "true"` annotation.
(The annotation is actually kind of deprecated because the current version recommends using `certmanager.k8s.io/cluster-issuer`, but that's irrelevant.)
The point is: I have `cert-manager` and I have a `ClusterIssuer`, so (as I understand it by now) all the GitLab chart should need to do is set some annotations on the created `Ingress` resources; `ingress-shim` would then automatically create `Certificate` resources and so on.
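As a sketch of what that would mean (resource, host, and secret names here are illustrative, not taken from the chart):

```yaml
# Illustrative: an Ingress annotated so ingress-shim requests a certificate for it
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab-unicorn
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: gitlab.my.domain.com
  tls:
    - hosts:
        - gitlab.my.domain.com
      secretName: gitlab-unicorn-tls
```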
In the configuration for this chart I am using:
```
global:
  hosts:
    domain: my.domain.com
  ingress:
    annotations:
      kubernetes.io/tls-acme: true
certmanager:
  install: false
certmanager-issuer:
  email: my@email.com
```
However, a resource called "gitlab-issuer" is deployed with its own configuration for ACME URLs, issuer email and so on unless `global.ingress.configureCertmanager` is false. If I try to prevent that by setting `global.ingress.configureCertmanager: false`, the chart complains that self-signed certificates will be used, which isn't enough for `gitlab-runner`.
Am I missing something?

Milestone: 0.1.3 · Jason Plum

**Installing with own ingress and cert-manager only requests a single certificate - documentation issue?**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/732 · 2019-09-03 · Gerard
I just installed GitLab via Helm to a cluster that has nginx-ingress and cert-manager already set up.
The first problem I ran into is #615, but after using `helm fetch` and removing the offending line from NOTES.txt, the whole thing installed.
What I noticed is that minio, the first service to be up and running, got a certificate from Let's Encrypt just fine. The other two services with an Ingress, gitlab-unicorn and gitlab-registry, **did not**.
The reason for this, as far as I can see, is that all three Ingresses were deployed with the same setting:

`tls.secretName: gitlab-wildcard-tls`
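Giving each Ingress its own secret would look roughly like the following values.yaml fragment (the key paths and secret names are assumptions based on the chart's layout, not copied from its docs):

```yaml
# Sketch: one TLS secret per service instead of the shared wildcard name
gitlab:
  unicorn:
    ingress:
      tls:
        secretName: gitlab-unicorn-tls
registry:
  ingress:
    tls:
      secretName: gitlab-registry-tls
minio:
  ingress:
    tls:
      secretName: gitlab-minio-tls
```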
The documentation states that the default value for `global.ingress.tls.secretName` is *none* and that the various per-service secretNames each default to some unique value. I was able to circumvent this problem by actually setting the service-specific secretName values in my values.yaml to the suggested defaults. The documentation on this subject is clearly **wrong** and might need updating.

Milestone: 0.1.3 · Jason Plum

**Update minimum Helm version to 1.9.x**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/769 · 2019-09-03 · DJ Mountney

## Summary
We should update our minimum supported Helm version to 1.9.x. This is already our recommended Helm version, but our CI is currently running on 1.8: https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/Dockerfile.gitlab-charts-build-base
The main reason I want us to upgrade is to be able to use the new `before-hook-creation` hook deletion policy available in 1.9.
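For illustration, that policy is just a hook annotation on the resource; a sketch (the Job name, image, and command are illustrative, not taken from the chart):

```yaml
# Sketch: a chart hook Job annotated so Helm deletes the previous (possibly
# failed) hook resource before creating a new one on the next install/upgrade
apiVersion: batch/v1
kind: Job
metadata:
  name: gitlab-shared-secrets
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-secrets
          image: alpine:3.8
          command: ["sh", "-c", "echo generate secrets here"]
```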
This should allow us to clean up some of the problems we have with hooks failing and making it hard to install the chart again, as we can add deletion policies to hooks that we previously didn't, because we wanted to keep failures around for debugging.

Milestone: 0.1.3 · DJ Mountney

**Failing spec test for restoring runners from backup**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/767 · 2018-10-04 · DJ Mountney

## Summary
We are currently seeing test failures in the pipeline for restoring registered runners: https://gitlab.com/charts/gitlab/-/jobs/98554317
This is due to a recent upstream change for 11.4 where the table syntax was changed on the runners: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19625
So our test selector needs to be updated to match.

Milestone: 0.1.5 · DJ Mountney

**Cannot turn off shared-secrets subchart**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/661 · 2018-09-27 · Matthias van de Meent
It looks like it is currently impossible to completely turn off the `shared-secrets` subchart.
Expected behaviour: there is some flag in the values configuration which turns off the `shared-secrets` subchart.

Current behaviour: there is no such thing. (RBAC can be turned off, but the other templates are dependent on critical configuration values in other charts.)

Milestone: 0.1.4 · Jason Plum

**Tagging using release tools isn't running pipelines**
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/773 · 2018-09-21 · DJ Mountney

## Summary
Tagging using release tools isn't running pipelines
With the recent changelog changes to the helm release-tools I created this month, the order in which the commits happen during the release has changed. So now the changelog is the last commit before tagging. This is a problem, as the release-tools includes ci-skip in the commit message, so our tag pipeline ends up not running in CI.
We need to update the release-tools to ensure our tagged commit doesn't have ci-skip.
As a workaround, you can manually trigger pipelines from the GitLab UI for the tag. This is what I have done for the `1.1.0` release.

Milestone: 0.1.3 · DJ Mountney