Installing in Amazon EKS using the Helm Chart does not configure the Load Balancer correctly.
## Summary
When installing to EKS, the Load Balancer and NodePorts do not appear to be configured correctly: a Classic Load Balancer is created, but its targets never become reachable (or possibly the problem lies elsewhere).
## Steps to reproduce
- Create a new EKS Cluster
- Install Helm
- Run:

  ```shell
  helm upgrade --install gitlab gitlab/gitlab \
    --set global.hosts.domain=xxxxx.com \
    --set global.ingress.configureCertmanager=false \
    --namespace gitlab
  ```
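For completeness, the surrounding setup can be sketched as follows. This is a hedged reconstruction, not the exact commands used: `eksctl`, the cluster name, and the region are my assumptions, and the `helm repo add` step is needed before `gitlab/gitlab` will resolve.

```shell
# Assumed provisioning via eksctl; cluster name and region are placeholders.
eksctl create cluster --name gitlab-test --region us-east-1

# Register the GitLab chart repository so that gitlab/gitlab resolves.
helm repo add gitlab https://charts.gitlab.io/
helm repo update
```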
## Example Project
N/A
## What is the current bug behavior?
A Classic Load Balancer is created that points to the node instances on ports that would appear to be correct. For example, it created 3 listeners:
| Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port | Cipher | SSL Certificate |
|---|---|---|---|---|---|
| TCP | 80 | TCP | 30099 | N/A | N/A |
| TCP | 22 | TCP | 31327 | N/A | N/A |
| TCP | 443 | TCP | 30780 | N/A | N/A |
The health check Ping Target is `HTTP:30315/healthz`.
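The ELB's own view of instance health can be used to confirm that this health check is failing; a diagnostic sketch, assuming the AWS CLI is configured, with a placeholder for the generated ELB name:

```shell
# Load balancer name is a placeholder; it is the first label of the
# internal-a6e6... hostname shown under EXTERNAL-IP.
aws elb describe-instance-health --load-balancer-name <elb-name>

# If the nodes never answer on the health-check NodePort (30315 here),
# every instance is reported as OutOfService.
```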
Running `kubectl get services --namespace gitlab` gives me:
| NAME | TYPE | CLUSTER-IP | EXTERNAL-IP | PORT(S) | AGE |
|---|---|---|---|---|---|
| gitlab-gitaly | ClusterIP | None | <none> | 8075/TCP,9236/TCP | 17h |
| gitlab-gitlab-shell | ClusterIP | xxx.xx.116.101 | <none> | 22/TCP | 17h |
| gitlab-minio-svc | ClusterIP | xxx.xx.188.77 | <none> | 9000/TCP | 17h |
| gitlab-nginx-ingress-controller | LoadBalancer | xxx.xx.179.23 | internal-a6e6... | 80:30099/TCP,443:30780/TCP,22:31327/TCP | 17h |
| gitlab-nginx-ingress-controller-metrics | ClusterIP | xxx.xx.23.253 | <none> | 9913/TCP | 17h |
| gitlab-nginx-ingress-controller-stats | ClusterIP | xxx.xx.12.5 | <none> | 18080/TCP | 17h |
| gitlab-nginx-ingress-default-backend | ClusterIP | xxx.xx.35.28 | <none> | 80/TCP | 17h |
| gitlab-postgresql | ClusterIP | xxx.xx.86.12 | <none> | 5432/TCP | 17h |
| gitlab-prometheus-server | ClusterIP | xxx.xx.69.88 | <none> | 80/TCP | 17h |
| gitlab-redis | ClusterIP | xxx.xx.74.175 | <none> | 6379/TCP,9121/TCP | 17h |
| gitlab-registry | ClusterIP | xxx.xx.253.204 | <none> | 5000/TCP | 17h |
| gitlab-unicorn | ClusterIP | xxx.xx.215.147 | <none> | 8080/TCP,8181/TCP | 17h |
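Given the symptom, the controller Service's external traffic policy is worth inspecting, since it controls which nodes respond on these NodePorts; a diagnostic sketch using the service name from the listing above:

```shell
# With "Local", only nodes running an ingress controller pod pass the
# dedicated healthCheckNodePort probe; with "Cluster", every node forwards.
kubectl get service gitlab-nginx-ingress-controller --namespace gitlab \
  -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}{"\n"}'
```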
Running `kubectl get ingresses --namespace gitlab` gives me:
| NAME | HOSTS | ADDRESS | PORTS | AGE |
|---|---|---|---|---|
| gitlab-minio | minio.xxxxx.com | internal-a6e6... | 80, 443 | 17h |
| gitlab-registry | registry.xxxxx.com | internal-a6e6... | 80, 443 | 17h |
| gitlab-unicorn | gitlab.xxxxx.com | internal-a6e6... | 80, 443 | 17h |
The problem appears to be that none of the ports are open on the nodes.
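This can be verified with a direct probe from another instance in the same VPC; the node IP and security group ID below are placeholders:

```shell
# Probe the HTTP NodePort directly (-z: scan only, -v: verbose).
nc -zv xx.xxx.xx.192 30099

# Also confirm the node security group permits the NodePort range
# (30000-32767 by default) from the ELB's security group.
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx \
  --query 'SecurityGroups[].IpPermissions'
```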
## What is the expected correct behavior?
I would expect the Load Balancer to be created pointing to nodes or IPs with the ports open and ready to receive traffic, allowing the instances to come online.
## Relevant logs and/or screenshots
```
[ec2-user@ip-xx-xxx-xx-192 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f486ed3529ef 16f501ca5d9c "/scripts/entrypoint…" 17 hours ago Up 17 hours k8s_gitlab-workhorse_gitlab-unicorn-7f49485cf-d9clt_gitlab_8084a21b-50f1-11e9-aa59-12eb50877ffc_0
b212f395d21d c5541f8226ab "/scripts/entrypoint…" 17 hours ago Up 17 hours k8s_unicorn_gitlab-unicorn-7f49485cf-d9clt_gitlab_8084a21b-50f1-11e9-aa59-12eb50877ffc_0
5f8ed082e1bf 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 18 hours ago Up 18 hours k8s_POD_gitlab-unicorn-7f49485cf-d9clt_gitlab_8084a21b-50f1-11e9-aa59-12eb50877ffc_0
4b5a3a9ea17f cc866859f8df "/bin/prometheus --c…" 18 hours ago Up 18 hours k8s_prometheus-server_gitlab-prometheus-server-69d46f5d6c-57j5z_gitlab_6f1897fa-50f1-11e9-aa59-12eb50877ffc_0
9865a41092e8 b70d7dba98e6 "/configmap-reload -…" 18 hours ago Up 18 hours k8s_prometheus-server-configmap-reload_gitlab-prometheus-server-69d46f5d6c-57j5z_gitlab_6f1897fa-50f1-11e9-aa59-12eb50877ffc_0
587c6c637ec8 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 18 hours ago Up 18 hours k8s_POD_gitlab-prometheus-server-69d46f5d6c-57j5z_gitlab_6f1897fa-50f1-11e9-aa59-12eb50877ffc_0
6126fc7ff1eb f32a97de94e1 "/entrypoint.sh /etc…" 18 hours ago Up 18 hours k8s_registry_gitlab-registry-5f8d95d4c5-sfncz_gitlab_6f4e0405-50f1-11e9-aa59-12eb50877ffc_0
cd4f7c7d17f2 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 18 hours ago Up 18 hours k8s_POD_gitlab-registry-5f8d95d4c5-sfncz_gitlab_6f4e0405-50f1-11e9-aa59-12eb50877ffc_0
11b0dc9c8ef2 846921f0fe0e "/server" 18 hours ago Up 18 hours k8s_nginx-ingress-default-backend_gitlab-nginx-ingress-default-backend-b75f49dff-zjzk6_gitlab_6ec4a96a-50f1-11e9-aa59-12eb50877ffc_0
75284a4d4c6b 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 18 hours ago Up 18 hours k8s_POD_gitlab-nginx-ingress-default-backend-b75f49dff-zjzk6_gitlab_6ec4a96a-50f1-11e9-aa59-12eb50877ffc_0
84a0f36895a8 a3f21ec4bd11 "/entrypoint.sh /ngi…" 18 hours ago Up 18 hours k8s_nginx-ingress-controller_gitlab-nginx-ingress-controller-74895bf5d-db2bm_gitlab_6e8da75e-50f1-11e9-aa59-12eb50877ffc_0
5a23ce71a6b6 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 18 hours ago Up 18 hours k8s_POD_gitlab-nginx-ingress-controller-74895bf5d-db2bm_gitlab_6e8da75e-50f1-11e9-aa59-12eb50877ffc_0
b4fe49634450 gcr.io/kubernetes-helm/tiller "/tiller" 19 hours ago Up 19 hours k8s_tiller_tiller-deploy-69458576b-rsmj6_kube-system_f56cafea-50e1-11e9-aa59-12eb50877ffc_0
f33a02a0df52 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_tiller-deploy-69458576b-rsmj6_kube-system_f56cafea-50e1-11e9-aa59-12eb50877ffc_0
91c9fc9316d8 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/coredns "/coredns -conf /etc…" 19 hours ago Up 19 hours k8s_coredns_coredns-7bcbfc4774-rnww9_kube-system_1e75b4e5-50e1-11e9-aa59-12eb50877ffc_0
8cfa883c2846 k8s.gcr.io/heapster-amd64 "/heapster --source=…" 19 hours ago Up 19 hours k8s_heapster_heapster-84c9bc48c4-bg7b5_kube-system_1e5f5182-50e1-11e9-aa59-12eb50877ffc_0
dda8ca6815e4 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_coredns-7bcbfc4774-rnww9_kube-system_1e75b4e5-50e1-11e9-aa59-12eb50877ffc_0
c5859437b98c 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_heapster-84c9bc48c4-bg7b5_kube-system_1e5f5182-50e1-11e9-aa59-12eb50877ffc_0
4cd54f651068 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni "/bin/sh -c /app/ins…" 19 hours ago Up 19 hours k8s_aws-node_aws-node-zc2rw_kube-system_e0718faf-50e0-11e9-aa59-12eb50877ffc_0
e12261496da7 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy "/bin/sh -c 'kube-pr…" 19 hours ago Up 19 hours k8s_kube-proxy_kube-proxy-5hvgx_kube-system_e07187b6-50e0-11e9-aa59-12eb50877ffc_0
d14c243951c2 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_aws-node-zc2rw_kube-system_e0718faf-50e0-11e9-aa59-12eb50877ffc_0
085778c0bc3e 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-proxy-5hvgx_kube-system_e07187b6-50e0-11e9-aa59-12eb50877ffc_0
```
## Output of checks

### Results of GitLab environment info

### Results of GitLab application check
## Possible fixes
It looks like an issue with the nginx-ingress Helm chart version being referenced (`nginx-ingress-0.30.0-1`); the latest stable chart (`nginx-ingress-1.4.0`) seems to work correctly. A workaround is to pass `--set nginx-ingress.controller.service.externalTrafficPolicy="Cluster"` when installing through Helm.
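The same workaround can be expressed in a values file rather than a `--set` flag; a minimal sketch, assuming no other overrides:

```yaml
# values.yaml -- apply with:
#   helm upgrade --install gitlab gitlab/gitlab -f values.yaml ...
nginx-ingress:
  controller:
    service:
      # The bundled chart version appears to default this to "Local", which
      # only routes on nodes that host an ingress controller pod and relies
      # on a dedicated healthCheckNodePort for the ELB health check.
      # "Cluster" makes every node forward, so the health checks pass.
      externalTrafficPolicy: "Cluster"
```

Note that with `Cluster` the client source IP is SNAT'd away before it reaches the ingress controller, which may matter for IP allowlists and access logs.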