Ingress endpoint - ELB subnets (availability zones) configuration
The GitLab-managed Ingress controller configures the AWS ELB (Elastic Load Balancer) ingress endpoint with a subnet in only one Availability Zone.
Steps to reproduce
- Add an AWS EKS (Kubernetes) cluster integration to a GitLab project
- Install Ingress. The ingress endpoint will show the AWS ELB CNAME
- Check the Availability Zones listed under Instances in the ELB configuration
- Only one of the six Availability Zones in the default us-east-1 region will be shown
- The worker nodes usually run in more than one subnet, spread across different Availability Zones
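The mismatch described in the steps above can be verified with plain set logic once the two AZ lists are in hand (for example from the ELB console and the node list). The function below is a minimal sketch; the AZ values in the example are hypothetical, chosen only to illustrate the reported single-AZ configuration.

```python
def missing_azs(elb_azs, node_azs):
    """Return the Availability Zones that host worker nodes but are
    not enabled on the ELB (nodes there are unreachable via the LB)."""
    return sorted(set(node_azs) - set(elb_azs))

# Hypothetical values illustrating the bug: the ELB was created with
# a single AZ while the worker nodes span three AZs.
elb_azs = ["us-east-1a"]
node_azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

print(missing_azs(elb_azs, node_azs))  # AZs the ELB cannot route to
```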
What is the current bug behavior?
Currently, the ingress endpoint is automatically configured with only one subnet, in one of the six Availability Zones supported by the AWS region. This causes a connectivity issue between the ingress endpoint (load balancer) and the cluster's worker nodes that run in other subnets or Availability Zones.
What is the expected correct behavior?
The ELB should be automatically configured with more than one Availability Zone; a minimum of three would be ideal.
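As a manual workaround until the automatic configuration is fixed, the missing zones can be enabled on a classic ELB with the AWS CLI command `aws elb enable-availability-zones-for-load-balancer`. The sketch below only builds that command line from the two AZ lists; it does not call AWS, and the load balancer name and AZ values are hypothetical.

```python
def enable_azs_command(lb_name, elb_azs, node_azs):
    """Build the AWS CLI command that adds the missing Availability
    Zones to a classic ELB so it covers every zone the nodes run in.
    Returns None when the ELB already covers all node AZs."""
    missing = sorted(set(node_azs) - set(elb_azs))
    if not missing:
        return None
    return ("aws elb enable-availability-zones-for-load-balancer "
            f"--load-balancer-name {lb_name} "
            "--availability-zones " + " ".join(missing))

# Hypothetical ELB name and AZ values.
print(enable_azs_command("my-ingress-elb",
                         ["us-east-1a"],
                         ["us-east-1a", "us-east-1b", "us-east-1c"]))
```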
Relevant logs and/or screenshots
Output of checks
This bug happens on GitLab.com