Unable to upgrade chart to version 6.0.0
Summary
Trying to upgrade GitLab from chart version 5.10.3 to 6.0.0. The Helm upgrade completes without any issues and all pods are up and running, but a couple of minutes after the upgrade I'm unable to reach the GitLab URL.
Logs from our AWS Classic Load Balancer show HTTP 504 errors. If I do a GET on the load balancer DNS name I get "default backend - 404".
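For illustration, that GET is just a plain request against the load balancer DNS name, something like this (the hostname is a placeholder for our redacted Classic ELB DNS name):
curl -kv https://my-classic-elb-1234567890.eu-west-1.elb.amazonaws.com/
# responds with "default backend - 404", presumably from the ingress controller's default backend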
No errors found in the NGINX logs, but this is from one of the gitlab-workhorse containers (relevant?):
gitlab-workhorse logs:
{"error":"GetGeoProxyData: Received HTTP status code: 502","geoProxyBackend":{"Scheme":"","Opaque":"","User":null,"Host":"","Path":"","RawPath":"","ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""},"level":"error","msg":"Geo Proxy: Unable to determine Geo Proxy URL. Fallback on cached value.","time":"2022-05-25T09:08:59Z"} {"correlation_id":"","duration_ms":0,"error":"badgateway: failed to receive response: dial tcp 127.0.0.1:8080: connect: connection refused","level":"error","method":"GET","msg":"","time":"2022-05-25T09:09:09Z","uri":""}
I don't really know how to troubleshoot this issue. Any pointers would be highly appreciated.
I've been upgrading almost weekly for 2+ years without any issues using the same upgrade script.
Configuration used
USER-SUPPLIED VALUES:
gitlab:
  toolbox:
    backups:
      cron:
        enabled: true
        extraArgs: --skip artifacts --skip packages --skip uploads
        schedule: '* 1 * * *'
      objectStorage:
        config:
          key: xxx
          secret: xxx
gitlab-runner:
  install: false
global:
  appConfig:
    artifacts:
      bucket: xxx
      connection:
        key: xxx
        secret: xxx
    backups:
      bucket: xxx
      tmpBucket: xxx
    lfs:
      bucket: xxx
      connection:
        key: xxx
        secret: xxx
    omniauth:
      allowSingleSignOn:
      - cognito
      enabled: true
      providers:
      - key: xxx
        secret: xxx
    packages:
      bucket: xxx
      connection:
        key: xxx
        secret: xxx
    uploads:
      bucket: xxx
      connection:
        key: xxx
        secret: xxx
  edition: ce
  hosts:
    domain: xxx
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: true
      nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 0
    configureCertmanager: false
  kas:
    enabled: false
  minio:
    enabled: false
  psql:
    host: xxx
    password:
      key: xxx
      secret: xxx
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: xxx
    enabled: true
  shell:
    port: xxx
nginx-ingress:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
        service.beta.kubernetes.io/aws-load-balancer-internal: false
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxx
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443
postgresql:
  install: false
Current behavior
Unresponsive URL.
Expected behavior
Responsive URL.
Versions
- Chart: 6.0.0
- Platform:
  - Cloud: EKS
- Kubernetes: (kubectl version)
  - Client: v1.20.4
  - Server: v1.21.9
- Helm: (helm version)
  - Client: v3.2.0
  - Server: v3.2.0
Note
I'm able to replicate the issue in my test environment with the above-stated values. Chart 5.10.3 works just fine; a helm upgrade to 6.0.0 results in an unresponsive URL.
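For completeness, the upgrade itself is a plain helm upgrade against the published chart; the release name, namespace, and values file below are placeholders:
helm repo update
helm upgrade gitlab gitlab/gitlab --version 6.0.0 --namespace gitlab -f values.yaml --timeout 600s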
Root cause
The allowed ciphers were changed in commit a8483c25.
It looks like the current (chart 6.0.0) ciphers are not compatible with the recommended (and default?) AWS load balancer policy ELBSecurityPolicy-2016-08. From the AWS documentation:
"If you have an existing load balancer with an SSL negotiation configuration that does not use the latest protocols and ciphers, we recommend that you update your load balancer to use ELBSecurityPolicy-2016-08."
I think you need to use a custom policy or go back to the old cipher values. The following works for me:
--set nginx-ingress.controller.config.ssl-ciphers="ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"
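For anyone who prefers keeping the override in the values file rather than on the command line, the same setting expressed as YAML (the cipher string is unchanged from the --set above) would look roughly like:
nginx-ingress:
  controller:
    config:
      ssl-ciphers: "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"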