Latency and errors when pods are cycled for git https
This issue documents our findings on cycling webservice pods while they are taking traffic.
We are frequently seeing errors when we deploy git-https (webservice pods) in our zonal clusters on gitlab.com.
We have made a couple of changes so far:
- Delay 60 seconds before sending the readiness check. gitlab-com/gl-infra/k8s-workloads/gitlab-com!467 (merged) We believe right now that the readiness check passes too quickly; this waits a bit longer for it to pass.
- Update the deployment strategy so that we set maxUnavailable: 0. gitlab-com/gl-infra/k8s-workloads/gitlab-com!471 (diffs) Both settings are sketched together below.
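
For reference, a minimal sketch of how these two changes fit together on the webservice Deployment. This assumes the 60-second delay is implemented as initialDelaySeconds on the readiness probe; the Deployment name, container name, probe path, and port are illustrative, not the exact values from the MRs:

```yaml
# Sketch of the two changes together (values are assumptions, not copied from the MRs).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-webservice-git   # hypothetical name
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # bring up extra pods during a rollout
      maxUnavailable: 0    # never remove an old pod before a replacement counts as available
  template:
    spec:
      containers:
        - name: webservice
          readinessProbe:
            httpGet:
              path: /-/readiness
              port: 8080
            initialDelaySeconds: 60   # wait 60s before the first readiness check is sent
            periodSeconds: 10
```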
These changes seem to help, but we are still seeing situations where the number of pods processing git https traffic drops during a deploy. This puts excessive load on the remaining pods, which lowers our apdex score.
It's clear in this graph:
The number of pods taking traffic drops from 50 all the way down to 28.
I believe what is happening here is that maxSurge (25%) is bringing up new pods, but we are terminating old pods before the new ones are actually ready to take traffic.
For maxUnavailable: 0, what is the definition of "available"? Does this mean "ready"? What if only one container is ready (workhorse), but rails isn't? One theory I have is that we pass the readiness check for workhorse very quickly, and maybe this is causing us to terminate pods too aggressively?
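
As far as I understand the Kubernetes semantics, "available" for a Deployment means a Pod that has been Ready for at least minReadySeconds (default 0), and a Pod only becomes Ready once every container in it passes its own readiness probe. So workhorse passing its probe on its own shouldn't mark the pod available while rails is still failing its probe, provided rails actually has a probe of its own. A rough sketch; container names, ports, and probe paths here are assumptions, not our actual chart values:

```yaml
# Sketch: a Pod is Ready only when *all* of its containers pass their own
# readiness probes, and a Deployment counts it as available only after it
# has been Ready for minReadySeconds. Names, ports, and paths are assumptions.
spec:
  minReadySeconds: 60            # optional extra grace between Ready and "available"
  template:
    spec:
      containers:
        - name: gitlab-workhorse
          readinessProbe:
            httpGet:
              path: /-/readiness   # workhorse's own check
              port: 8181
        - name: webservice         # rails / puma
          readinessProbe:
            httpGet:
              path: /-/readiness   # should only pass once rails can serve requests
              port: 8080
            initialDelaySeconds: 60
```

If the rails container doesn't have its own probe, or its probe only checks that puma is listening rather than that it can actually serve requests, that would line up with the theory above.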
