Change kubernetes webservice/sidekiq pods to use node local consul pod via `NodePort` service
Production Change
## Change Summary
As part of gitlab-org/gitlab#271575 (closed) we have noticed that we occasionally get errors related to Kubernetes pods talking to consul to determine the list of database slaves to use. After much debugging we believe this is related to node scaling events, because consul runs inside Kubernetes as a daemonset, which ignores normal pod evictions.

In order to alleviate this issue and rule out scaling events on other nodes in the cluster affecting consul requests, we want to configure our webservice and sidekiq pods to talk only to the consul instance running on the same node. To this end, we want to add an initContainer that takes the name of the node the pod is running on and writes it to a location our configuration can read.

This has been applied and tested in staging via gitlab-com/gl-infra/k8s-workloads/gitlab-com!832 (merged) and gitlab-com/gl-infra/k8s-workloads/gitlab-com!833 (merged)
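As a minimal sketch of the initContainer pattern described above, assuming the node name is surfaced via the Kubernetes Downward API and written to a shared `emptyDir` volume (the image, file path, and names below are illustrative, not the actual gitlab-com manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webservice-example
spec:
  initContainers:
    - name: write-node-name
      image: busybox:1.32
      env:
        # Downward API: the name of the node this pod was scheduled onto.
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      # Write the node name where the main container's config can read it.
      command: ["sh", "-c", "echo \"$NODE_NAME\" > /shared/node-name"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: webservice
      image: registry.example.com/webservice:latest  # hypothetical image
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      emptyDir: {}
```

The application container can then read `/shared/node-name` and direct its consul requests at that node, rather than at a cluster-wide service address.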
## Change Details
- Services Impacted - git + git-https, sidekiq, websockets
- Change Technician - @ggillies
- Change Criticality - C2
- Change Type - change::unscheduled
- Change Reviewer - @skarbek
- Due Date - 2020-05-04 23:59 UTC
- Time tracking - 120 minutes
- Downtime Component - N/A
## Detailed steps for the change

### Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 2 minutes
- Set label ~"change::in-progress" on this issue
- Open the triage dashboard and https://log.gprd.gitlab.net/goto/d1fe469fdade34e0255efee813c0eeae
### Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60 minutes
- Confirm that the kubernetes service `consul-consul-nodeport` exists in the `consul` namespace on all 4 production GKE clusters (see the sketch after this list for what this service might contain):

  ```shell
  ssh console-01-sv-gprd.c.gitlab-production.internal
  kubectl --context gke_gitlab-production_us-east1_gprd-gitlab-gke -n consul get service/consul-consul-nodeport
  kubectl --context gke_gitlab-production_us-east1-b_gprd-us-east1-b -n consul get service/consul-consul-nodeport
  kubectl --context gke_gitlab-production_us-east1-c_gprd-us-east1-c -n consul get service/consul-consul-nodeport
  kubectl --context gke_gitlab-production_us-east1-d_gprd-us-east1-d -n consul get service/consul-consul-nodeport
  ```
- Merge and apply MR gitlab-com/gl-infra/k8s-workloads/gitlab-com!834 (merged)
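For context, a hedged sketch of what a node-local `NodePort` service in front of the consul daemonset could look like; the selector label, ports, and `nodePort` value are assumptions rather than the actual chart output:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-consul-nodeport
  namespace: consul
spec:
  type: NodePort
  # With "Local", traffic arriving on a node's port is only routed to
  # consul pods on that same node, never proxied to another node's pod.
  externalTrafficPolicy: Local
  selector:
    app: consul         # assumed daemonset pod label
  ports:
    - name: http
      port: 8500        # consul HTTP API
      targetPort: 8500
      nodePort: 32500   # assumed; must fall in the 30000-32767 range
```

Combined with the node name written by the initContainer, each webservice/sidekiq pod can address `<its-node>:<nodePort>` and reach only the consul client on its own node.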
### Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 10
- Confirm that the number of connections to the primary/secondary has not changed significantly by looking at the pgbouncer connection metric listed in the Monitoring section below
- Watch https://log.gprd.gitlab.net/goto/d1fe469fdade34e0255efee813c0eeae to confirm there are no error messages above the levels we were seeing before
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 60
- Create a rollback MR for gitlab-com/gl-infra/k8s-workloads/gitlab-com!834 (merged) and apply it
## Monitoring

### Key metrics to observe
- Metric: `pgbouncer_pools_client_active_connections{fqdn=~"patroni.*gprd.c.gitlab-production.internal",database="gitlabhq_production",user="gitlab"}`
- Location: Thanos PromQL
- What changes to this metric should prompt a rollback: growth in connections to the master while connections to all slaves go down
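One illustrative way to watch for this in Thanos is to group the metric per patroni host, making a shift of connections from the replicas onto the master visible at a glance; this is a sketch, not an official dashboard query:

```promql
sum by (fqdn) (
  pgbouncer_pools_client_active_connections{fqdn=~"patroni.*gprd.c.gitlab-production.internal",database="gitlabhq_production",user="gitlab"}
)
```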
## Summary of infrastructure changes

- Does this change introduce new compute instances? No
- Does this change re-size any existing compute instances? No
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc? No
## Changes checklist

- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- There are currently no active incidents.