2022-11-21: [GPRD] Upgrade Consul agents in k8s to 1.13.3 (3/5)
Production Change
Change Summary
We're currently running a Consul server cluster on physical VMs using a very old, unsupported version of Consul (1.7.2). This CR is one of several whose end goal is to deploy a Consul server cluster in k8s on the latest version of Consul (using the latest official consul-k8s chart), decommission the Consul server cluster running on VMs, and upgrade/migrate all Consul clients running in k8s and on VMs.
We have carried out this process in staging (see CR). It was largely successful with a few lessons learnt along the way that will hopefully make the production deploy even smoother.
This CR aims to:
- Increase server replicas to `10` (to handle DNS traffic while we destroy/create Consul clients)
- Update app config to use a temporary Consul DNS iLB backed by Consul server agents
- Remove the `consul` release
- Enable `client` in the `consul-gl` release
- Update app config to use the DNS SVC (ie. local node Consul agent, or another if unavailable)
- Reduce server replicas to `5`
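The replica and client toggles above correspond to values in the official consul-k8s Helm chart. As a hedged illustration only (key names are taken from the public HashiCorp chart; the real values live in the gitlab-helmfiles MRs linked in the change steps):

```yaml
# Illustrative consul-k8s chart values - NOT copied from the actual
# gitlab-helmfiles release; see the linked MRs for the real configuration.
server:
  replicas: 10   # scaled up to absorb DNS traffic during the migration
client:
  enabled: true  # enabled later, in the consul-gl release step
dns:
  enabled: true  # exposes the Consul DNS Service used for DB load balancing
```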
Issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16268
Epic: &844 (closed)
Change Details
- Services Impacted - ~"Service::Consul" ~"Service::Web" ~"Service::Websockets" ~"Service::Sidekiq" ~"Service::Git" ~"Service::API"
- Change Technician - @gsgl
- Change Reviewer - @f_santos
- Time tracking - 1.5hr
- Downtime Component - none
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 90min
- Set label ~"change::in-progress" (`/label ~change::in-progress`)
- Increase `server.replicas` to `10` - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1340 (merged)
- Update app DB load balancing config to temporarily use the nameserver `consul-dns.gprd.gke.gitlab.net:53` (TCP) for DNS resolution - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2333 (merged)
- Remove the `consul` release - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1341 (merged)
- Enable `client` in the `consul-gl` release - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1342 (merged)
- Update app DB load balancing config to use the nameserver `consul-gl-consul-dns.consul.svc.cluster.local:53` (TCP) for DNS resolution
- Reduce `server` replicas down to `5` manually (one at a time), then file an MR to make it official
- Deploy Consul `1.13.3` on VMs by updating the `<env>-base.json` role in Chef (this won't actually cycle Consul - only install it) - MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2508
- Set label ~"change::complete" (`/label ~change::complete`)
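Before reducing replicas one at a time, it helps to confirm the expected number of alive server agents. A minimal sketch, assuming the standard `consul members` output columns - the sample data below is fabricated for illustration; in production you would pipe the output of the real command through the same filter:

```shell
# In production, replace the sample with real output, e.g. (illustrative pod
# name): kubectl exec consul-gl-consul-server-0 -- consul members
sample='Node       Address        Status  Type    Build   Protocol  DC
consul-0   10.0.0.1:8301  alive   server  1.13.3  2         gprd
consul-1   10.0.0.2:8301  alive   server  1.13.3  2         gprd
consul-2   10.0.0.3:8301  alive   client  1.13.3  2         gprd'

# Count members that are both alive and servers (columns 3 and 4).
alive_servers=$(printf '%s\n' "$sample" | awk '$3 == "alive" && $4 == "server"' | wc -l)
echo "alive servers: $alive_servers"
```

Run between each replica reduction and confirm the count matches the intended replica total before proceeding to the next one.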
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
This is the full list of reverts; which ones apply depends on the stage of the rollout at which the rollback is performed:
- Revert the change to the app DB load balancing config (ie. the 4x `nameserver` settings in https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/master/releases/gitlab/values/gprd.yaml.gotmpl)
- Revert the `consul` release uninstall (revert MR: xxx)
- Revert the change that enabled clients in the `consul-gl` release (revert MR: xxx)
- Set label ~"change::aborted" (`/label ~change::aborted`)
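For orientation, the `nameserver` settings being reverted take roughly the following shape in the Rails DB load-balancing service-discovery config. This is an illustrative sketch only (key names follow GitLab's documented `load_balancing.discover` options; the record name is hypothetical - consult `gprd.yaml.gotmpl` for the real structure):

```yaml
# Illustrative only - NOT copied from gprd.yaml.gotmpl.
load_balancing:
  discover:
    nameserver: consul-gl-consul-dns.consul.svc.cluster.local
    port: 53
    use_tcp: true
    record: db-replica.service.consul.  # hypothetical service record
```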
Monitoring
Key metrics to observe
- Metric: Platform Triage - Apdex and Error Ratios
  - Location: https://dashboards.gitlab.net/d/general-triage/general-platform-triage?orgId=1
  - What changes to this metric should prompt a rollback: a significant drop in apdex and/or a significant rise in the error ratio.
- Metric: Patroni - Apdex and Error Ratios in Platform Triage
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1
  - What changes to this metric should prompt a rollback: a significant drop in apdex and/or a significant rise in the error ratio.
- Metric: Sidekiq Overview - Agg Queue Length
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview
  - What changes to this metric should prompt a rollback: queues growing and not coming down, and restarting Sidekiq deployments doesn't help.
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~"severity::1" or ~"severity::2".
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.