2022-11-07: [GSTG] Deploy & migrate Consul to k8s
Production Change
Change Summary
We're currently running a Consul server cluster on a very old version (1.7.2) of Consul on physical VMs in each of our environments (except pre, which has already been migrated to k8s).
This CR is the result of the testing & planning we've carried out in the db-benchmarking and pre environments: deploy the latest version (of both the consul-k8s chart and Consul itself) in k8s, migrate both VM and k8s clients across to the new cluster, and decommission the Consul server cluster running on VMs, while minimizing impact throughout the process.
End result of this CR:
- Consul server cluster (5 replicas) running version `1.13.3` in k8s (and latest chart version)
- Consul clients in k8s on version `1.13.3`
- Consul clients on VMs on version `1.13.3`, with their retry-join set to the Consul cluster in k8s
- Consul server cluster on VMs decommissioned
- Rails app using a k8s service for DNS resolution (instead of `localhost:8600`)
Epic: &789 (closed)
Issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16268
Change Details
- Services Impacted - ~"Service::Consul"
- Change Technician - @gsgl
- Change Reviewer - @f_santos
- Time tracking - ~4 hours
- Downtime Component - none
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - ~240 (per the time tracking estimate above)
- Set label ~change::in-progress: `/label ~change::in-progress`
- Deploy a new Consul server cluster in k8s with the latest chart (`0.49.0`) and Consul image `1.7.2` (MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1222 (merged))
  - NOTE: `ui` needs to be disabled, otherwise the chart defines `ui_config`, which is not valid in 1.7.x (Consul will fail to start)
  - NOTE: `client` also needs to be disabled, otherwise its ports will clash with the existing `consul` release
  - Set replicas to `5` until we've finished the upgrade process
- On one of the Consul VM server nodes, take a snapshot: `consul snapshot save <file>`
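  To sanity-check the snapshot before moving on, `consul snapshot inspect` prints its metadata; a minimal sketch (the file name is illustrative):

  ```shell
  # Take a point-in-time snapshot of the Consul raft state
  consul snapshot save pre-migration.snap

  # Verify the snapshot is readable and shows a sane index/term before proceeding
  consul snapshot inspect pre-migration.snap
  ```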
- Have the new k8s cluster join the existing VM server cluster - on one of the Consul VM server nodes, run: `consul join consul-internal.<env>.gke.gitlab.net`
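  After the join, it's worth confirming the k8s server pods appear as alive members of the combined cluster; a minimal sketch (the exact pod naming is an assumption based on the release name):

  ```shell
  # The k8s server pods should now show up alongside the VM servers,
  # with status "alive" and type "server"
  consul members | grep server
  ```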
- Update the start_join list on the VMs so they can use the exposed service (`consul-internal.<env>.gke.gitlab.net`) on the Consul k8s cluster (example MR) - MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2467
- Update `client.join:` in helmfile values to `consul-internal.<env>.gke.gitlab.net` instead of sourcing the value from a Chef roles file (MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1226 (merged))
- Clean up failed hosts: `consul members | grep failed | awk '{print $1}' | xargs -I{} consul force-leave -prune {}`
- Get the Consul member list and peer list and save them to files:
  - `consul members | tail -n+2 | sort -k 1 > consul-members.list`
  - `consul operator raft list-peers | tail -n+2 | sort -k 1 > consul-peers.list`
- On each Consul server VM:
  - Disable Chef
  - Shut down Consul (`consul leave`)
  - On another Consul host, ensure it is listed as `left` in `consul members`
  - On another Consul host: `consul force-leave -prune <consul host that was shut down>`
  - Disable Consul to ensure it doesn't start up again: `sudo systemctl disable consul`
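  A consolidated sketch of the per-node sequence (`NODE` is the Consul node name of the VM being retired; the Chef-disable mechanism varies, so it's left as a comment):

  ```shell
  # On the VM being retired:
  # 1. Disable Chef first so it can't restart Consul mid-migration
  consul leave                   # gracefully leave the cluster
  sudo systemctl disable consul  # ensure it won't start again on reboot

  # From another Consul host:
  consul members | grep "$NODE"      # confirm the node is listed as "left"
  consul force-leave -prune "$NODE"  # then prune it from the member list
  ```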
- Verify the Consul member list matches the one previously gathered, minus the Consul server VMs (see the sketch below).
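  A quick way to check this against the list captured earlier; a minimal sketch (assumes `consul-members.list` from the earlier step, and that the retired server VMs are the only expected difference):

  ```shell
  # Regenerate the member list in the same shape as the earlier capture
  consul members | tail -n+2 | sort -k 1 > consul-members.now

  # The only lines unique to the old list should be the decommissioned server VMs
  diff consul-members.list consul-members.now
  ```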
- Take a snapshot: `consul snapshot save pre-consul-upgrade-1-8-19.snap`
- Bump k8s Consul server version to `1.8.19` (see docs) - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1231 (merged)
- Take a snapshot: `consul snapshot save pre-consul-upgrade-1-10-12.snap`
- Bump k8s Consul server version to `1.10.12` - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1232 (merged)
- Take a snapshot: `consul snapshot save pre-consul-upgrade-1-12-6.snap`
- Bump k8s Consul server version to `1.12.6` - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1233 (merged)
- Take a snapshot: `consul snapshot save pre-consul-upgrade-1-13-3.snap`
- Bump k8s Consul server version to `1.13.3`
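  Between bumps it's worth waiting for the rollout to finish and re-checking raft health before taking the next snapshot; a minimal sketch, assuming the servers run as a StatefulSet named `consul-gl-consul-server` in the `consul` namespace (both names are assumptions):

  ```shell
  # Wait for the bumped servers to roll out one by one
  kubectl -n consul rollout status statefulset/consul-gl-consul-server

  # All peers should be back as voters with a single leader before the next bump
  consul operator raft list-peers
  ```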
- Increase `server.replicas` to `10`
- Enable the Consul UI
- Update the app DB load balancing config to temporarily use the nameserver `consul-dns.gstg.gke.gitlab.net:53` (UDP) for DNS resolution
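  To confirm the new nameserver answers before switching the app over, a minimal sketch (the `master.patroni.service.consul` record is an illustrative example of a DB load-balancing lookup, not necessarily the exact name the app uses):

  ```shell
  # Query the temporary k8s-exposed Consul DNS endpoint over UDP
  dig @consul-dns.gstg.gke.gitlab.net -p 53 master.patroni.service.consul +short
  ```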
- Remove the `consul` release - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1239 (merged)
- Remove failed/left clients: `consul members | grep -E "failed|left" | awk '{print $1}' | xargs -tI{} consul force-leave -prune {}`
- Enable `client` in the `consul-gl` release - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1240 (merged)
- Update the app DB load balancing config to use the nameserver `consul-gl-consul-dns.consul.svc.cluster.local:53` for DNS resolution - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2286 (merged)
- Reduce `server` replicas down to `5` manually (one at a time), then file an MR to make it official - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1244 (merged)
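  A sketch of scaling down one replica at a time, letting raft settle in between (again assuming the StatefulSet is named `consul-gl-consul-server` in the `consul` namespace):

  ```shell
  # Repeat for 9, 8, 7, 6, 5 - one replica at a time
  kubectl -n consul scale statefulset/consul-gl-consul-server --replicas=9

  # Before the next step down, confirm the removed peer is gone and a leader is present
  consul operator raft list-peers
  ```

  The follow-up MR then aligns the declared `server.replicas` with reality so the next helmfile apply doesn't scale the cluster back up.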
- Bump all Consul clients on VMs to `1.13.3` (MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2468)
- Ensure this change is deployed to Chef and the Chef client runs on the VMs (or wait ~1hr): `knife ssh -C 5 'roles:gstg-base' 'sudo chef-client'`
- Cycle Consul on Patroni clusters. For each Patroni cluster (main, CI, registry):
  - Disable Chef on all Patroni nodes (otherwise it will remove the cluster from maintenance mode when it runs)
  - Put the Patroni cluster in maintenance mode to avoid failovers when cycling Consul: `sudo gitlab-patronictl pause --wait <cluster name>`
  - One node at a time, cycle Consul: `sudo systemctl restart consul`
  - Once all nodes are finished, unpause the cluster: `sudo gitlab-patronictl resume --wait <cluster name>`
  - Enable Chef on all Patroni nodes
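  A consolidated sketch of the per-cluster sequence (`CLUSTER` is one of the Patroni cluster names; the hostname-based check assumes Consul node names match hostnames):

  ```shell
  # With Chef disabled on all nodes of the cluster:
  sudo gitlab-patronictl pause --wait "$CLUSTER"   # maintenance mode: no failovers

  # On each Patroni node, one at a time:
  sudo systemctl restart consul
  consul members | grep "$(hostname)"              # confirm the agent rejoined as "alive"

  # Once every node has been cycled:
  sudo gitlab-patronictl resume --wait "$CLUSTER"  # leave maintenance mode
  ```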
- Cycle non-Patroni Consul agents (the Consul recipe installs Consul but doesn't restart it)
- Lower the Consul client reconnect timeout so that failed clients get reaped quicker (this requires Consul >= 1.9) - MR: gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles!1251 (merged)
- Update config-mgmt to destroy the Consul VM cluster - MR: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4526
- Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - depends on the step reached (see below)
It depends on which step we're at when the rollback is initiated, but since we take a Consul snapshot before every version upgrade, we can downgrade versions and restore the data from the matching snapshot.
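For the version-downgrade case, a minimal sketch of restoring from one of the snapshots taken above (the file name is illustrative; the servers must be rolled back to the matching version first):

```shell
# After rolling the server version back, restore the snapshot taken before that upgrade
consul snapshot restore pre-consul-upgrade-1-10-12.snap
```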
Monitoring
Key metrics to observe
- Metric: Platform Triage - Apdex and Error Ratios
  - Location: https://dashboards.gitlab.net/d/general-triage/general-platform-triage?orgId=1
  - What changes to this metric should prompt a rollback: a significant drop in apdex and/or a significant rise in error ratio.
- Metric: Patroni - Apdex and Error Ratios in Platform Triage
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1
  - What changes to this metric should prompt a rollback: a significant drop in apdex and/or a significant rise in error ratio.
- Metric: Sidekiq Overview - Agg Queue Length
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview
  - What changes to this metric should prompt a rollback: queues growing and not coming down, even after restarting the Sidekiq deployments.
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.