# Migrate prometheus VMs to use SSD data disks

**Production Change**

## Change Summary
Originating Issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16827

Update the data disks for the prometheus VMs in GPRD from standard (`pd-standard`) to SSD (`pd-ssd`) disks.
## Change Details

- Services Impacted - ~"Service::Prometheus"
- Change Technician - @cmcfarland
- Change Reviewer - @f_santos
- Time tracking - 120 minutes
- Downtime Component - N/A
## Detailed steps for the change

### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 60

- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Merge the TF change for the data disk type.

#### prometheus-01-inf-gprd.c.gitlab-production.internal
- [ ] Stop `prometheus-01-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances stop prometheus-01-inf-gprd --zone=us-east1-c
  ```

- [ ] Create a snapshot of the data disk:

  ```
  gcloud --project gitlab-production compute snapshots create cr8066-1 --source-disk=prometheus-01-inf-gprd-data --source-disk-zone=us-east1-c
  ```

- [ ] Detach and delete the data disk from `prometheus-01-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances detach-disk prometheus-01-inf-gprd --disk=prometheus-01-inf-gprd-data --zone=us-east1-c
  gcloud --project gitlab-production compute disks delete prometheus-01-inf-gprd-data --zone=us-east1-c
  ```

- [ ] Create a new SSD disk from the snapshot:

  ```
  gcloud --project gitlab-production compute disks create prometheus-01-inf-gprd-data --zone=us-east1-c --size=4000 --source-snapshot=cr8066-1 --type=pd-ssd
  ```

- [ ] Attach the new disk to `prometheus-01-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances attach-disk prometheus-01-inf-gprd --disk=prometheus-01-inf-gprd-data --zone=us-east1-c
  ```

- [ ] Boot the VM and verify the disk is mounted and prometheus is working properly:

  ```
  gcloud --project gitlab-production compute instances start prometheus-01-inf-gprd --zone=us-east1-c
  ```

- [ ] Check Terraform state and import the new disk:
  - [ ] Run a `tf plan` for GPRD.
  - [ ] Import the new disk to replace the missing disk:

    ```
    tf import "module.prometheus.google_compute_disk.default[0]" projects/gitlab-production/zones/us-east1-c/disks/prometheus-01-inf-gprd-data
    ```

  - [ ] Run a `tf plan` for GPRD and verify no changes are expected.
- [ ] Verify prometheus in GPRD is still working: https://prometheus.gprd.gitlab.net/
- [ ] Verify that the GCP load balancer for prometheus shows the VM as healthy.
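Part of verifying the migration is confirming the recreated disk really is `pd-ssd`. The disk type can be read with `gcloud compute disks describe DISK --zone=ZONE --format='value(type)'`, which prints a URL ending in the type name. A minimal sketch of a check around that output (the helper name is invented for illustration):

```shell
# Illustrative helper: given the disk-type URL printed by
#   gcloud compute disks describe DISK --zone=ZONE --format='value(type)'
# (e.g. .../zones/us-east1-c/diskTypes/pd-ssd), report whether it is pd-ssd.
is_pd_ssd() {
  case "$1" in
    */pd-ssd) return 0 ;;   # URL ends in the disk type
    *)        return 1 ;;
  esac
}

# Example with a typical describe output:
is_pd_ssd "https://www.googleapis.com/compute/v1/projects/gitlab-production/zones/us-east1-c/diskTypes/pd-ssd" \
  && echo "disk is pd-ssd"
```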
#### prometheus-02-inf-gprd.c.gitlab-production.internal
- [ ] Stop `prometheus-02-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances stop prometheus-02-inf-gprd --zone=us-east1-d
  ```

- [ ] Create a snapshot of the data disk:

  ```
  gcloud --project gitlab-production compute snapshots create cr8066-2 --source-disk=prometheus-02-inf-gprd-data --source-disk-zone=us-east1-d
  ```

- [ ] Detach and delete the data disk from `prometheus-02-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances detach-disk prometheus-02-inf-gprd --disk=prometheus-02-inf-gprd-data --zone=us-east1-d
  gcloud --project gitlab-production compute disks delete prometheus-02-inf-gprd-data --zone=us-east1-d
  ```

- [ ] Create a new SSD disk from the snapshot:

  ```
  gcloud --project gitlab-production compute disks create prometheus-02-inf-gprd-data --zone=us-east1-d --size=4000 --source-snapshot=cr8066-2 --type=pd-ssd
  ```

- [ ] Attach the new disk to `prometheus-02-inf-gprd.c.gitlab-production.internal`:

  ```
  gcloud --project gitlab-production compute instances attach-disk prometheus-02-inf-gprd --disk=prometheus-02-inf-gprd-data --zone=us-east1-d
  ```

- [ ] Boot the VM and verify the disk is mounted and prometheus is working properly:

  ```
  gcloud --project gitlab-production compute instances start prometheus-02-inf-gprd --zone=us-east1-d
  ```

- [ ] Check Terraform state and import the new disk:
  - [ ] Run a `tf plan` for GPRD.
  - [ ] Import the new disk to replace the missing disk:

    ```
    tf import "module.prometheus.google_compute_disk.default[1]" projects/gitlab-production/zones/us-east1-d/disks/prometheus-02-inf-gprd-data
    ```

  - [ ] Run a `tf plan` for GPRD and verify no changes are expected.
- [ ] Verify prometheus in GPRD is still working: https://prometheus.gprd.gitlab.net/
- [ ] Verify that the GCP load balancer for prometheus shows the VM as healthy.
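The two host blocks above differ only in host name, zone, snapshot name, and Terraform index. As a sketch to keep the per-host command sequences consistent (the helper name is invented, and its output should be reviewed before anything is executed):

```shell
#!/bin/sh
# Illustrative generator: prints the migration command sequence for one
# host, so the per-host steps stay consistent and free of copy-paste
# drift. It only prints commands; nothing is run against GCP.
emit_migration_commands() {
  host="$1"; zone="$2"; snap="$3"; tf_index="$4"
  disk="${host}-data"
  cat <<EOF
gcloud --project gitlab-production compute instances stop $host --zone=$zone
gcloud --project gitlab-production compute snapshots create $snap --source-disk=$disk --source-disk-zone=$zone
gcloud --project gitlab-production compute instances detach-disk $host --disk=$disk --zone=$zone
gcloud --project gitlab-production compute disks delete $disk --zone=$zone
gcloud --project gitlab-production compute disks create $disk --zone=$zone --size=4000 --source-snapshot=$snap --type=pd-ssd
gcloud --project gitlab-production compute instances attach-disk $host --disk=$disk --zone=$zone
gcloud --project gitlab-production compute instances start $host --zone=$zone
tf import "module.prometheus.google_compute_disk.default[$tf_index]" projects/gitlab-production/zones/$zone/disks/$disk
EOF
}

emit_migration_commands prometheus-01-inf-gprd us-east1-c cr8066-1 0
emit_migration_commands prometheus-02-inf-gprd us-east1-d cr8066-2 1
```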
- [ ] Set label ~change::complete: `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 60
- [ ] The only known rollback is to rebuild the lost VM, or to restore the snapshot to a standard disk.
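The snapshot-restore rollback path can be sketched as follows for prometheus-01 (zone and snapshot name swap for prometheus-02). The commands are collected into a variable and printed for review rather than executed; treat this as an illustration, not a tested procedure:

```shell
# Illustrative rollback sketch: recreate the data disk from the snapshot
# as pd-standard, re-attach it, and boot. Printed for review, not executed.
rollback_cmds=$(cat <<'EOF'
gcloud --project gitlab-production compute instances stop prometheus-01-inf-gprd --zone=us-east1-c
gcloud --project gitlab-production compute disks delete prometheus-01-inf-gprd-data --zone=us-east1-c
gcloud --project gitlab-production compute disks create prometheus-01-inf-gprd-data --zone=us-east1-c --size=4000 --source-snapshot=cr8066-1 --type=pd-standard
gcloud --project gitlab-production compute instances attach-disk prometheus-01-inf-gprd --disk=prometheus-01-inf-gprd-data --zone=us-east1-c
gcloud --project gitlab-production compute instances start prometheus-01-inf-gprd --zone=us-east1-c
EOF
)
printf '%s\n' "$rollback_cmds"
```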
- [ ] Set label ~change::aborted: `/label ~change::aborted`
## Monitoring

### Key metrics to observe

- Metric: Metric Name
- Location: Dashboard URL
- What changes to this metric should prompt a rollback: Describe Changes
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The ~"blocks deployments" and/or ~"blocks feature-flags" labels are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.