# [GSTG] - Remove redundant snapshots from staging database data volumes
Production Change

## Change Summary

Removes snapshots from the data directories of databases that are copies of other databases which are already backed up. Should the need to restore them arise, they can be restored from those other snapshots, since data is consistent across the cluster.
## Change Details
- Services Impacted - None
- Change Technician - @daveyleach
- Change Reviewer - @rhenchen.gitlab
- Time tracking - 30 minutes for terraform apply to complete
- Downtime Component - None
## Detailed steps for the change

### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30
- [ ] Set label ~change::in-progress with `/label ~change::in-progress`
- [ ] Apply the merge request https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/8642
- [ ] Once the terraform change is complete, gather a list of affected disks:

  ```shell
  gcloud compute --project gitlab-staging-1 disks list --format "value(name)" --filter='labels.pet_name~archive|delayed' > /tmp/redundant_snapshot_staging_disks
  ```
- [ ] Confirm they are the same disks as the ones found in the terraform plan for https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/8642
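One way to perform this confirmation is a sketch like the following, which compares the gathered disk list against the disk names taken from the terraform plan. This is a hypothetical illustration: the file `/tmp/terraform_plan_disks` and the disk names shown are assumptions, standing in for names copied manually out of the plan output for the MR above.

```shell
# Hypothetical illustration: in practice /tmp/redundant_snapshot_staging_disks
# comes from the gcloud command in the previous step, and /tmp/terraform_plan_disks
# would be filled by hand from the terraform plan output. The disk names below
# are placeholders, not real staging disk names.
printf '%s\n' disk-archive-01 disk-delayed-01 > /tmp/redundant_snapshot_staging_disks
printf '%s\n' disk-delayed-01 disk-archive-01 > /tmp/terraform_plan_disks

# Sort both lists so the comparison is order-insensitive, then diff them;
# diff exits non-zero and prints the divergence if the lists do not match.
sort /tmp/redundant_snapshot_staging_disks > /tmp/disks_gcloud.sorted
sort /tmp/terraform_plan_disks > /tmp/disks_plan.sorted
diff /tmp/disks_gcloud.sorted /tmp/disks_plan.sorted && echo "disk lists match"
```

If `diff` prints any lines, stop and reconcile the discrepancy before creating the deletion list.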
- [ ] Create a list of snapshots to delete and preview the deletions (the script below only echoes the delete commands):

  ```shell
  cat /tmp/redundant_snapshot_staging_disks | while read disk; do gcloud compute snapshots list --project gitlab-staging-1 --filter="sourceDisk=${disk}" --format "value(name)" | while read snapshot; do echo "gcloud compute snapshots delete --project gitlab-staging-1 ${snapshot}" ; done ; done
  ```
- [ ] Perform the deletion once you have validated that the commands look correct:

  ```shell
  cat /tmp/redundant_snapshot_staging_disks | while read disk; do gcloud compute snapshots list --project gitlab-staging-1 --filter="sourceDisk=${disk}" --format "value(name)" | while read snapshot; do gcloud compute snapshots delete --quiet --project gitlab-staging-1 ${snapshot} ; done ; done
  ```
- [ ] Set label ~change::complete with `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes

Once the snapshots have been deleted they cannot be recovered.
- [ ] Revert and apply the MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/8642
- [ ] Set label ~change::aborted with `/label ~change::aborted`
## Monitoring

### Key metrics to observe

This change only affects backups, not any running systems, so general observation of the running systems is appropriate.
- Metric: transactions_primary SLI Error Ratio
  - Location: Dashboard URL
  - What changes to this metric should prompt a rollback: error ratio > 0.6% soft SLO for more than 10 minutes.
- Metric: rails_primary_sql SLI Apdex
  - Location: Dashboard URL
  - What changes to this metric should prompt a rollback: Apdex < 99.4% soft SLO for more than 10 minutes.
- Metric: Snapshot count of the archive and delayed nodes should reduce to 0

  ```shell
  gcloud compute --project gitlab-staging-1 disks list --format "value(name)" --filter='labels.pet_name~archive|delayed' | while read disk; do gcloud compute snapshots list --project gitlab-staging-1 --filter="sourceDisk=${disk}" --format "value(name)"; done | wc -l
  ```
- Metric: Snapshot count of the non archive|delayed nodes should remain roughly the same

  ```shell
  gcloud compute --project gitlab-staging-1 disks list --format "value(name)" --filter='labels.pet_name!~archive|delayed' | while read disk; do gcloud compute snapshots list --project gitlab-staging-1 --filter="sourceDisk=${disk}" --format "value(name)"; done | wc -l
  ```
## Change Reviewer checklist

C3:

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity1 or ~severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.