# Change the schedule of automatic database reindexing to not run on Sundays

Production Change

## Change Summary
The GitLab project reindexes bloated database indexes on weekends in order to reduce their bloat.
This process is controlled by a cron script on the deployer node that calls the `gitlab:db:reindexing` job: https://ops.gitlab.net/gitlab-com/gl-infra/chef-repo/-/blob/master/roles/gprd-base-deploy-node.json#L90-L95
Before this change, the cron job runs every hour at minute 12 on Saturdays and Sundays. Each run tries to reindex up to two indexes, cancelling itself if a previous reindexing operation is still running.
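The before/after schedules come down to the cron day-of-week field. A minimal sketch of those semantics (the field values below illustrate the intent; the actual schedule lives in the Chef role linked above):

```python
# Sketch of the cron day-of-week semantics involved in this change.
# In crontab notation, day-of-week 0 = Sunday and 6 = Saturday, so a
# schedule like "12 * * * 6,0" fires hourly on Saturday and Sunday,
# while "12 * * * 6" fires only on Saturday.
# (Field values are illustrative, not copied from the role definition.)

def cron_days_of_week(expr: str) -> set[int]:
    """Return the set of day-of-week numbers a five-field cron expression matches."""
    dow_field = expr.split()[4]
    days: set[int] = set()
    for part in dow_field.split(","):
        if part == "*":
            return set(range(7))
        days.add(int(part))
    return days

before = cron_days_of_week("12 * * * 6,0")  # Saturday and Sunday
after = cron_days_of_week("12 * * * 6")     # Saturday only

print(before)  # {0, 6}
print(after)   # {6}
```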
This led to a production incident, #7852 (closed), where a lock was held by reindexing through to Monday morning, preventing vacuum and leading to dead tuple buildup and query degradation.
As a short-term mitigation for potential incidents due to long-running reindexing, change the reindexing operation to run only on Saturdays, rather than on both Saturdays and Sundays.
## Change Details
- Services Impacted - GitLab databases (both main and ci)
- Change Technician - DRI for the execution of this change
- Change Reviewer - DRI for the review of this change
- Time tracking - Time, in minutes, needed to execute all change steps, including rollback
- Downtime Component - No downtime necessary
### Detailed steps for the change
**Change Steps** - steps to take to execute the change
**Estimated Time to Complete (mins)** - estimated time to complete, in minutes

- [ ] Set label ~"change::in-progress": `/label ~change::in-progress`
- [ ] Update the cron configuration at https://ops.gitlab.net/gitlab-com/gl-infra/chef-repo/-/blob/master/roles/gprd-base-deploy-node.json#L90-L95 to only run on Saturdays
- [ ] Apply the configuration to the deployer node
- [ ] Set label ~"change::complete": `/label ~change::complete`
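The cron update itself is a small edit to the role JSON. A hypothetical sketch of the diff (attribute names and nesting here are illustrative only, not copied from the actual role file):

```diff
 "reindexing_cron": {
   "minute": "12",
   "hour": "*",
-  "weekday": "6,0"
+  "weekday": "6"
 }
```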
## Rollback
**Rollback steps** - steps to be taken in the event of a need to rollback this change
**Estimated Time to Complete (mins)** - estimated time to complete, in minutes

- [ ] If reindex actions are occurring during periods of high load due to a mistake with the cron, and we need to turn off all reindexing, begin by disabling the `database_reindexing` feature flag.
- [ ] Roll back the update to the cron configuration by reverting the MR that updated it.
- [ ] Apply the configuration to the deployer node.
- [ ] Set label ~"change::aborted": `/label ~change::aborted`
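For reference, disabling the feature flag in the first rollback step is typically done via ChatOps; a sketch of the invocation (verify the exact form against the current runbook before running it):

```plaintext
# In the #production Slack channel:
/chatops run feature set database_reindexing false
```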
## Monitoring

### Key metrics to observe
- Metric: Running reindexing actions during the weekend
  - Location: Reindexing actions can be observed in two places:
    - They appear on the Grafana Postgres CPU dashboard: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&var-prometheus=Global&var-environment=gprd&var-type=patroni&viewPanel=13&from=now-7d&to=now
    - In a search for running `REINDEX CONCURRENTLY` events in `pg_stat_statements`: https://log.gprd.gitlab.net/goto/41784f90-646d-11ed-85ed-e7557b0a598c
  - What changes to this metric should prompt a rollback:
    - Two signals could prompt a rollback:
      - More seriously, reindex actions occurring outside of the Saturday - early Sunday UTC time range, especially if they happen during business hours.
        - In this case, proceed by disabling the `database_reindexing` feature flag as described in the rollback steps.
      - Less seriously, reindex actions not occurring during the expected Saturday time window.
        - In this case, proceed by reverting the change to the cron configuration.
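The log search above amounts to filtering activity rows for actively running REINDEX statements. A minimal sketch of that filter (the row shape is illustrative, assuming `pg_stat_activity`-like dicts; in production this signal comes from the linked Kibana search and Grafana panel):

```python
# Sketch: filter activity rows for running REINDEX statements.
# Row shape is illustrative (pg_stat_activity-like dicts).

def running_reindexes(rows: list[dict]) -> list[dict]:
    """Return rows whose active query is a REINDEX statement."""
    return [
        r for r in rows
        if r.get("state") == "active"
        and r.get("query", "").lstrip().upper().startswith("REINDEX")
    ]

rows = [
    {"pid": 101, "state": "active", "query": "REINDEX INDEX CONCURRENTLY idx_foo"},
    {"pid": 102, "state": "active", "query": "SELECT 1"},
    {"pid": 103, "state": "idle", "query": "REINDEX INDEX CONCURRENTLY idx_bar"},
]
print([r["pid"] for r in running_reindexes(rows)])  # [101]
```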
- Metric: Database index bloat
  - Location: https://dashboards.gitlab.net/d/alerts-sat_pg_btree_bloat/alerts-pg_btree_bloat-saturation-detail?orgId=1
  - What changes to this metric should prompt a rollback:
    - Over the several weeks following this change, if bloat creeps up above 30%, it could indicate that the shorter reindexing duration cannot process enough indexes.
      - In this case, we should revert so that more indexes can be reindexed per weekend.
    - There is already a soft SLO of 30% bloat on this alert.
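The 30% soft SLO is a simple ratio of estimated bloat to total index size. A sketch of that check (function names and byte counts are illustrative, not taken from the alert definition):

```python
# Sketch of the bloat-percentage check behind the 30% soft SLO.
# Inputs are illustrative byte counts, not real index sizes.

BLOAT_SLO_PCT = 30.0

def bloat_pct(bloat_bytes: int, total_bytes: int) -> float:
    """Estimated bloat as a percentage of total index size."""
    if total_bytes == 0:
        return 0.0
    return 100.0 * bloat_bytes / total_bytes

def breaches_slo(bloat_bytes: int, total_bytes: int) -> bool:
    return bloat_pct(bloat_bytes, total_bytes) > BLOAT_SLO_PCT

print(breaches_slo(400, 1000))  # True: 40% exceeds the 30% SLO
print(breaches_slo(200, 1000))  # False: 20% is within the SLO
```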
## Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity1 or ~severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.