# Reindex top 25 indexes on registry DB with the highest absolute bloat/size

**Production Change**

## Change Summary
In https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/39 we identified that index bloat on the registry DB has increased substantially. We believe most of the increase happened during Phase 2 of the registry migration (gitlab-org&5523 (closed)), during which the database ingested a large volume of data over several months.
As part of the investigation, we created a list with details about all indexes, extracted with pgstatindex (link). We then reindexed all indexes for which pgstatindex reported fragmentation above 50%, split into two batches and change requests: #7998 (closed) and #8045 (closed).
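For context, the fragmentation figures referenced above come from the `pgstatindex()` function in PostgreSQL's `pgstattuple` extension. A minimal sketch of how such figures can be gathered (the index name here is just an illustrative example from the list below):

```sql
-- Requires the pgstattuple contrib extension
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- leaf_fragmentation is the percentage used for the >50% cutoff
-- in the two earlier change requests
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('partitions.layers_p_24_pkey');
```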
After digesting the results, we found that the reported bloat did not decrease as much as expected. After further discussion (link), we identified that the index bloat metrics we use for monitoring rely on a different estimation mechanism than pgstatindex.
This change request is to reindex the top 25 indexes by absolute bloat/size, based on the estimation algorithm that we use for monitoring/alerting on index bloat. The top 100 list can be found here. Looking at a production clone, these indexes range from 1.1 GB to 3.3 GB. They are far larger than the ones from the previous two CRs (the largest was 758 MB), which is why I propose targeting a smaller batch to avoid a long-running change. There will be at least one more CR to tackle the remaining indexes.
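The size figures quoted above can be sanity-checked on a production clone with a query along these lines (a sketch, not the exact query used for the top-100 list; the schema filter is an assumption based on the index names in this CR):

```sql
-- List the largest registry indexes by on-disk size
-- (sketch; adjust the schema filter as needed)
SELECT schemaname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE schemaname IN ('public', 'partitions')
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 25;
```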
Besides bringing down the index bloat metric, we also identified that most of the indexes in this set belong to the layers table partitions, for which we recently saw evidence of query slowdowns that were greatly improved by reindexing the affected indexes in a test environment (gitlab-org/container-registry#779 (closed)). So this change should help with performance as well.
## Change Details
- Services Impacted - ~"Service::Container Registry"
- Change Technician - @alexander-sosna
- Change Reviewer - @stomlinson
- Time tracking - 45 minutes
- Downtime Component - NA
### Detailed steps for the change

#### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 40 minutes
- [ ] Set label ~"change::in-progress": `/label ~change::in-progress`
- [ ] Execute the following script on the production database:

  ```sql
  -- enable timing
  \timing

  -- capture the current time
  SELECT now();

  -- capture index size before reindexing
  \di+ partitions.layers_p_24_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_24_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_33_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_33_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_6_top_level_namespace_id_repository_id_manifest_id_key
  \di+ partitions.layers_p_6_top_level_namespace_id_repository_id_id_digest_key
  \di+ public.pk_gc_tmp_blobs_manifests
  \di+ partitions.layers_p_10_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_49_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_49_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_10_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_11_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_11_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.repository_blobs_p_24_top_level_namespace_id_repository_id__key
  \di+ partitions.layers_p_53_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_53_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_35_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_35_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_40_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.gc_blobs_layers_p_55_pkey
  \di+ partitions.gc_blobs_layers_p_55_digest_layer_id_key
  \di+ partitions.layers_p_40_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_24_digest_idx
  \di+ partitions.layers_p_24_pkey
  \di+ partitions.layers_p_61_top_level_namespace_id_repository_id_id_digest_key

  -- reindex
  REINDEX INDEX CONCURRENTLY partitions.layers_p_24_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_24_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_33_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_33_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_6_top_level_namespace_id_repository_id_manifest_id_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_6_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY public.pk_gc_tmp_blobs_manifests;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_10_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_49_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_49_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_10_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_11_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_11_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.repository_blobs_p_24_top_level_namespace_id_repository_id__key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_53_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_53_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_35_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_35_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_40_top_level_namespace_id_repository_id_id_digest_key;
  REINDEX INDEX CONCURRENTLY partitions.gc_blobs_layers_p_55_pkey;
  REINDEX INDEX CONCURRENTLY partitions.gc_blobs_layers_p_55_digest_layer_id_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_40_top_level_namespace_id_repository_id_manifest_i_key;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_24_digest_idx;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_24_pkey;
  REINDEX INDEX CONCURRENTLY partitions.layers_p_61_top_level_namespace_id_repository_id_id_digest_key;

  -- capture the current time
  SELECT now();

  -- capture index size after reindexing
  \di+ partitions.layers_p_24_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_24_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_33_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_33_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_6_top_level_namespace_id_repository_id_manifest_id_key
  \di+ partitions.layers_p_6_top_level_namespace_id_repository_id_id_digest_key
  \di+ public.pk_gc_tmp_blobs_manifests
  \di+ partitions.layers_p_10_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_49_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_49_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_10_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_11_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_11_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.repository_blobs_p_24_top_level_namespace_id_repository_id__key
  \di+ partitions.layers_p_53_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_53_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_35_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_35_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.layers_p_40_top_level_namespace_id_repository_id_id_digest_key
  \di+ partitions.gc_blobs_layers_p_55_pkey
  \di+ partitions.gc_blobs_layers_p_55_digest_layer_id_key
  \di+ partitions.layers_p_40_top_level_namespace_id_repository_id_manifest_i_key
  \di+ partitions.layers_p_24_digest_idx
  \di+ partitions.layers_p_24_pkey
  \di+ partitions.layers_p_61_top_level_namespace_id_repository_id_id_digest_key
  ```
- [ ] Set label ~"change::complete": `/label ~change::complete`
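While the `REINDEX INDEX CONCURRENTLY` statements above are running, their progress can be observed from a second session via the `pg_stat_progress_create_index` view (available since PostgreSQL 12, which `REINDEX CONCURRENTLY` also requires). A minimal sketch:

```sql
-- Track running (RE)INDEX operations from a separate session;
-- phase and blocks_done/blocks_total indicate how far along each build is
SELECT pid, index_relid::regclass AS index, command, phase,
       blocks_done, blocks_total
FROM pg_stat_progress_create_index;
```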
## Rollback
Not applicable.
## Monitoring

### Key metrics to observe
- Metric: Registry DB SLI Apdex
- Location: https://dashboards.gitlab.net/d/registry-main/registry-overview?orgId=1&viewPanel=3734296734
- What changes to this metric should prompt a rollback: While there are no rollback steps for this change, execution of the script should be halted if a noticeable dip in the Apdex is observed.
## Change Reviewer checklist
Check if the following applies:

- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:

- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist
Check if all items below are complete:

- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and link this issue, then await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- Release managers have been informed prior to the change being rolled out, if needed (cases include DB changes). (In the #production channel, mention @release-managers and link this issue, then await their acknowledgment.)
- There are currently no active incidents that are ~severity1 or ~severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.