Reindex index_notes_on_note_trigram on notes table
Production Change
Change Summary
We would like to reindex the index_notes_on_note_trigram index on the notes table. The index is bloated (gitlab-org/gitlab!61430 (comment 576084730)) and can lead to some very slow inserts and updates (gitlab-org/gitlab#218410 (comment 569072680)). These inserts/updates frequently time out, 3,700+ over the last 7 days (https://log.gprd.gitlab.net/goto/83e03b61e3fffb5552a4dc3ea9f4034d), and appear frequently in the Postgres slow logs, 16,000+ over the last 7 days (https://log.gprd.gitlab.net/goto/9f4b966f26ad2eb815e1b470d52606ee). After reindexing, these numbers should drop significantly, reducing load on the database and improving the performance of workers and APIs that update or insert notes.
The original idea was to do this in a migration: gitlab-org/gitlab!61430 (merged). However, this is a GIN index, and creating it on a large table (with some large records in the notes rows) took a very long time. This blocked VACUUM and could have caused the database to shut down due to transaction ID wraparound: #4633 (closed)
The plan is to create a new identical index, swap the two indexes, then drop the old one. This must be done during a period of lower activity, as it may take more than a day to complete, and we need to monitor performance as identified in this production issue (#4633 (closed)).
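For reference, this create/swap/drop sequence is what `REINDEX CONCURRENTLY` performs internally. A manual sketch of the equivalent steps is below, assuming the index is the standard trigram GIN index on `notes.note` (the change itself uses the single `REINDEX` command in the Change Steps, not these statements):

```sql
-- Sketch only: manual equivalent of the swap that REINDEX CONCURRENTLY
-- performs. Assumes the index definition is gin (note gin_trgm_ops).

-- 1. Build a fresh, identical index without blocking writes.
CREATE INDEX CONCURRENTLY index_notes_on_note_trigram_new
  ON public.notes USING gin (note gin_trgm_ops);

-- 2. Drop the bloated original without blocking writes.
DROP INDEX CONCURRENTLY public.index_notes_on_note_trigram;

-- 3. Rename the new index to take the original's place.
ALTER INDEX public.index_notes_on_note_trigram_new
  RENAME TO index_notes_on_note_trigram;
```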
Change Details
- Services Impacted - Service::Postgres
- Change Technician - DRI for the execution of this change
- Change Criticality - C2
- Change Type - change::unscheduled
- Change Reviewer - @Finotto
- Due Date - Date and time (in UTC) for the execution of the change
- Time tracking - 1140 minutes (19 hours)
- Downtime Component - 0
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 1
- Set label change::in-progress on this issue
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 1020 (17 hours)
- `ssh patroni-v12-02-db-gprd.c.gitlab-production.internal`
- `sudo su`
- `tmux`
- `gitlab-psql`
- Run:

  ```sql
  SET statement_timeout TO 0;
  SET maintenance_work_mem = '10GB';
  REINDEX (VERBOSE) INDEX CONCURRENTLY public.index_notes_on_note_trigram;
  ```
- You can monitor the progress of the reindex by executing the following snippet: https://gitlab.com/-/snippets/2138417
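The linked snippet can be approximated with a query against PostgreSQL 12's built-in progress view (a sketch; the column selection here is illustrative, not the snippet's exact query):

```sql
-- Progress of the concurrent reindex (pg_stat_progress_create_index
-- also reports REINDEX CONCURRENTLY; available since PostgreSQL 12).
SELECT
  p.phase,
  p.blocks_done,
  p.blocks_total,
  round(100.0 * p.blocks_done / nullif(p.blocks_total, 0), 1) AS pct_done,
  now() - a.query_start AS elapsed
FROM pg_stat_progress_create_index p
JOIN pg_stat_activity a ON a.pid = p.pid;
```

The `blocks_done`/`blocks_total` ratio is only meaningful within the current phase; expect `phase` to move through several stages (building, validating, swapping) before the command completes.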
Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 5
- Run `explain ( analyze, buffers ) select gin_clean_pending_list( 'index_notes_on_note_trigram'::regclass );` and note the results here
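As an additional verification step, the on-disk size of the rebuilt index can be checked and compared against its pre-change size (a hypothetical check, not part of the original runbook; bloat should be substantially reduced):

```sql
-- Size of the rebuilt index; compare with the value recorded
-- before the change to confirm the bloat was reclaimed.
SELECT pg_size_pretty(
  pg_relation_size('public.index_notes_on_note_trigram'::regclass)
) AS index_size;
```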
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 120
If the reindex completes, there is no rollback. If there is a need to stop the reindex before completion:
- Stop the `REINDEX` command running in the tmux session
- Drop any leftover invalid index (an interrupted `REINDEX CONCURRENTLY` can leave behind an invalid index with a `_ccnew` suffix)
- Trigger `VACUUM VERBOSE` on the tables with the largest dead tuple accumulation
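The tables with the largest dead tuple accumulation can be identified with a query like the following (a sketch using the standard statistics view):

```sql
-- Top candidates for a manual VACUUM, ordered by dead tuple count.
SELECT
  relname,
  n_dead_tup,
  n_live_tup,
  round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
    AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```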
Monitoring
Key metrics to observe
- Metric: Dead tuple percentage
- Location: https://dashboards.gitlab.net/d/000000167/postgresql-tuple-statistics?viewPanel=17&orgId=1&refresh=1m&var-env=gprd&var-instance=patroni-v12-02-db-gprd.c.gitlab-production.internal&var-db=gitlabhq_production&var-top_dead_tup=All
- What changes to this metric should prompt a rollback: Over 75%
Summary of infrastructure changes
- Does this change introduce new compute instances? No
- Does this change re-size any existing compute instances? No
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc? No
Summary of the above
Changes checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- There are currently no active incidents.