Draft: [GPRD] Reduce CI database disk usage via pg_repack on group_type_ci_runner_machines
Production Change
Change Summary
pg_repack the group_type_ci_runner_machines table to mitigate high table bloat.
It's a small table (8 GiB) with a high bloat ratio (97%), so the repack should reclaim most of that space (roughly 7.5 GiB, if the bloat estimate holds).
Details are here:
gitlab-com/gl-infra/data-access/dbo/dbo-issue-tracker#582 (closed)
Change execution in GSTG: TBD
Change Details
- Services Impacted - Service::Postgres
- Change Technician - TBD
- Change Reviewer - DBRE
- Time tracking - < 1 Hour
- Downtime Component - NO
Important
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Preparation
Note
The following checklists must be done in advance, before setting the label change::scheduled
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels `blocks deployments` and/or `blocks feature-flags` are applied as necessary.
Change Technician checklist
- [ ] The Change Criticality has been set appropriately and requirements have been reviewed.
- [ ] The change plan is technically accurate.
- [ ] The rollback plan is technically accurate and detailed enough to be executed by anyone with access.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- [ ] For C1 change issues, a Senior Infrastructure Manager has provided approval with the `manager_approved` label on the issue.
- [ ] For C2 change issues, an Infrastructure Manager has provided approval with the `manager_approved` label on the issue.
  - Mention `@gitlab-org/saas-platforms/inframanagers` in this issue to request approval and provide visibility to all infrastructure managers.
- [ ] For C1, C2, or `blocks deployments` change issues, confirm with Release Managers that the change does not overlap or hinder any release process (in the #production channel, mention `@release-managers` and this issue and await their acknowledgment).
Once all checkboxes are done, mark the change request as scheduled: /label ~"change::scheduled"
Detailed steps for the change
Pre-execution steps
Note
The following steps should be done right at the scheduled time of the change request; the preparation steps above should already be complete.
- [ ] Make sure all tasks in the Change Technician checklist are done.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (Search the PagerDuty schedule for "SRE 8-hour" to find who will be on-call at the scheduled day and time. SREs on-call must be informed of plannable C1 changes at least 2 weeks in advance.)
  - [ ] The SRE on-call provided approval with the `eoc_approved` label on the issue.
- [ ] For C1, C2, or `blocks deployments` change issues, Release Managers have been informed prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are `severity::1` or `severity::2`.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Change steps - steps to take to execute the change
All steps should be executed from a console node in GPRD.
T minus 1 day
- [ ] Install the dependencies (postgresql-17-repack, libpq, and Ruby 3.1.2 via RVM) on the console node:

  ```shell
  sudo apt install gnupg2
  sudo apt install postgresql-17-repack
  sudo apt install libpq-dev
  gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
  \curl -sSL https://get.rvm.io | bash -s stable
  source ~/.rvm/scripts/rvm
  rvm install 3.1.2
  ```

- [ ] Install https://gitlab.com/gitlab-com/gl-infra/gitlab-pgrepack on the console node:

  ```shell
  cd $HOME
  git clone https://gitlab.com/gitlab-com/gl-infra/gitlab-pgrepack.git
  cd gitlab-pgrepack
  rvm use 3.1.2
  gem install bundler
  bundle install
  ```

- [ ] Edit the `config/gitlab-repack.yml` config file with the following settings. Don't forget to use the primary (writer) node of the target cluster in `host`/`-h`, and the proper password/`PGPASSWORD` in both the `database` and `repack.command` sections:

  ```yaml
  general:
    env: local
  database:
    adapter: postgresql
    host: TBD
    user: gitlab-superuser
    password: TBD
    database: gitlabhq_production
  estimate:
    ratio_threshold: 50           # bloat ratio threshold in % (set to 0 for testing)
    real_size_threshold: 10000000 # real size of object in bytes (set to 0 for testing)
    objects_per_repack: 1
  repack:
    command: PGPASSWORD=TBD pg_repack -h TBD -p 5432 -U gitlab-superuser -d gitlabhq_production --no-kill-backend
  # Optional: Grafana annotations
  grafana:
    auth_key: false # put API key here to enable
    base_url: https://dashboards.gitlab.net
  ```
Before executing pg_repack
TODO: Change bd2Kl9Imk in the dashboard urls to the actual node id.
Check the following metrics and don't execute pg_repack if any of these conditions are met:
- Metric: Leader nodes CPU load (processes per core)
  - Location: node_load
  - Condition that blocks execution: CPU load avg > 0.5 (per core) for 15 minutes or more
- Metric: Leader nodes CPU usage (% of all CPUs)
  - Location: node_cpu_utilization
  - Condition that blocks execution: avg CPU utilization > 50% for 15 minutes or more
- Metric: Leader nodes I/O throughput in MiB/s
  - Location: /dev/sdb node_disk_read_bytes_total, /dev/sdb node_disk_written_bytes_total
  - Condition that blocks execution: I/O throughput > 50% of the limit for 15 minutes or more
- Metric: Leader nodes IOPS
  - Location: /dev/sdb node_disk_reads_completed_total, /dev/sdb node_disk_writes_completed_total
  - Condition that blocks execution: IOPS > 50% of the limit for 15 minutes or more
- Metric: Leader nodes read and write latency
  - Location: [/dev/sdb node read and write latency](https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats)
  - Condition that blocks execution: latency > 1 ms, except occasional spikes
- Metric: Leader nodes wait events
  - Location: postgres_node_performance_overview
  - Condition that blocks execution: the ASH graph shows a large number (more than 5% of the vCPU count) of active sessions waiting on IO
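The wait-event condition above can also be spot-checked from the database itself rather than the dashboard. A minimal sketch using the standard `pg_stat_activity` view (the `wait_event_type` column exists on PostgreSQL 10+; run on the writer node):

```sql
-- Count active sessions currently waiting on IO;
-- compare the result against 5% of the node's vCPU count.
SELECT count(*) AS sessions_waiting_on_io
FROM pg_stat_activity
WHERE state = 'active'
  AND wait_event_type = 'IO';
```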
Execute pg_repack
Estimated Time to Complete (mins) - <1 Hour
- [ ] Set label change::in-progress: /label ~change::in-progress
- [ ] Notify @sre-oncall (TBD) and @release-managers
- [ ] Connect to any database instance in the target cluster (preferably the backup node) and execute a query to gather the before-repack bloat stats for the target table, for example:

  ```sql
  SELECT pg_size_pretty(pg_table_size(c.oid)) AS table_size,
         pg_size_pretty(pg_indexes_size(c.oid)) AS index_size,
         pg_size_pretty(pg_total_relation_size(c.reltoastrelid)) AS toast_size,
         pg_size_pretty(
           pg_table_size(c.oid) + pg_indexes_size(c.oid)
           + pg_total_relation_size(c.reltoastrelid)
         ) AS total_size
  FROM pg_class c
  WHERE c.oid = 'public.group_type_ci_runner_machines'::regclass;
  ```

- [ ] Connect to the console node where you installed gitlab-pgrepack
- [ ] In a tmux session on the console node, execute the repack with:

  ```shell
  cd ~/gitlab-pgrepack
  source ~/.rvm/scripts/rvm
  rvm use 3.1.2
  ./bin/gitlab-pgrepack repack --type=tables --objects=public.group_type_ci_runner_machines
  ```

- [ ] While the repack is running, monitor CPU usage and load, memory swapping, and I/O statistics on the writer node
- [ ] Connect to any database instance in the target cluster (preferably the backup node) and gather the after-repack bloat stats by re-running the same query used for the before-repack stats
- [ ] Set label change::complete: /label ~change::complete
- [ ] Delete the file gitlab-pgrepack/config/gitlab-repack.yml, or clean up its contents, to avoid exposing any access credentials
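The monitoring step during the repack can be complemented from the database side. A sketch, assuming pg_repack identifies its sessions via `application_name` (it sets `application_name` to `pg_repack` by default; verify on the target version):

```sql
-- Show pg_repack's backends, their wait state, and runtime
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS running_for,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE application_name = 'pg_repack';
```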
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 15 minutes
- [ ] Kill/abort the gitlab-pgrepack process
  - Check for active processes with https://gitlab.com/rhenchen.gitlab/rhenchen/-/blob/main/postgresql/USEFUL_QUERIES.md#22-active-processes
  - Use `SELECT pg_terminate_backend(pid)` to terminate the pg_repack parent process; see more at: https://www.postgresql.org/docs/17/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL
- [ ] Log into the writer Patroni node
- [ ] Clean up pg_repack temporary objects with the following SQL statements:

  ```sql
  DROP EXTENSION pg_repack CASCADE;
  CREATE EXTENSION pg_repack;
  ```

- [ ] Set label change::aborted: /label ~change::aborted
Note on how to clean up pg_repack at https://reorg.github.io/pg_repack/#diagnostics:
You need to clean up by hand after fatal errors. To clean up, just remove pg_repack from the database and install it again: execute `DROP EXTENSION pg_repack CASCADE` in the database where the error occurred, followed by `CREATE EXTENSION pg_repack;`
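Putting the termination and cleanup notes together, a rollback could look like the following sketch (assumes pg_repack sets `application_name` to `pg_repack`; double-check the pid before terminating, and replace `<pid>` with the actual value):

```sql
-- 1. Find the pg_repack backend(s)
SELECT pid, application_name, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE application_name = 'pg_repack';

-- 2. Terminate the parent process by pid
SELECT pg_terminate_backend(<pid>);

-- 3. Remove leftover temporary objects and reinstall the extension
DROP EXTENSION pg_repack CASCADE;
CREATE EXTENSION pg_repack;
```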
Monitoring
Key metrics to observe
Rollback Thresholds
TODO: Change bd2Kl9Imk in the dashboard urls to the actual node id.
- Metric: Leader nodes CPU load (processes per core)
  - Location: node_load
  - What changes to this metric should prompt a rollback: CPU load avg > 0.7 (per core) for 15 minutes or more
- Metric: Leader nodes CPU usage (% of all CPUs)
  - Location: node_cpu_utilization
  - What changes to this metric should prompt a rollback: avg CPU utilization > 70% for 15 minutes or more
- Metric: Leader nodes I/O throughput in MiB/s
  - Location: /dev/sdb node_disk_read_bytes_total, /dev/sdb node_disk_written_bytes_total
  - What changes to this metric should prompt a rollback: I/O throughput > 70% of the limit for 15 minutes or more
- Metric: Leader nodes IOPS
  - Location: /dev/sdb node_disk_reads_completed_total, /dev/sdb node_disk_writes_completed_total
  - What changes to this metric should prompt a rollback: IOPS > 70% of the limit for 15 minutes or more
- Metric: Leader nodes read and write latency
  - Location: [/dev/sdb node read and write latency](https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats)
  - What changes to this metric should prompt a rollback: latency > 1 ms, except occasional spikes
- Metric: Leader nodes wait events
  - Location: postgres_node_performance_overview
  - What changes to this metric should prompt a rollback: the ASH graph shows a large number (more than 5% of the vCPU count) of active sessions waiting on IO, Lock, or LWLock