2022-10-01: GPRD Truncate the rest of CI tables on Main and Main Tables on CI to free storage
Production Change
Change Summary
This change is for PRODUCTION. You can find the equivalent one we did in staging at #7585 (closed).
Since we finished CI decomposition in https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7111, we now have a number of big, unused tables sitting in the Main and CI databases that belong to the other database. Nothing should be reading from or writing to those tables anymore, but because of https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/81 (and likely other reasons) we want to free up this storage.
As part of #7511 (closed), we have already truncated two tables: `ci_builds_metadata` on the Main database, and `merge_request_diff_commits` on the CI database.
For this change request, we want to run the rake tasks introduced in gitlab-org/gitlab!94252 (merged):

```
gitlab:db:truncate_legacy_tables:main
gitlab:db:truncate_legacy_tables:ci
```
The following queries may be useful to diagnose why truncating a specific table is taking too long:

- Check if locks exist for a table:

  ```sql
  SELECT pid
  FROM pg_locks l
  JOIN pg_class t ON l.relation = t.oid
  WHERE t.relkind = 'r' AND t.relname = '<TABLE_NAME>';
  ```

- Find the queries behind the locks associated with a table:

  ```sql
  SELECT pid, state, usename, query, query_start
  FROM pg_stat_activity
  WHERE pid IN (
    SELECT pid
    FROM pg_locks l
    JOIN pg_class t ON l.relation = t.oid
    WHERE t.relkind = 'r' AND t.relname = 'ci_builds_metadata'
  );
  ```

- Cancel a pid:

  ```sql
  SELECT pg_cancel_backend(<PID>);
  ```

- Terminate a pid:

  ```sql
  SELECT pg_terminate_backend(<PID>);
  ```

- Find the longest-running queries:

  ```sql
  SELECT pid, state, usename, query, query_start
  FROM pg_stat_activity
  ORDER BY query_start ASC
  LIMIT 10;
  ```
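If a `TRUNCATE` appears stuck behind another session, a sketch like the following (assuming PostgreSQL 9.6+, which provides `pg_blocking_pids()`) shows which pids are blocking each waiting backend:

```sql
-- For each waiting backend, list the pids blocking it
-- (pg_blocking_pids() is available from PostgreSQL 9.6 onwards)
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query_start, left(query, 80) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

The blocking pids can then be fed into `pg_cancel_backend`/`pg_terminate_backend` above.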
Change Details
- Services Impacted - Service::Postgres
- Change Technician - @OmarQunsulGitlab
- Change Reviewer - @rhenchen.gitlab
- Time tracking - minimum of 1.5 hours (depending on the number of data files in production)
- Downtime Component - none
Detailed steps for the change
Preparation: check for locks on the tables that need to be truncated
- On the Main database, check for locks on the CI tables:

  ```sql
  SELECT pid, relname
  FROM pg_locks l
  JOIN pg_class t ON l.relation = t.oid
  WHERE t.relkind = 'r'
    AND (starts_with(t.relname, 'ci_') OR t.relname IN ('external_pull_requests', 'taggings', 'tags'));
  ```

- On the CI database, check for locks on the non-CI tables (the result might contain shared tables):

  ```sql
  SELECT pid, relname
  FROM pg_locks l
  JOIN pg_class t ON l.relation = t.oid
  WHERE t.relkind = 'r'
    AND NOT (starts_with(t.relname, 'ci_') OR t.relname IN ('external_pull_requests', 'taggings', 'tags'));
  ```

- If needed, this query lists all the locks:

  ```sql
  SELECT pid, relname
  FROM pg_locks l
  JOIN pg_class t ON l.relation = t.oid
  WHERE t.relkind = 'r';
  ```
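It may also help to record the on-disk size of the legacy tables before truncating, so the disk-usage drops noted later in this plan can be sanity-checked. A sketch, reusing the same table-name predicate as the lock queries above:

```sql
-- Total on-disk size (including indexes and TOAST) of the CI tables on the Main database;
-- negate the predicate to list the gitlab_main tables on the CI database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
  AND (starts_with(relname, 'ci_') OR relname IN ('external_pull_requests', 'taggings', 'tags'))
ORDER BY pg_total_relation_size(oid) DESC;
```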
Change Steps - steps to take to execute the change
Estimated Time to Complete - 2 hours
- Run during low-utilization hours, because of the large amount of IO generated by `TRUNCATE`.
- Set label ~change::in-progress: `/label ~change::in-progress`
- Trigger disk snapshots of the CI and Main backup nodes for rollback purposes:
  - SSH into the backup node of each of the Main and CI Patroni clusters:
    - `patroni-main-2004-10-db-gprd.c.gitlab-production.internal`
    - `patroni-ci-2004-03-db-gprd.c.gitlab-production.internal`
  - Run `gcs-snapshot.sh` on each backup node to create a disk snapshot:

    ```
    sudo su gitlab-psql -c /usr/local/bin/gcs-snapshot.sh
    ```

  - Once the command has finished running, scroll back up to the `Creating snapshot(s) <name>...` line in the output and take note of the name.
  - Confirm the snapshots have been created by running the following from your machine (NOTE: this step assumes your `gcloud` is authenticated with access to GitLab projects):

    ```
    gcloud compute snapshots list --project gitlab-staging-1 --limit=1 --sort-by=~creationTimestamp --filter="sourceDisk~patroni-main-2004-10-db-gprd AND description~gcs-snapshot" --format="table[box,margin=3,title='Most recent snapshot'](name,creationTimestamp,sourceDisk,status)"
    gcloud compute snapshots list --project gitlab-staging-1 --limit=1 --sort-by=~creationTimestamp --filter="sourceDisk~patroni-ci-2004-03-db-gprd AND description~gcs-snapshot" --format="table[box,margin=3,title='Most recent snapshot'](name,creationTimestamp,sourceDisk,status)"
    ```
- Validate there are definitely no queries on the `gitlab_ci` tables in the Main database:
  - Sequential tuple reads should be 0
  - Index tuple reads should be 0
- Validate there are definitely no queries on the `gitlab_main` tables in the CI database. The queries are grouped by `relname`; otherwise the result list is too big to show:
  - Sequential tuple reads should be 0
  - Index tuple reads should be 0
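The validation above can be sketched with `pg_stat_user_tables` (an illustration, not the exact dashboard query; note that these counters are cumulative, so what matters is that they stop increasing between two samples):

```sql
-- Read activity on the gitlab_main tables in the CI database, grouped by relname;
-- seq_tup_read and idx_tup_fetch are cumulative counters, so sample twice and compare
SELECT relname,
       sum(seq_tup_read)  AS seq_tuple_reads,
       sum(idx_tup_fetch) AS index_tuple_reads
FROM pg_stat_user_tables
WHERE NOT (starts_with(relname, 'ci_') OR relname IN ('external_pull_requests', 'taggings', 'tags'))
GROUP BY relname
ORDER BY seq_tuple_reads DESC, index_tuple_reads DESC;
```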
- Connect to a virtual machine where we can run rake tasks (possibly in a different tab).
- Lock writes on all the newly added tables:
  - `sudo DRY_RUN=true gitlab-rake gitlab:db:lock_writes` to print the tables that will be locked.
  - `sudo gitlab-rake gitlab:db:lock_writes` to lock the tables.
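One way to confirm the write locks are actually in place is to list the write-protection triggers; the `gitlab_schema_write_trigger…` naming is assumed from the `security_findings` trigger referenced later in this plan:

```sql
-- List write-protection triggers and their state: 'O' = enabled, 'D' = disabled
SELECT c.relname, t.tgname, t.tgenabled
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
WHERE t.tgname LIKE 'gitlab_schema_write_trigger%'
ORDER BY c.relname;
```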
- We can wait some time here (X minutes) to monitor and confirm there are no errors from the locked tables.
- Connect to the Main Patroni database server.
- Note disk usage from `df -h /var/opt/gitlab`: Used ?
- Truncate batch (1/2) of the `gitlab_ci` tables on the Main database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true UNTIL_TABLE=ci_namespace_mirrors gitlab-rake gitlab:db:truncate_legacy_tables:main
  sudo UNTIL_TABLE=ci_namespace_mirrors gitlab-rake gitlab:db:truncate_legacy_tables:main
  ```

- Note disk usage from `df -h /var/opt/gitlab` on the Main Patroni database server (should have gone down): Used ? => reduced by ?
- Truncate batch (2/2) of the `gitlab_ci` tables on the Main database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true gitlab-rake gitlab:db:truncate_legacy_tables:main
  sudo gitlab-rake gitlab:db:truncate_legacy_tables:main
  ```

- Note disk usage from `df -h /var/opt/gitlab` on the Main Patroni database server (should have gone down): Used ? => reduced by ?
- Close the Main primary database session.
- Connect to the CI Patroni database server.
- Note disk usage from `df -h /var/opt/gitlab`: Used ?
- Switch back to the virtual machine where we run the rake tasks to truncate the tables.
- Truncate batch (1/4) of the `gitlab_main` tables on the CI database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true UNTIL_TABLE=vulnerability_finding_evidences gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  sudo UNTIL_TABLE=vulnerability_finding_evidences gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  ```

- Truncate batch (2/4) of the `gitlab_main` tables on the CI database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true UNTIL_TABLE=deployment_clusters gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  sudo UNTIL_TABLE=deployment_clusters gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  ```

- Truncate batch (3/4) of the `gitlab_main` tables on the CI database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true UNTIL_TABLE=project_alerting_settings gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  sudo UNTIL_TABLE=project_alerting_settings gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  ```

- Disable the trigger on `security_findings`:

  ```
  sudo gitlab-psql -c "ALTER TABLE security_findings DISABLE TRIGGER gitlab_schema_write_trigger_for_security_findings;"
  ```

- Truncate batch (4/4) of the `gitlab_main` tables on the CI database. The first command shows which tables will be truncated in dry-run mode:

  ```
  sudo DRY_RUN=true gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  sudo gitlab-rake gitlab:db:truncate_legacy_tables:ci[10]
  ```

- Re-enable the trigger on `security_findings`:

  ```
  sudo gitlab-psql -c "ALTER TABLE security_findings ENABLE TRIGGER gitlab_schema_write_trigger_for_security_findings;"
  ```
- Make sure that the trigger is re-enabled: `sudo gitlab-psql -c "TRUNCATE security_findings;"` should return the error `ERROR: Table: "security_findings" is write protected within this Gitlab database.`
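The trigger state can also be checked directly in `pg_trigger` (`'O'` means enabled in the default `ENABLE TRIGGER` mode):

```sql
-- tgenabled should be 'O' after the trigger has been re-enabled
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'security_findings'::regclass
  AND tgname = 'gitlab_schema_write_trigger_for_security_findings';
```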
- Note disk usage from `df -h /var/opt/gitlab`: Used ?
- Close the CI primary database session.
- Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete - time depends on size of tables to be restored (can take up to several days for very large tables)
- This data was unused and stale, so if some service breaks because it was reading this data, it is probably best to roll forward and fix whatever broke.
- If there is absolutely no way to roll forward and you really need to bring back the data that was truncated, you can recover it as follows:
  - For small tables, recover the data using the Foreign Data Wrappers already configured to access the schemas in the `dr-delayed` database (the delay is 8 hours):
    - Main: `insert into <table_name> select * from gitlab_main_dr_delayed.<table_name>;`
    - CI: `insert into <table_name> select * from gitlab_ci_dr_delayed.<table_name>;`
  - For large tables, recover the data using `COPY` from a node restored from a snapshot, for example:
    - Create a new Patroni cluster from an old snapshot of the source cluster (you can use a process similar to the one used to create the CI cluster).
    - Export the table using `COPY`: `COPY (select * from <table_name>) TO '/var/opt/gitlab/dump/<table_name>.dat' WITH (FORMAT binary);`
    - Transfer the `.dat` file from the cloned cluster to the writer node of the target cluster.
    - Import the table using `COPY`: `ALTER TABLE <table_name> DISABLE TRIGGER ALL; COPY <table_name>(<columns>) FROM '/var/opt/gitlab/dump/<table_name>.dat' WITH (FORMAT binary);`
    - Re-enable the triggers afterwards: `ALTER TABLE <table_name> ENABLE TRIGGER ALL;`
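After either restore path, a quick sanity check is to compare row counts between the restored table and the `dr-delayed` source (schema names as in the FDW examples above; `<table_name>` is a placeholder):

```sql
-- Both counts should match after the restore (use gitlab_ci_dr_delayed for CI tables)
SELECT (SELECT count(*) FROM <table_name>)                        AS restored_rows,
       (SELECT count(*) FROM gitlab_main_dr_delayed.<table_name>) AS source_rows;
```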
- Set label ~change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: Postgresql Overview
- Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1
- What changes to this metric should prompt a rollback: Sudden saturation of any metrics
- Metric: Sentry
- Location: [Sentry](https://gitlab.sentry.net)
- What changes to this metric should prompt a rollback: New errors that are caused by this missing data
- Metric: Patroni Overview
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1
- What changes to this metric should prompt a rollback: Large increase in errors
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.