2023-05-23: DEV pgupgrade to update database to PG13
Production Change
2023-10-15 01:00 UTC - 07:00 UTC
Change Summary
Running gitlab-ctl pg-upgrade on the dev.gitlab.org instance to update the database to PG13
Change Details
- Services Impacted - dev.gitlab.org will be down during the upgrade
- Change Technician - @f_santos
- Change Reviewer - DRI for the review of this change
- Time tracking - 360min (including rollback)
- Downtime Component - 240min (including rollback)
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 240min
- Set label change::in-progress /label ~change::in-progress
- Confirm previous package upgrade backup
Check that the latest dev backups have been tested. There should be a passing pipeline for dev at https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/-/pipeline_schedules
- Disable the chef-client so no runs interfere with the upgrade
sudo systemctl stop chef-client
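A quick check that the unit has actually stopped (an extra check, not in the original plan, assuming chef-client runs as the systemd unit above):
sudo systemctl is-active chef-client   # expect "inactive"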
- Disable the walg database backup cron
Remove the walg line from the crontab
sudo -u gitlab-psql crontab -e
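To confirm the walg entry was actually removed (a small extra check; the exact grep pattern depends on how the cron line is written):
sudo -u gitlab-psql crontab -l | grep -i wal   # expect no backup line in the output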
- Make the postgresql archive command a no-op
Edit /etc/gitlab/gitlab.rb and prefix the existing archive command config with echo
postgresql['archive_command'] = "echo /usr/bin/envdir /etc/..."
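For the gitlab.rb edit to take effect, a reconfigure is needed; the running value can then be checked (a sketch, assuming the standard Omnibus workflow where reconfigure reloads the PostgreSQL config):
sudo gitlab-ctl reconfigure
sudo gitlab-psql -c "SHOW archive_command;"   # should now begin with "echo"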
- Perform the upgrade
sudo gitlab-ctl pg-upgrade --timeout=3h
Based on recent pg_dump times of around 40min on this instance, this will likely take at least an hour.
In the last 100 lines of output, upon data upgrade success you should see "Database upgrade is complete, running vacuumdb analyze", and "==== Upgrade has completed ====" is printed after the vacuum is complete (the vacuum could take a significant amount of time).
Do not clean up /var/opt/gitlab/postgresql/data.12 and /var/opt/gitlab/postgresql-version.old at this time, even if the output indicates you can after verification. These are needed for rollback to work.
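After the upgrade, a quick way to confirm the old cluster is still available for rollback (a convenience check, not part of the original plan):
ls -d /var/opt/gitlab/postgresql/data.12 /var/opt/gitlab/postgresql-version.old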
- Check vacuum errors
Errors related to vacuumdb should not be fatal; the upgrade should still complete. If a vacuumdb error occurs, log a follow-up change issue to run vacuumdb manually to clean up the database (skip this and come back to it after the other steps).
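If a manual run is needed later, something along these lines should work (a sketch; the socket path and flags assume the default Omnibus layout):
sudo -u gitlab-psql /opt/gitlab/embedded/bin/vacuumdb --all --analyze-in-stages -h /var/opt/gitlab/postgresql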
- Verify pg binary and data versions are 13
/opt/gitlab/embedded/bin/psql --version
cat /var/opt/gitlab/postgresql/data/PG_VERSION
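The same checks as rough one-liners, if convenient (a sketch; the patterns assume the usual version string format, and the exact patch level may differ):
/opt/gitlab/embedded/bin/psql --version | grep ' 13\.' && echo "binary OK"
grep -x 13 /var/opt/gitlab/postgresql/data/PG_VERSION && echo "data OK"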
- Verify dev.gitlab.org is up
Check that it can be logged into and used. Check the postgres version from the admin dashboard: https://dev.gitlab.org/admin
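A basic reachability check from a terminal can complement the manual login (a sketch; the sign-in page should return HTTP 200):
curl -sS -o /dev/null -w "%{http_code}\n" https://dev.gitlab.org/users/sign_in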
- Enable walg database backup
Merge chef-repo MR that updates the walg storage location for pg13: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/4006
- Re-enable the chef client
sudo systemctl start chef-client
- Run chef-client to ensure it works, and that our walg changes are re-enabled
sudo chef-client
- Set label change::complete /label ~change::complete
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 120min
If the pg-upgrade script encounters a fatal error it will automatically roll back.
- If the auto-rollback worked, jump to the verify steps
If the pg-upgrade script completed successfully, but we determine we need to roll back:
- Ensure chef-client is stopped via the early steps in the change steps
- Ensure walg is stopped via the early steps in the change steps
- Run the revert script
sudo gitlab-ctl revert-pg-upgrade
(should be very quick, only taking a minute)
- Jump to the verify steps
Other errors that require database restore
- In the case of other errors, we may need to restore from the walg backup, which may take more time.
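Before attempting a restore, it can help to confirm which backups wal-g can see (a sketch with a hypothetical envdir path; use the real envdir and wal-g binary from the existing archive_command):
sudo -u gitlab-psql /usr/bin/envdir /etc/wal-g.d/env wal-g backup-list   # /etc/wal-g.d/env is illustrative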
Verify a rollback
- Verify pg binary and data versions are 12
/opt/gitlab/embedded/bin/psql --version
cat /var/opt/gitlab/postgresql/data/PG_VERSION
- Verify dev.gitlab.org is up
Check that it can be logged into and used. Check the postgres version from the admin dashboard: https://dev.gitlab.org/admin
- Re-enable the chef client
- Set label change::aborted /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: Metric Name
- Location: Dashboard URL
- What changes to this metric should prompt a rollback: Describe Changes
Change Reviewer checklist
Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary.
Change Technician checklist
Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.