[GPRD] [2025-10-24 to 2025-10-28] - Upgrade PostgreSQL to PG17 - main database
Change Summary
In 2025 we are upgrading the four Patroni clusters from PostgreSQL 16 to PostgreSQL 17, and from Ubuntu 20.04 to 22.04.
Two are already done: registry and ci.
- GPRD 2025-07-25 to 2025-07-29 - CI and Registry
- GSTG 2025-07-18 CI
- GSTG 2025-07-02 to 2025-07-03 Registry
During this change we will:
- Upgrade a new standby cluster (running Ubuntu 22.04) to PostgreSQL 17.
- Use logical replication between the live PostgreSQL 16 and new PostgreSQL 17 clusters (sketched below).
- Fix corrupt indexes (caused by collation changes in the Ubuntu upgrade).
- Update service discovery in Consul to route traffic to the new Patroni cluster running PG 17.
- Establish reverse logical replication to the old PostgreSQL 16 cluster for backout.
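The exact commands live in the detailed implementation plan; a minimal sketch of the underlying publication/subscription mechanism, with illustrative object names and connection details, is:

```sql
-- On the old PG16 primary (publisher). Names and connection string are illustrative only.
CREATE PUBLICATION pg17_upgrade FOR ALL TABLES;

-- On the new PG17 leader (subscriber). copy_data = false because the data set was already
-- carried over when the standby cluster was upgraded; replication then streams only changes.
CREATE SUBSCRIPTION pg17_upgrade
  CONNECTION 'host=old-pg16-primary.internal dbname=gitlabhq_production user=logical_repl'
  PUBLICATION pg17_upgrade
  WITH (copy_data = false);
```

The real change also positions the subscription at the exact LSN of the upgrade snapshot; the snippet only illustrates the replication link between the two clusters, not that positioning step.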
The process is largely the same as in 2024, with fixes and improvements identified then and during the work completed so far in 2025:
- [GPRD] [2024-11-01 to 11-05] - Upgrade PostgreSQL to v16 on MAIN cluster
- [GPRD] [2024-10-26 to 10-29] - Upgrade PostgreSQL to v16 on CI cluster
- [GPRD] [2024 September 06 to 11] - Upgrade PostgreSQL to v16 on Registry cluster
Change Details
- Services Impacted - Service::Patroni (main database)
- Change Technician - @alexander-sosna, @bprescott_
- Change Reviewer - @rhenchen.gitlab, @alexander-sosna
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-10-24 23:00
- Time tracking - 4 days
- Downtime Component - none
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
A production change lock is required until the end of the rollback window.
PostgreSQL logical replication is used to provide zero downtime and to support a backout from the new PostgreSQL version to the old version.
Logical replication is incompatible with database migrations (schema/model changes); therefore, no deployments may occur during the whole period of this change.
Production: we will start work when the weekend soft PCL commences, and extend the block on deployments through to Tuesday morning, UTC.
Downtime
The upgrade process is planned to be an online process with zero downtime for end-users, and so will not be communicated in advance.
The automation briefly pauses and queues database traffic during the switchover of write traffic from the old cluster to the new cluster. There is a risk of disruption at this point if manual intervention is required.
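Conceptually, the pause-and-queue step corresponds to the PgBouncer admin-console PAUSE/RESUME commands; the sketch below is illustrative only (the actual switchover is driven by the automation described in the implementation plan):

```sql
-- Issued on the PgBouncer admin console (e.g. psql -p 6432 pgbouncer), not on PostgreSQL itself.
PAUSE;   -- stop handing out server connections; new client queries queue inside PgBouncer
-- ... write traffic is re-pointed to the PG17 leader at this point ...
RESUME;  -- release the queued traffic against the new primary
```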
Detailed steps for the change
Pre-execution steps
- Make sure all tasks in the Change Technician checklist are done.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- The SRE on-call provided approval with the eoc_approved label on the issue.
- For C1, C2, or blocks deployments change issues, Release Managers have been informed prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Change steps - steps to take to execute the change
Estimated Time to Complete (mins) - 5040
Detailed implementation plan
We plan for the possibility that GitLab.com is unavailable during the execution of the CR, and therefore use ops.gitlab.net for the detailed instructions.
The following issue contains the plan:
https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/88
High level plan - all timings (UTC) are approximate
- Friday 2025-10-24 08:00 - Pre OS Upgrade Amcheck
  - On the v17 Writer node, run amcheck_collatable_parallel to get a list of corrupted indexes for the reindexing process (the underlying check is sketched below).
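amcheck_collatable_parallel appears to be a wrapper that parallelises per-index checks; at its core, the verification it performs per index is the amcheck extension's B-tree check, roughly (index name illustrative):

```sql
-- Requires the amcheck extension on the node being checked.
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Verify a single B-tree index. The ordering checks catch keys that sort differently under the
-- new collation; the second argument (heapallindexed = true) additionally confirms that every
-- heap tuple is present in the index.
SELECT bt_index_check('index_users_on_username'::regclass, true);
```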
- Friday 2025-10-24 - Release Managers Activity
  - Run the last PDM
  - Run the last deployments
- Friday 2025-10-24 16:00Z - PCL start
  - The upgrade PCL starts with the standard weekend PCL
  - Enable the block DDL database migrations feature flag
    - this might have to be delayed due to
  - Check that there are no running migrations (see the query sketch below)
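A minimal way to confirm there is no in-flight DDL at this point (illustrative; the change uses its own checks) is to look at pg_stat_activity:

```sql
-- Any row returned indicates a running DDL statement that must finish (or be stopped)
-- before relying on logical replication for the upgrade.
SELECT pid, usename, state, query_start, left(query, 120) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND query ~* '^\s*(ALTER|CREATE|DROP)\s';
```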
- Saturday 2025-10-25 08:00Z - Upgrade
  - Perform the database upgrade in the target cluster (convert to logical replication, old PG16 -> new PG17); a replication-lag check is sketched after this entry
  - Start amcheck in the PG17 database (check a sample of tables and index consistency), which might take 12+ hours to run
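While logical replication is catching up and during the long amcheck run, the lag between the PG16 publisher and the PG17 subscriber can be watched on the publisher side, for example:

```sql
-- On the old PG16 primary: WAL the logical replication slot still has to send/confirm.
SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS replication_lag
FROM pg_replication_slots
WHERE slot_type = 'logical';
```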
- Sunday 2025-10-26 08:00Z - Cutover/Switchover
  - Workload switchover to PG17 (PG17 will become the active database); a post-switchover verification sketch follows this entry
  - Switchover should be unnoticeable for end-user customers, but there is a small risk of downtime if the automation fails
  - Enable reverse replication (new PG17 -> old PG16)
  - Start the rollback window (monitor the workload for performance regressions on the new engine version)
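A quick post-switchover sanity check (illustrative) is that the PG17 leader is writable and that the reverse subscription on the old PG16 cluster is receiving changes:

```sql
-- On the new PG17 leader: must return false (not in recovery, i.e. accepting writes).
SELECT pg_is_in_recovery();

-- On the old PG16 cluster (now the subscriber of the reverse replication):
-- received_lsn / latest_end_lsn should keep advancing while the rollback window is open.
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;
```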
- Tuesday 2025-10-28 08:00Z - Complete change
  - End the rollback window
  - Complete the change - shut down the old cluster
- Tuesday 2025-10-28 09:00Z - PCL finish
  - End the PCL
  - Deploys will resume
- Tuesday 2025-10-28 10:00Z - Run the first PDM
  - To have enough packages available in case of problems
  - Normal deployment cadence resumes
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 240
This is documented in the implementation plan. The rollback procedure differs depending on when the change is aborted:
- Before read traffic is moved to the v17 cluster.
- After read traffic is moved to the v17 cluster, but before writes are moved to the v17 leader.
- After all read and write traffic is live on the v17 cluster.
In the event of an incident:
- The change can be aborted at any time, and rolled back as necessary.
- The default will be to pause this change while the incident is resolved, and then continue.
- If DDL (a database migration or schema change) is required to resolve the incident, we will have to abort this change, owing to the use of logical replication. Otherwise, an incident has no impact on the upgrade process.
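Before write traffic could be moved back to PG16 in a rollback, the reverse logical replication must be fully caught up; an illustrative check on the PG17 leader (the publisher of the reverse replication) is:

```sql
-- replay_lag for the logical walsender feeding the old PG16 cluster should be close to zero
-- before writes are switched back.
SELECT application_name, state, sent_lsn, replay_lsn, replay_lag
FROM pg_stat_replication;
```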
Monitoring
Key metrics to observe
This is documented in the implementation plan.
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
- For C1, C2, or blocks deployments change issues, confirm with Release Managers that the change does not overlap or hinder any release process. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)