[GPRD] Manually upgrade postgresql extension pg_stat_kcache to version 2.3.1 on patroni-ci-v17 cluster
Production Change
## Change Summary
The currently installed version of the PostgreSQL extension pg_stat_kcache is 2.3.0, and version 2.3.1 is available on the cluster. We would like to upgrade the extension on the following hosts of the patroni-ci cluster:

| Hostname | Current Version | Latest Version | Status |
| --- | --- | --- | --- |
| patroni-ci-v17-101-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-102-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-103-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-104-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-105-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-106-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-107-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
| patroni-ci-v17-108-db-gprd.c.gitlab-production.internal | 2.3.0 | 2.3.1 | |
This change was not implemented on GSTG because the versions in that environment are already up to date and do not need any upgrades.
The validation test results are stated in #20795 (comment 2855125339).
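As a sanity check before and after the upgrade, the installed and packaged versions can be read from `pg_available_extensions`, a standard PostgreSQL catalog view. A minimal sketch, assuming the `gitlab-psql` wrapper used in the rollback step of this plan accepts a `-c` flag like `psql`:

```shell
# Sketch: confirm installed vs. packaged pg_stat_kcache versions on a node.
# pg_available_extensions is a standard PostgreSQL catalog view.
CHECK_SQL="SELECT name, installed_version, default_version
           FROM pg_available_extensions
           WHERE name = 'pg_stat_kcache';"

# On each patroni-ci node (assumption: gitlab-psql forwards psql flags):
#   sudo gitlab-psql -c "$CHECK_SQL"
echo "$CHECK_SQL"
```

Before the change, `installed_version` should read 2.3.0 with `default_version` 2.3.1; after the change both should read 2.3.1.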
## Change Details
- Services Impacted - ServicePatroniCI
- Change Technician - @vporalla
- Change Reviewer -
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-11-12 09:00
- Time tracking - 60m
- Downtime Component - None
**Important**
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
## Preparation
**Note**
The following checklists must be completed in advance, before setting the label ~"change::scheduled".
### Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
### Change Technician checklist
- [ ] The Change Criticality has been set appropriately and requirements have been reviewed.
- [ ] The change plan is technically accurate.
- [ ] The rollback plan is technically accurate and detailed enough to be executed by anyone with access.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] Once all boxes above are checked, mark the change request as scheduled: /label ~"change::scheduled"
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar by the change-scheduler bot. It is scheduled to run every 2 hours.
- [ ] For C1 change issues, a Senior Infrastructure Manager has provided approval with the ~manager_approved label on the issue.
- [ ] For C2 change issues, an Infrastructure Manager has provided approval with the ~manager_approved label on the issue.
- [ ] For C1 and C2 changes, mention @gitlab-org/saas-platforms/inframanagers in this issue to provide visibility to all infrastructure managers.
- [ ] For C1, C2, or ~"blocks deployments" change issues, confirm with Release Managers that the change does not overlap or hinder any release process. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
## Detailed steps for the change
### Pre-execution steps
**Note**
The following steps should be done right at the scheduled time of the change request. The preparation steps are listed below.
- [ ] Make sure all tasks in the Change Technician checklist are done.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (Search the PagerDuty schedule for "SRE 8-hour" to find who will be on-call at the scheduled day and time. SREs on-call must be informed of plannable C1 changes at least 2 weeks in advance.)
  - [ ] The SRE on-call provided approval with the ~eoc_approved label on the issue.
- [ ] For C1, C2, or ~"blocks deployments" change issues, Release Managers have been informed prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~severity1 or ~severity2.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
### Change steps - steps to take to execute the change
Estimated Time to Complete (mins) - 2880 mins
#### Day 1 - Work on one node: patroni-ci-v17-105-db-gprd.c.gitlab-production.internal
- [ ] Set label ~"change::in-progress": /label ~change::in-progress
- [ ] Verify the state of the cluster (run on one of the patroni-ci servers):
  - Check for anomalies.
  - Post the output in a CR comment, to later compare whether the change caused replication lag.

  ```shell
  ssh patroni-ci-v17-101-db-gprd.c.gitlab-production.internal "sudo gitlab-patronictl list"
  ```
- [ ] Log in to the node and manually upgrade the extension:

  ```shell
  ssh patroni-ci-v17-105-db-gprd.c.gitlab-production.internal
  sudo su -
  /var/opt/gitlab/patroni/scripts/pg-ext-manager.sh update
  ```
- [ ] Monitor the node for any abnormalities.
- [ ] Set label ~"change::scheduled": /label ~change::scheduled
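After the update runs on node 105, the active extension version can be confirmed from `pg_extension`, the standard PostgreSQL catalog of installed extensions. A hedged sketch (the `-t -A` flags are plain `psql` options; the assumption is that `gitlab-psql` forwards them — for this offline sketch the expected value is substituted for the live query result):

```shell
# Sketch: verify the upgraded node now reports pg_stat_kcache 2.3.1.
# pg_extension is the standard PostgreSQL catalog of installed extensions.
VERIFY_SQL="SELECT extversion FROM pg_extension WHERE extname = 'pg_stat_kcache';"
EXPECTED="2.3.1"

# On the upgraded node (assumption: gitlab-psql forwards psql flags):
#   actual=$(sudo gitlab-psql -t -A -c "$VERIFY_SQL")
# Offline sketch only - substitute the expected value to show the comparison:
actual="$EXPECTED"

if [ "$actual" = "$EXPECTED" ]; then
  echo "OK: pg_stat_kcache is at $actual"
else
  echo "MISMATCH: expected $EXPECTED, got $actual" >&2
  exit 1
fi
```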
#### Day 2 - After 24 hours of monitoring, upgrade the remaining nodes of the cluster
- [ ] Set label ~"change::in-progress": /label ~change::in-progress
- [ ] For each of the hosts listed below, run the upgrade process individually:
  - patroni-ci-v17-101-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-102-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-103-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-104-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-106-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-107-db-gprd.c.gitlab-production.internal
  - patroni-ci-v17-108-db-gprd.c.gitlab-production.internal
- [ ] Verify the state of the cluster (run on one of the patroni-ci servers):
  - Check for anomalies.
  - Post the output in a CR comment, to later compare whether the change caused replication lag.

  ```shell
  ssh patroni-ci-v17-101-db-gprd.c.gitlab-production.internal "sudo gitlab-patronictl list"
  ```
- [ ] Upgrade the PostgreSQL extension on the remaining nodes of this cluster one at a time, with a 5-minute delay between nodes. In the interim, monitor the node status and replica lag.

  ```shell
  ssh <hostname>
  sudo su -
  /var/opt/gitlab/patroni/scripts/pg-ext-manager.sh update
  ```
- [ ] Set label ~"change::complete": /label ~change::complete
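The Day-2 rollout above can be sketched as a loop. This is a non-authoritative sketch, not the documented procedure: it invokes pg-ext-manager.sh via plain `sudo` rather than the interactive `sudo su -` login used in the steps, and it defaults to a dry run that only prints the commands it would execute.

```shell
#!/usr/bin/env bash
# Sketch of the Day-2 rollout: one node at a time with a 5-minute pause.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 to run.
set -euo pipefail

HOSTS=(
  patroni-ci-v17-101-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-102-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-103-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-104-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-106-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-107-db-gprd.c.gitlab-production.internal
  patroni-ci-v17-108-db-gprd.c.gitlab-production.internal
)

DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

for host in "${HOSTS[@]}"; do
  # Assumption: the update script can be invoked via plain sudo.
  run ssh "$host" "sudo /var/opt/gitlab/patroni/scripts/pg-ext-manager.sh update"
  # Check cluster state and replica lag before moving to the next node.
  run ssh patroni-ci-v17-101-db-gprd.c.gitlab-production.internal "sudo gitlab-patronictl list"
  run sleep 300  # 5-minute gap between nodes
done
```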
## Rollback
### Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
- [ ] Log in to the host, then drop and recreate the extension at the older version:

  ```shell
  ssh <hostname>
  sudo gitlab-psql
  ```

  ```sql
  DROP EXTENSION pg_stat_kcache;
  CREATE EXTENSION pg_stat_kcache VERSION '2.3.0';
  ```
- [ ] Set label ~"change::aborted": /label ~change::aborted
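The same rollback can be sketched as a single non-interactive command per host, under the assumption that `gitlab-psql` accepts a `-c` flag like `psql`:

```shell
# Sketch: the documented rollback (recreate the extension pinned at 2.3.0)
# as one remote command per affected host.
ROLLBACK_SQL="DROP EXTENSION pg_stat_kcache; CREATE EXTENSION pg_stat_kcache VERSION '2.3.0';"

# Per affected host (assumption: gitlab-psql forwards the -c flag to psql):
#   ssh <hostname> "sudo gitlab-psql -c \"\$ROLLBACK_SQL\""
echo "$ROLLBACK_SQL"
```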
## Monitoring
### Key metrics to observe
- Metric: `pgbouncer RPS - per fqdn`
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview
  - This will be used to monitor that node 101 is not getting requests - it should be the same as the backup node, 104.
  - Also: under `SLI Detail: transactions_replica`, the `transactions_replica RPS - per fqdn` panel.
- Metric: section `SLI Detail: transactions_primary`
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview
  - The failover should be observable here.
- Metric: section `node metrics`, panel `node cpu`
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview
  - Node 101 CPU use should drop when it's taken out of service.
  - The new leader should show reduced CPU when the switchover occurs.
  - Node 101 CPU will climb and sync with the other replicas when it's back in service.
- Metric: sections `pgbouncer workload`, `pgbouncer connection pooling`
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview
  - Panels will be checked for unexpected behaviour such as `Average Wait Time per SQL Transaction` spiking; active connections by Patroni node are expected to change through the process.