2023-04-20: [GSTG] Patroni CI Cluster - Rename postgres_exporter schema to postgres_exporter_hidden. This is to help with the upcoming postgres major version upgrade
Staging Change
Change Summary
Provide a high-level summary of the change and its purpose.
As outlined in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/19280, during the Postgres major upgrade we identified a schema compatibility issue: views in the `postgres_exporter` schema cause errors like "column reference ...". We also identified two functions, `f_select_pg_stat_activity` and `f_select_pg_stat_replication`, that are still used even after switching to fully qualified names for `pg_stat_activity`, `pg_stat_statements`, and `pg_stat_wal_receiver` - #8678 (comment 1348798444).
With this change we rename the `postgres_exporter` schema to `postgres_exporter_hidden` and fix any resulting errors. Eventually, in a separate CR, we will drop the `postgres_exporter_hidden` schema.
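Before the rename, it can help to confirm exactly which objects live in the schema, so the blast radius of the change is known up front. A minimal sketch using standard `pg_catalog` queries (run via `gitlab-psql`; the exact object names in gstg will vary):

```sql
-- Sketch: list relations (tables, views, etc.) in the postgres_exporter schema.
SELECT c.relkind, c.relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'postgres_exporter'
ORDER BY c.relkind, c.relname;

-- Sketch: list functions in the schema
-- (e.g. f_select_pg_stat_activity, f_select_pg_stat_replication).
SELECT p.proname
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'postgres_exporter';
```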
Change Details
- Services Impacted - List services
- Change Technician - @bshah11
- Change Reviewer - @NikolayS @alexander-sosna @rhenchen.gitlab
- Time tracking - 40 minutes
- Downtime Component - none
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
1. Set label ~change::in-progress: `/label ~change::in-progress`
2. Identify the gstg Patroni CI cluster leader node and perform the change on the leader node: `ssh patroni-ci-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"`
3. Dump the `postgres_exporter` schema as a backup: `/usr/lib/postgresql/12/bin/pg_dump gitlabhq_production -h localhost -U gitlab-superuser -s -n postgres_exporter > staging_issue_8755_postgres_exporter_schema.sql`
4. Monitor the postgres log for failed statements: `sudo tail -f /var/log/gitlab/postgresql/postgresql.csv`
5. Rename the `postgres_exporter` schema to `postgres_exporter_hidden`: `gitlab-psql -c "ALTER SCHEMA postgres_exporter RENAME TO postgres_exporter_hidden;"`
6. Monitor the postgres log for failed statements: `tail -f /var/log/gitlab/postgresql/postgresql.csv | egrep -i "postgres_exporter|permission denied"`
7. If the postgres log shows errors for `pg_stat_replication`, grant select on `pg_catalog.pg_stat_replication` to `postgres_exporter`: `gitlab-psql -c "grant select on pg_catalog.pg_stat_replication to postgres_exporter;"`
8. If necessary to fix other errors observed in the postgres log, grant additional select permission on the affected views to `pg_read_all_stats`: `gitlab-psql -c "grant select on <Schema Name>.<View Name here> to postgres_exporter;"`
9. Set label ~change::complete: `/label ~change::complete`
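The log-monitoring steps above boil down to filtering the CSV log for exporter-related failures. A small self-contained sketch of that filter, using hypothetical sample log lines in place of tailing the real `/var/log/gitlab/postgresql/postgresql.csv`:

```shell
#!/bin/sh
# Sketch: filter Postgres CSV log lines for exporter-related failures.
# The sample lines below are hypothetical; during the change you would
# pipe `tail -f /var/log/gitlab/postgresql/postgresql.csv` instead.
log_sample='2023-04-20 10:00:01 UTC,ERROR,42501,"permission denied for view pg_stat_replication"
2023-04-20 10:00:02 UTC,LOG,00000,"checkpoint complete"
2023-04-20 10:00:03 UTC,ERROR,42883,"function postgres_exporter.f_select_pg_stat_activity() does not exist"'

# Count lines matching the same pattern the change steps grep for.
matches=$(printf '%s\n' "$log_sample" | grep -Eic 'postgres_exporter|permission denied')
echo "$matches"   # prints 2 for this sample
```

A non-zero count after the rename is the signal to apply the grant steps above (or to roll back).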
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
1. Revoke select on `pg_catalog.pg_stat_replication` from `postgres_exporter`: `gitlab-psql -c "Revoke select on pg_catalog.pg_stat_replication from postgres_exporter;"`
2. Rename `postgres_exporter_hidden` back to `postgres_exporter`: `gitlab-psql -c "ALTER SCHEMA postgres_exporter_hidden RENAME TO postgres_exporter;"` After this rename, we do not expect to continue observing failed statements for the `postgres_exporter` queries.
3. Set label ~change::aborted: `/label ~change::aborted`
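Whether executing or rolling back, it is worth confirming which schema name actually exists afterwards. A minimal verification sketch (run via `gitlab-psql`):

```sql
-- Sketch: confirm the current schema name after the change or rollback.
-- Expect exactly one row: postgres_exporter_hidden after the change,
-- or postgres_exporter after a rollback.
SELECT nspname
FROM pg_catalog.pg_namespace
WHERE nspname IN ('postgres_exporter', 'postgres_exporter_hidden');
```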
Monitoring
Key metrics to observe
- Metric: patroni Service Apdex
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&from=now-3h&to=now
- Metric: patroni Service Error Ratio for the Main Cluster
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&from=now-3h&to=now&viewPanel=379598196
  - What changes to this metric should prompt a rollback: a sustained error-rate increase for more than 10 minutes
Change Reviewer checklist
Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary
Change Technician checklist
Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity::1 or severity::2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.