stgsub and stgsub-ref: Upgrade CloudSQL Instance Customers-DB from Postgres 12 to Postgres 16
Production Change
Change Summary
Upgrade CloudSQL Instance for Customers-DB from Postgres 12 to Postgres 16 in gitlab-subscriptions-staging and gitlab-subscriptions-stg-ref
This is a precursor to the upgrade of the CloudSQL instance for Customers-DB from Postgres 12 to Postgres 16 in gitlab-subscriptions-prod, scheduled for UTC 2025-06-08 05:00:00.
Change Details
- Services Impacted - Service::CustomersDot
- Change Technician - @zbraddock @vitallium
- Change Reviewer - @jjsisson @rhenchen.gitlab
- Scheduled Date and Time - UTC 2025-05-29 05:00:00
- Time tracking - 60 minutes
- Downtime Component - 20 minutes of downtime for stgsub-ref and stgsub. However, we have already confirmed with the Fulfillment team on multiple occasions that we can take downtime in stgsub-ref and stgsub without negative effects. If we need to roll back, this will add another 25 minutes of downtime.
The reason we want to take downtime is that our zero-downtime upgrade process will not work for CloudSQL: it relies on PgBouncer, which CloudSQL does not support.
External Communication
Not relevant for staging and stg-ref
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60 minutes
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Create ~production::blocker and ~provision::blocker issues in the CustomersDot and Ansible projects to block deployments.
- [ ] Create a new CloudSQL instance at version Postgres 12 for each environment so it is available for rollback if needed. Spinning up an empty CloudSQL instance takes approx. 15 minutes.
- [ ] Merge https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/10931. Atlantis will time out, but that is OK because the CloudSQL instances should still be created (a verification sketch follows this list). This MR will create the rollback instance for production as well - that is OK; I'd rather have it created well in advance of 8 June anyway.
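Because Atlantis times out before reporting success, it may be worth confirming that the rollback instances actually exist. A minimal sketch, assuming the GCP project IDs match the environment names used above:

```shell
# List Postgres 12 instances in each environment; the new rollback
# instances should appear here once Terraform finishes in the background.
gcloud sql instances list --project=gitlab-subscriptions-staging \
  --filter="databaseVersion=POSTGRES_12"
gcloud sql instances list --project=gitlab-subscriptions-stg-ref \
  --filter="databaseVersion=POSTGRES_12"
```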
gitlab-subscriptions-stg-ref
- [ ] Run the `customers:maintenance_mode[on]` Rake task to enable maintenance mode.
- [ ] Upgrade from 12 to 16 with Terraform. Takes up to 20 minutes; `atlantis apply` may time out, but the operation should still succeed (see the version-check sketch after this list).
- [ ] Run `ANALYZE` on the CloudSQL instance to update statistics:
  - [ ] Connect. The password is in the Production Vault of 1Password; search for 'dbo_team'.

    ```shell
    gcloud sql connect customers-db-04f6 --user=dbo_team --database=CustomersDot_stg-ref
    ```

  - [ ] Run `ANALYZE`. Takes about 2 minutes. You will get "permission denied" when analyzing some system tables that CloudSQL does not allow end users to touch - this is OK.

    ```sql
    ANALYZE;
    ```

  - [ ] Check it ran correctly - we expect `last_analyze` to be the timestamp of whenever we ran `ANALYZE`:

    ```sql
    SELECT schemaname, relname AS table_name, last_analyze, last_autoanalyze
    FROM pg_stat_all_tables
    WHERE last_analyze IS NOT NULL OR last_autoanalyze IS NOT NULL
    ORDER BY greatest(COALESCE(last_analyze, '1970-01-01'), COALESCE(last_autoanalyze, '1970-01-01')) DESC;
    ```

- [ ] Last chance to roll back this environment. Run the following commands to ensure everything works correctly:
  - [ ] Run Rails validations on a model: `puts Customer.count; puts Customer.last.valid?`. Expected output: non-zero count and "true".
  - [ ] Test the ActiveRecord connection pool: `ActiveRecord::Base.connection_pool.stat`. Expected output: a hash showing normal pool usage, e.g., `{:size=>5, :connections=>1}`.
- [ ] Run the `customers:maintenance_mode[off]` Rake task to disable maintenance mode.
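After the Terraform apply, the engine version can be confirmed from the command line. A sketch, assuming the GCP project ID matches the environment name:

```shell
# Should print POSTGRES_16 once the upgrade has completed.
gcloud sql instances describe customers-db-04f6 \
  --project=gitlab-subscriptions-stg-ref \
  --format="value(databaseVersion)"
```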
gitlab-subscriptions-staging
- [ ] Run the `customers:maintenance_mode[on]` Rake task to enable maintenance mode.
- [ ] Upgrade from 12 to 16 with Terraform. Takes up to 20 minutes; `atlantis apply` may time out, but the operation should still succeed (the same version check as in the previous section applies, using customers-db-4dc5).
- [ ] Run `ANALYZE` on the CloudSQL instance to update statistics:
  - [ ] Connect. The password is in the Production Vault of 1Password; search for 'dbo_team'.

    ```shell
    gcloud sql connect customers-db-4dc5 --user=dbo_team --database=CustomersDot_stg
    ```

  - [ ] Run `ANALYZE`. Takes about 2 minutes. You will get "permission denied" when analyzing some system tables that CloudSQL does not allow end users to touch - this is OK.

    ```sql
    ANALYZE;
    ```

  - [ ] Check it ran correctly - we expect `last_analyze` to be the timestamp of whenever we ran `ANALYZE`:

    ```sql
    SELECT schemaname, relname AS table_name, last_analyze, last_autoanalyze
    FROM pg_stat_all_tables
    WHERE last_analyze IS NOT NULL OR last_autoanalyze IS NOT NULL
    ORDER BY greatest(COALESCE(last_analyze, '1970-01-01'), COALESCE(last_autoanalyze, '1970-01-01')) DESC;
    ```

- [ ] Last chance to roll back this environment. Run the following commands to ensure everything works correctly:
  - [ ] Run Rails validations on a model: `puts Customer.count; puts Customer.last.valid?`. Expected output: non-zero count and "true".
  - [ ] Test the ActiveRecord connection pool: `ActiveRecord::Base.connection_pool.stat`. Expected output: a hash showing normal pool usage, e.g., `{:size=>5, :connections=>1}`.
- [ ] Run the `customers:maintenance_mode[off]` Rake task to disable maintenance mode (a possible invocation is sketched after this list).
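The maintenance-mode steps above name the Rake task but not its invocation. A minimal sketch, assuming it is run from a CustomersDot application node with a standard Rails/Rake setup (the `RAILS_ENV` value and use of `bundle exec` are assumptions):

```shell
# Quoting the task name stops the shell from glob-expanding the brackets.
RAILS_ENV=production bundle exec rake "customers:maintenance_mode[on]"   # enable
RAILS_ENV=production bundle exec rake "customers:maintenance_mode[off]"  # disable
```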
Cleanup
- [ ] Destroy the rollback CloudSQL instance at version Postgres 12 for each environment, as we can no longer roll back (a deletion sketch follows this list).
- [ ] Set label ~change::complete: `/label ~change::complete`
- [ ] Close the ~production::blocker and ~provision::blocker issues in the CustomersDot and Ansible projects to unblock deployments.
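A sketch of the cleanup deletion; `customers-db-rollback` is a hypothetical placeholder for whatever name the config-mgmt MR gave the rollback instances:

```shell
# Repeat with --project=gitlab-subscriptions-stg-ref for the other environment.
gcloud sql instances delete customers-db-rollback \
  --project=gitlab-subscriptions-staging
```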
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 25 minutes
Once we have run the `customers:maintenance_mode[off]` Rake task to disable maintenance mode, we can no longer roll back that environment. This was extensively discussed and the risk agreed to here: gitlab-com/gl-infra/data-access/dbo/dbo-issue-tracker#343 (comment 2431510003)
- [ ] Restore your pre-upgrade backup to the rollback instance. Should take around 12 minutes (a gcloud equivalent is sketched after this list):
  - [ ] Go to CloudSQL backups on each instance you want to roll back.
  - [ ] Find the backup marked "Pre-upgrade backup, POSTGRES_12 to POSTGRES_16".
  - [ ] Click "Restore". Select "Overwrite existing instance" and choose the separate rollback instance you created earlier.
  - [ ] Click "Restore".
- [ ] Point the application to the new instance: update the Rails credentials for each environment with the credentials for the new PSQL instance. Open an MR and merge it.
- [ ] Run the following commands to ensure everything works correctly:
  - [ ] Run Rails validations on a model: `puts Customer.count; puts Customer.last.valid?`. Expected output: non-zero count and "true".
  - [ ] Test the ActiveRecord connection pool: `ActiveRecord::Base.connection_pool.stat`. Expected output: a hash showing normal pool usage, e.g., `{:size=>5, :connections=>1}`.
- [ ] Run the `customers:maintenance_mode[off]` Rake task to disable maintenance mode.
- [ ] Close the ~production::blocker and ~provision::blocker issues in the CustomersDot and Ansible projects to unblock deployments.
- [ ] Set label ~change::aborted: `/label ~change::aborted`
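For reference, the console restore flow above can also be driven with gcloud. A sketch: `customers-db-rollback` is a hypothetical placeholder for the rollback instance, `customers-db-4dc5` is the staging instance from this plan, and `BACKUP_ID` must be taken from the list output:

```shell
# Find the ID of the automatic pre-upgrade backup on the upgraded instance.
gcloud sql backups list --instance=customers-db-4dc5 \
  --project=gitlab-subscriptions-staging

# Restore that backup onto the separate Postgres 12 rollback instance.
gcloud sql backups restore BACKUP_ID \
  --restore-instance=customers-db-rollback \
  --backup-instance=customers-db-4dc5 \
  --project=gitlab-subscriptions-staging
```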
Monitoring
Key metrics to observe
- Metric: Puma
- Location: https://dashboards.gitlab.net/d/customersdot-main/customersdot-overview?orgId=1
- What changes to this metric should prompt a rollback: Puma reports an unrecoverable error or a Ruby error.
- Metric: Sidekiq
- Location: https://dashboards.gitlab.net/d/customersdot-main/customersdot-overview?orgId=1
- What changes to this metric should prompt a rollback: Sidekiq reports an unrecoverable error or a Ruby error.
- Metric: Health check
- Location: https://customersdot.cloudwatch.net/dashboard
- What changes to this metric should prompt a rollback: If both Puma and Sidekiq are running but we still receive health-check errors.
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity 1 or severity 2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.