2022-04-04 GSTG - Lock Remaining Unlocked Tables on CI and Main databases

Production Change

Change Summary

This change is for STAGING.

Since we decomposed the databases, we need to prevent tables that belong to one database from being modified through the other. For example, we do not want to insert data into the ci_pipelines table via the main database, or update a users record via the ci database. To prevent this, we have so-called write locks in place: a before insert/update/delete/truncate trigger that blocks such operations on a table.

These triggers have been added manually using the gitlab:database:lock_writes rake task, and there is also automation in place that covers newly created tables.

However, some tables were created between the last manual run of the rake task and the introduction of the automation for new tables. These tables fell through the cracks: they do not have the correct write locks in place. We therefore want to run the lock_writes rake task one more time, to ensure all tables have the correct write locks.

Related issue: gitlab-org/gitlab#384852 (closed)

Impact

Running the rake task will result in this:

  • Tables that should be locked but are not: the lock will be added
  • Tables that are locked but should not be: the lock will be removed

"Locking a table" means creating a 'before update/insert/delete/truncate' trigger that checks whether the table has been marked as 'locked'. "Unlocking a table" means dropping that trigger.
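Conceptually, the trigger behaves like the sketch below. This is plain Ruby for illustration only, not the actual PL/pgSQL trigger; the schema names and the error wording are simplified assumptions:

```ruby
# Simplified model of the write-lock trigger: each table belongs to a
# "gitlab_schema", and writes are only allowed on the database that owns
# that schema. (Illustrative only -- the real check is a database trigger.)
SCHEMA_OWNER = { 'gitlab_main' => 'main', 'gitlab_ci' => 'ci' }.freeze

def write_allowed?(table_schema, current_database)
  SCHEMA_OWNER[table_schema] == current_database
end

def before_write(table_name, table_schema, current_database)
  unless write_allowed?(table_schema, current_database)
    # The real trigger raises a database error; we model it with an exception.
    raise "Write to #{table_name} is not allowed on the #{current_database} database"
  end
  :write_performed
end

before_write('users', 'gitlab_main', 'main')        # allowed on main
# before_write('ci_pipelines', 'gitlab_ci', 'main') # would raise
```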

In order to assess the impact, we need to know how many tables will be updated.

Change Details

  1. Services Impacted - ServicePostgres
  2. Change Technician - DBRE: @rhenchen.gitlab BE: @rutgerwessels
  3. Change Reviewer - @rhenchen.gitlab
  4. Time tracking - 15 mins
  5. Downtime Component - none

Detailed steps for the change

BE engineer: Determine number of tables that are about to be changed

This can be done a few hours/days ahead of the actual change

In a rails console, run this code snippet:

# Dry run: this is safe to run in any environment
results = Gitlab::Database::TablesLocker.new(dry_run: true).lock_writes

summary = {
  tables_locked: {},
  nr_locked: {},
  nr_skipped: {},
  nr_unlocked: {}
}

results.each do |result|
  total_key = "nr_#{result[:action]}".to_sym
  summary[total_key][result[:database]] ||= 0
  summary[:tables_locked][result[:database]] ||= []

  summary[:tables_locked][result[:database]] << result[:table] if result[:action] == 'locked'
  summary[total_key][result[:database]] += 1
end

pp summary
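As a sanity check of the aggregation loop above, the same logic can be run outside a rails console against a small mocked result set (the :action/:database/:table keys mirror what the snippet expects; the data is made up):

```ruby
# Mocked dry-run results in the shape the summary loop expects.
results = [
  { action: 'locked',   database: 'main', table: 'ci_pipelines' },
  { action: 'locked',   database: 'ci',   table: 'users' },
  { action: 'skipped',  database: 'main', table: 'schema_migrations' },
  { action: 'unlocked', database: 'main', table: 'projects' }
]

summary = { tables_locked: {}, nr_locked: {}, nr_skipped: {}, nr_unlocked: {} }

results.each do |result|
  total_key = "nr_#{result[:action]}".to_sym
  summary[total_key][result[:database]] ||= 0
  summary[:tables_locked][result[:database]] ||= []

  summary[:tables_locked][result[:database]] << result[:table] if result[:action] == 'locked'
  summary[total_key][result[:database]] += 1
end

pp summary
```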

Example output from staging (the full list of to-be-locked tables is attached as to-be-locked-tables-staging.txt; summary counters shown below):

 :nr_locked=>{"main"=>5, "ci"=>383},
 :nr_skipped=>{"main"=>58, "ci"=>539},
 :nr_unlocked=>{"main"=>954, "ci"=>95}}

Verify the output:

  • The list of tables locked in Main should contain only tables that are unlocked in the CI database
  • The list of tables locked in CI should contain only tables that are unlocked in the Main database
  • The counters (number of skipped/locked/unlocked tables) should match expectations
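The first two verification bullets reduce to a simple rule: a table should end up unlocked only on the database that owns it, and locked everywhere else. A sketch of that rule (the gitlab_main/gitlab_ci schema names are an assumption about how table ownership is recorded):

```ruby
# Expected end state of a table on a given database: the owning database
# keeps it writable (unlocked), every other database locks it.
OWNING_DATABASE = { 'gitlab_main' => 'main', 'gitlab_ci' => 'ci' }.freeze

def expected_state(table_schema, database)
  OWNING_DATABASE[table_schema] == database ? :unlocked : :locked
end

expected_state('gitlab_ci', 'main')   # a CI table should be locked on main
expected_state('gitlab_ci', 'ci')     # ...and unlocked on ci
expected_state('gitlab_main', 'ci')   # a main table should be locked on ci
```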

Based on the output of the dry run, the BE engineer will decide whether it is safe, from an application perspective, to enable the write locks on the remaining tables.

DBRE Engineer: Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 10 minutes

  1. Set label changein-progress /label ~change::in-progress.
  2. Connect to a virtual machine where we can run rake tasks.
  3. Lock the writes on all the newly added tables (1 minute):
    1. DRY_RUN=true VERBOSE=true ./bin/rake gitlab:db:lock_writes to print the tables that will be locked
    2. VERBOSE=true ./bin/rake gitlab:db:lock_writes to lock the tables.
  4. Ensure there are no new exceptions: (9 minutes)
    1. Monitor Sentry for any noticeable new errors related to the database.
    2. Monitor PostgreSQL logs by searching for gitlab_schema_prevent_write. Link
  5. Set label changecomplete /label ~change::complete

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete - 5 minutes

  • If the monitoring (Sentry, Kibana) shows that one table is still receiving writes, we can remove the lock for that single table:
    • With help from BE engineer: Open a rails console
    • Run this:
     table_name = 'ci_pipelines'
     Gitlab::Database.database_base_models_with_gitlab_shared.each do |database_name, model|
       Gitlab::Database::LockWritesManager.new(
         table_name: table_name,
         connection: model.connection,
         database_name: database_name,
         with_retries: true
       ).unlock_writes
     end
  • !!! Panic button if there are too many write errors: VERBOSE=true ./bin/rake gitlab:db:unlock_writes. This removes all the triggers, so every table becomes writable on every database; this must be avoided if at all possible, and the locks would need to be re-applied later.
  • Set label changeaborted /label ~change::aborted

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity1 or severity2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.