
Migrate ClickHouse staging data to the new instance

Production Change

Change Summary

Related OKR: https://gitlab.com/gitlab-com/gitlab-OKRs/-/work_items/6669

We use ClickHouse Cloud instances to implement analytical features within the GitLab application. We have a ClickHouse service configured for staging. Unfortunately, when we set up the service, us-east1 was not available, so the instance was created in us-central1. To reduce the latency between the GitLab application servers and ClickHouse, we would like to migrate the data to a newly created ClickHouse server.

I suggest running the migration synchronously via a Zoom call.

High-level steps:

  1. Disable the data-ingestion-related feature flags so the source DB won't receive new writes.
  2. Invoke the migrator script on a node where the rails console is available to copy data from the source DB to the target DB (a sketch of the copy follows this list).
  3. Re-configure the staging ClickHouse configuration variables to point to the target DB. (https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/8e3d114c81a043d6055b032b93292f2866486cb8/releases/gitlab/values/gstg.yaml.gotmpl#L1784)
  4. Verify the migrated data.
  5. Enable data-ingestion related feature flags.
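
For reference, a minimal sketch of the per-table copy in step 2, assuming the migrator pushes data from the source instance to the target via ClickHouse's remoteSecure table function; the host, database, user, and password below are placeholders, and the actual script may work differently:

    -- Hypothetical copy step for one table, executed against the source instance.
    -- Host, database, user, and password are placeholders, not the real values.
    INSERT INTO FUNCTION remoteSecure(
        'target.clickhouse.cloud:9440', 'default', 'events', 'migrator', '<password>'
    )
    SELECT *
    FROM events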

Prerequisites:

  • Secrets, usernames and hosts are known.
  • Connectivity between the two DBs is working (from source to target); see the checks sketched after this list.
  • The rails application is available on the host (invoking the rails runner command works).
  • Both DBs have the same DB structure (tables, materialized views). (@ahegyi ensures this prior to the migration.)
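
Hedged sketches for the connectivity and structure checks (host and credentials below are placeholders):

    -- Connectivity: executed on the source instance, this forces a round-trip
    -- to the target's system.one table. Host, user, and password are placeholders.
    SELECT 1
    FROM remoteSecure('target.clickhouse.cloud:9440', 'system', 'one', 'migrator', '<password>');

    -- Structure: run on both instances and diff the output.
    SELECT name, engine
    FROM system.tables
    WHERE database = currentDatabase()
    ORDER BY name;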

Change Details

  1. Services Impacted - ServiceWeb, ServiceSidekiq
  2. Change Technician - @jcstephenson
  3. Change Reviewer - @gsgl
  4. Time tracking - 40 minutes (30 for the change steps, 10 for rollback)
  5. Downtime Component - No downtime expected; ClickHouse data ingestion is suspended during the migration, but the application remains available.

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30 minutes

  • Set label change::in-progress: /label ~change::in-progress

  • Enable the suspend_click_house_data_ingestion ops feature flag on staging only (#staging Slack channel).

       /chatops run feature set suspend_click_house_data_ingestion true --staging
  • @ahegyi verifies that no data ingestion is happening, e.g. with the query sketched below.
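
    One hedged way to confirm that ingestion has stopped, assuming system.query_log is enabled on the source instance:

        -- Recent INSERTs on the source; once the flag is enabled,
        -- no new entries should show up here.
        SELECT event_time, query
        FROM system.query_log
        WHERE type = 'QueryFinish'
          AND query_kind = 'Insert'
          AND event_time > now() - INTERVAL 10 MINUTE
        ORDER BY event_time DESC
        LIMIT 10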

  • Start a console session on a node where the rails console is available.

  • Clone the migrator script and invoke it in a screen or tmux session; on staging the script should finish in a few minutes. Credentials will be provided via shared secrets: @ahegyi provides a shared note via 1Password so we can easily invoke the script in the rails console.

  • @ahegyi verifies the data by counting records on both DBs:

    SELECT *
    FROM
    (
        (SELECT 'events' AS table, (SELECT count(*) FROM events FINAL) AS count)
        UNION ALL
        (SELECT 'ci_finished_builds' AS table, (SELECT count(*) FROM ci_finished_builds FINAL) AS count)
        UNION ALL
        (SELECT 'audit_events' AS table, (SELECT count(*) FROM audit_events FINAL) AS count)
        UNION ALL
        (SELECT 'sync_cursors' AS table, (SELECT count(*) FROM sync_cursors FINAL) AS count)
        UNION ALL
        (SELECT 'code_suggestion_usages' AS table, (SELECT count(*) FROM code_suggestion_usages FINAL) AS count)
    ) data
    ORDER BY table
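
    Optionally, source and target can be compared in a single query executed on the target. A hedged sketch, assuming placeholder host and credentials, to be repeated per table:

        SELECT
            (SELECT count(*) FROM events FINAL) AS target_count,
            -- FINAL mirrors the verification query above; drop it if the server
            -- rejects FINAL on the remoteSecure table function.
            (
                SELECT count(*)
                FROM remoteSecure('source.clickhouse.cloud:9440', 'default', 'events', 'migrator', '<password>') FINAL
            ) AS source_count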
  • Reconfigure staging to use the new ClickHouse server (https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/8e3d114c81a043d6055b032b93292f2866486cb8/releases/gitlab/values/gstg.yaml.gotmpl#L1784)

  • @ahegyi verifies the connectivity by visiting the contribution analytics page. (https://staging.gitlab.com/groups/gitlab-org/-/contribution_analytics)

  • Disable the suspend_click_house_data_ingestion ops feature flag on staging (#staging Slack channel).

       /chatops run feature delete suspend_click_house_data_ingestion --staging
  • @ahegyi verifies that the background jobs are working via Kibana.

  • Set label change::complete: /label ~change::complete

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 10 minutes

The migration can be stopped at any step. If something does not work after the new ClickHouse database is set up, we revert the staging configuration to the old values.

Monitoring

Key metrics to observe

The suspended workers can be verified via Kibana: once the suspend_click_house_data_ingestion feature flag is enabled, we shouldn't see new jobs being scheduled:

  1. Go to Kibana
  2. Select the pubsub-sidekiq-inf-gstg* index
  3. Filter for json.class.name is ClickHouse::EventsSyncWorker
  4. You shouldn't see a new job appearing (the worker is scheduled every 3 minutes)

Apart from this, there is not much to monitor. @ahegyi will keep an eye on the ClickHouse metrics in the ClickHouse Cloud console.

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2.
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.