[Feature flag] Rollout of `gitlab_memory_watchdog`

Summary

See #365950 (closed)

We introduced a new memory observation daemon ("memory watchdog" or memwd) in !91910 (merged). It is guarded by three different switches:

  1. An environment variable that we are looking to enable on a per-deployment basis on SaaS: GITLAB_MEMORY_WATCHDOG_ENABLED
  2. An ops feature toggle (gitlab_memory_watchdog) that allows memwd to perform its checks; when disabled, memwd keeps running but skips the actual checks. This is a safety switch in case we ever encounter situations where it reaps workers excessively.
  3. A second-level ops toggle (enforce_memory_watchdog) that, when disabled, "defuses" the handler we call into when high memory fragmentation is observed. This is useful for responding to high-fragmentation events with just log and Prometheus events instead of actually reaping workers.

We may remove the second ops toggle once we are happy with the tuning of this component.
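
To illustrate how the three levels stack, here is a minimal sketch; the variable and flag names are the ones listed above, while the way the environment variable is injected depends on each deployment's configuration:

# 1. Per-deployment switch; on SaaS this is set through the deployment's
#    configuration rather than an interactive shell export
export GITLAB_MEMORY_WATCHDOG_ENABLED=true

# 2. Ops toggle: let memwd perform its checks (log and Prometheus events only at this point)
/chatops run feature set gitlab_memory_watchdog true

# 3. Second-level ops toggle: arm the handler so it may actually restart workers
/chatops run feature set enforce_memory_watchdog true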

Rollout plan

  • Merge !91910 (merged); this is safe to ship since everything is disabled by default.
  • Set GITLAB_MEMORY_WATCHDOG_ENABLED in selected environments. Related MRs:
  • Enable gitlab_memory_watchdog; this will run the daemon logic in those environments that were enabled in the previous step, but only log events rather than kill workers.
  • If necessary, fine-tune the memwd config if it is too sensitive or not sensitive enough (see the tuning sketch after this list).
  • Enable enforce_memory_watchdog to let it actually restart workers.

We may have to repeat the last few steps for every deployment we enable. For instance, Sidekiq may require a different set of tuning parameters than Puma.
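
As a sketch of what the tuning step might involve: memwd is configured through per-deployment environment variables; the names and values below are assumptions for illustration only and should be checked against the memwd documentation before use.

# Illustrative values only; variable names and thresholds are assumptions, not confirmed settings
export GITLAB_MEMWD_MAX_HEAP_FRAG=0.5       # heap fragmentation ratio above which a worker is flagged
export GITLAB_MEMWD_MAX_STRIKES=5           # consecutive violations before the handler is invoked
export GITLAB_MEMWD_SLEEP_TIME_SEC=60       # seconds between checks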

Owners

  • Team: groupmemory
  • Most appropriate Slack channel to reach out to: #g_memory
  • Best individual to reach out to: @mkaeppler
  • PM: @iroussos

Stakeholders

Expectations

What are we expecting to happen?

Nothing should happen, since the daemon is disabled on multiple levels. If it has already been enabled for some deployments via the env switch, it should start logging and sending events to Prometheus.

When is the feature viable?

When we find that:

  1. Identification of workers to reap via log events correlates strongly with those workers actually running high on memory
  2. Workers are neither identified too frequently nor too infrequently (too frequent would be dozens of times a minute, too infrequent would be several times a day or not at all)

What might happen if this goes wrong?

Outside of unknown bugs, only the last step can go wrong, which is when we start killing processes. This is why we run it in friendly (log-only) mode first, so we can tune it not to be too aggressive.

What can we monitor to detect problems with this?

Consider mentioning checks for 5xx errors or other anomalies, like an increase in redirects (302 HTTP response status).

See gitlab-com/gl-infra/production#7428 (closed)

What can we check for monitoring production after rollouts?

See gitlab-com/gl-infra/production#7428 (closed)

Rollout Steps

Rollout on non-production environments

  • Ensure that the feature MRs have been deployed to non-production environments.
    • /chatops run auto_deploy status <merge-commit-of-your-feature>
  • Enable the feature globally on non-production environments.
    • /chatops run feature set <feature-flag-name> true --dev --staging --staging-ref
  • Verify that the feature works as expected. Posting the QA result in this issue is preferable. The best environment to validate the feature in is staging-canary, as this is the first environment deployed to. Note that you will need to be configured to use canary, as outlined here, when accessing the staging environment, to make sure you are testing appropriately.
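
For this rollout specifically, assuming the standard pattern above applies to both ops flags (and enabling enforcement only once the log-only behaviour has been verified), the non-production commands would be:

/chatops run feature set gitlab_memory_watchdog true --dev --staging --staging-ref
/chatops run feature set enforce_memory_watchdog true --dev --staging --staging-ref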

Specific rollout on production

  • Ensure that the feature MRs have been deployed to both production and canary.
    • /chatops run auto_deploy status <merge-commit-of-your-feature>
  • If you're using project-actor, you must enable the feature on these entries:
    • /chatops run feature set --project=gitlab-org/gitlab,gitlab-org/gitlab-foss,gitlab-com/www-gitlab-com <feature-flag-name> true
  • If you're using group-actor, you must enable the feature on these entries:
    • /chatops run feature set --group=gitlab-org,gitlab-com <feature-flag-name> true
  • If you're using user-actor, you must enable the feature on these entries:
    • /chatops run feature set --user=<your-username> <feature-flag-name> true
  • Verify that the feature works on the specific entries. Posting the QA result in this issue is preferable.

Preparation before global rollout

  • Check if the feature flag change needs to be accompanied with a change management issue. Cross link the issue here if it does.
  • Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the oncall SRE by using the @sre-oncall Slack alias.
  • Ensure that documentation has been updated (More info).
  • Announce on the feature issue an estimated time this will be enabled on GitLab.com.
  • Ensure that any breaking changes have been announced following the release post process to ensure GitLab customers are aware.
  • Notify #support_gitlab-com and your team channel (more guidance when this is necessary in the dev docs).

Global rollout on production

For visibility, all /chatops commands that target production should be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (#g_TEAM_NAME).
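
For this rollout, assuming the standard global-enable pattern, that would mean the following, with the enforcement flag flipped only once the log-only tuning from the plan above looks good:

/chatops run feature set gitlab_memory_watchdog true
/chatops run feature set enforce_memory_watchdog true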

(Optional) Release the feature with the feature flag

If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:

  • Create a merge request with the following changes. Ask for review and merge it.
  • Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post.
    • /chatops run release check <merge-request-url> <milestone>
  • Consider cleaning up the feature flag from all environments by running this chatops command in the #production channel. Otherwise, these settings may override the default-enabled state.
    • /chatops run feature delete <feature-flag-name> --dev --staging --staging-ref --production
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • Set the next milestone to this rollout issue for scheduling the flag removal.
  • (Optional) You can create a separate issue for scheduling the steps below to Release the feature.
    • Set the title to "[Feature flag] Cleanup <feature-flag-name>".
    • Execute the /copy_metadata <this-rollout-issue-link> quick action to copy the labels from this rollout issue.
    • Link this rollout issue as a related issue.
    • Close this rollout issue.

WARNING: This approach has the downside that it makes it difficult for us to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and they can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.

Release the feature

After the feature has been deemed stable, the cleanup should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.

You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.

  • Create a merge request to remove <feature-flag-name> feature flag. Ask for review and merge it.
    • Remove all references to the feature flag from the codebase.
    • Remove the YAML definitions for the feature from the repository.
    • Create a changelog entry.
  • Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post.
    • /chatops run release check <merge-request-url> <milestone>
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • If not already done, clean up the feature flag from all environments by running this chatops command in the #production channel:
    • /chatops run feature delete <feature-flag-name> --dev --staging --staging-ref --production
  • Close this rollout issue.

Rollback Steps

  • This feature can be disabled by running the following chatops commands:
    • Only log violations: /chatops run feature set enforce_memory_watchdog false
    • Do not act at all: /chatops run feature set gitlab_memory_watchdog false

To prevent the watchdog from even starting, the GITLAB_MEMORY_WATCHDOG_ENABLED environment variable needs to be unset in the affected deployments.
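
A minimal sketch of that last step, assuming the variable is read as a boolean from the process environment (the exact mechanism depends on how each deployment injects it):

# Remove the per-deployment switch so the watchdog thread is never started
unset GITLAB_MEMORY_WATCHDOG_ENABLED   # or drop it from the deployment's environment configuration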
