[Feature flag] Rollout of `ci_unlock_pipelines_queue`
Change Management Issue: gitlab-com/gl-infra/production#16451 (closed)
## Summary

This issue is to roll out the pipeline unlock mechanism that is currently behind the `ci_unlock_pipelines_queue` feature flag.

At the same time, we also need to slowly roll out the separate feature flags `ci_unlock_pipelines_high` / `ci_unlock_pipelines_medium` / `ci_unlock_pipelines`, which control the rate of the `Ci::UnlockPipelinesInQueueWorker` limited capacity worker.
## Owners

- Team: ~"group::pipeline security"
- Most appropriate Slack channel to reach out to: `#g_pipeline_security`
- Best individual to reach out to: @iamricecake
- PM: @jocelynjane
## Stakeholders

## Expectations
### What are we expecting to happen?

This has no user-facing change and no behavior change with regard to unlocking pipelines. It is a performance improvement that makes unlocking pipelines scalable, so we can move on to fixing the bugs related to it.
### When is the feature viable?

When a project uses CI/CD and has pipelines.

The new mechanism takes effect on a project, as its pipelines are unlocked, if we have enabled `ci_unlock_pipelines_queue` for it. This flag only enqueues pipelines to be unlocked into a Redis sorted set. The actual unlock and DB writes happen in the `Ci::UnlockPipelinesInQueueWorker` limited capacity worker, which needs one of `ci_unlock_pipelines_high` / `ci_unlock_pipelines_medium` / `ci_unlock_pipelines` enabled.
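The queueing mechanism described above can be sketched as follows. This is a simplified in-memory Ruby illustration of the Redis sorted-set semantics the plan relies on (`ZADD` on enqueue, `ZPOPMIN` on dequeue) — the class and method names are hypothetical and are not the actual GitLab implementation:

```ruby
# Simplified in-memory sketch of the Redis sorted-set queue used by the
# unlock mechanism: enqueueing scores each pipeline ID (like ZADD), and
# the worker pops the lowest-scored entry (like ZPOPMIN).
# Names are hypothetical, for illustration only.
class PipelineUnlockQueue
  def initialize
    @scores = {} # pipeline_id => score; a sorted set keeps members unique
  end

  # ZADD semantics: re-enqueueing an existing member only updates its score.
  def enqueue(pipeline_id, score)
    @scores[pipeline_id] = score
  end

  # ZPOPMIN semantics: remove and return the member with the lowest score.
  def pop_min
    return nil if @scores.empty?

    pipeline_id, _score = @scores.min_by { |_id, score| score }
    @scores.delete(pipeline_id)
    pipeline_id
  end

  def size
    @scores.size
  end
end
```

Because the set is keyed by pipeline ID, enqueueing the same pipeline twice does not grow the queue, which keeps Redis memory bounded even if unlock requests are duplicated.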
### What might happen if this goes wrong?

If we see that Redis can't keep up with the number of pipelines being enqueued into the sorted set, we should disable the `ci_unlock_pipelines_queue` feature flag:

```shell
/chatops run feature set ci_unlock_pipelines_queue false
```

If we see a lot of Sidekiq errors from `Ci::Refs::UnlockPreviousPipelinesWorker`, we should disable the `ci_unlock_pipelines_queue` feature flag:

```shell
/chatops run feature set ci_unlock_pipelines_queue false
```

If we see that Redis can't keep up with the number of `ZPOPMIN` calls, which happen as the worker picks up entries from the queue to process, we should lower the rate of the worker to the lowest value:

```shell
/chatops run feature set ci_unlock_pipelines_high false
/chatops run feature set ci_unlock_pipelines_medium false
/chatops run feature set ci_unlock_pipelines true
```

If this does not help, we should disable the limited capacity worker and the queue:

```shell
/chatops run feature set ci_unlock_pipelines_queue false
/chatops run feature set ci_unlock_pipelines_high false
/chatops run feature set ci_unlock_pipelines_medium false
/chatops run feature set ci_unlock_pipelines false
```

If we see a lot of Sidekiq errors from `Ci::UnlockPipelinesInQueueWorker`, we should disable the limited capacity worker and the queue:

```shell
/chatops run feature set ci_unlock_pipelines_queue false
/chatops run feature set ci_unlock_pipelines_high false
/chatops run feature set ci_unlock_pipelines_medium false
/chatops run feature set ci_unlock_pipelines false
```

If we see database saturation due to heavy writes on `ci_pipelines` and `ci_job_artifacts`, which may cause lots of dead tuples and replication lag similar to what happened in gitlab-com/gl-infra/production#8621 (closed), we should lower the rate of the worker to the lowest value:

```shell
/chatops run feature set ci_unlock_pipelines_high false
/chatops run feature set ci_unlock_pipelines_medium false
/chatops run feature set ci_unlock_pipelines true
```

If this does not help, we should disable the limited capacity worker and the queue:

```shell
/chatops run feature set ci_unlock_pipelines_queue false
/chatops run feature set ci_unlock_pipelines_high false
/chatops run feature set ci_unlock_pipelines_medium false
/chatops run feature set ci_unlock_pipelines false
```
### What can we monitor to detect problems with this?

Check for 5xx errors or other anomalies, such as an increase in redirects (302 HTTP response status).
### What can we check to monitor production after the rollout?

- New Unlock Pipelines Mechanism Kibana Dashboard
- Redis Grafana Dashboard
- Grafana `Ci::Refs::UnlockPreviousPipelinesWorker` Overview
- Grafana `Ci::UnlockPipelinesInQueueWorker` Overview
- Kibana Logs for `Ci::Refs::UnlockPreviousPipelinesWorker`
  - Observe the following metadata attributes:
    - `total_pending_entries`
      - If this number continuously increases for a long time, consider increasing the limited capacity worker rate.
    - `total_new_entries`
- Kibana Logs for `Ci::UnlockPipelinesInQueueWorker`
  - Observe the following metadata attributes:
    - `exec_timeout`
      - This is not necessarily a bad thing; the worker is designed to pick up where it left off.
      - We can observe this in correlation with other factors to see if the workers can keep up with the number of enqueued pipelines.
    - `unlocked_job_artifacts`
    - `unlocked_pipeline_artifacts`
- Grafana Sidekiq Overview
- Grafana PostgreSQL Overview
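The `exec_timeout` behavior noted above — the worker stopping at a time budget and picking up where it left off on the next run — can be sketched roughly like this. This is a hypothetical illustration, not the actual worker code; the queue object and timeout value are assumptions:

```ruby
# Hypothetical sketch of a time-budgeted worker loop: it pops pipelines
# from the queue until a deadline, then stops. Entries still in the queue
# are simply picked up by the next worker run, so hitting the timeout is
# not a failure, just a checkpoint.
EXEC_TIMEOUT = 5 # seconds; illustrative value only

def process_queue(queue, unlocked_counts, timeout: EXEC_TIMEOUT)
  deadline = Time.now + timeout

  while (pipeline_id = queue.pop_min)
    unlock_pipeline(pipeline_id, unlocked_counts)

    # Stop once the budget is spent; remaining entries stay queued.
    return :exec_timeout if Time.now >= deadline
  end

  :queue_drained
end

def unlock_pipeline(pipeline_id, counts)
  # Placeholder for the real DB writes that unlock the pipeline's
  # artifacts (reflected in the unlocked_job_artifacts and
  # unlocked_pipeline_artifacts log attributes).
  counts[pipeline_id] = :unlocked
end
```

This is why a rising `exec_timeout` count only matters in correlation with a growing `total_pending_entries`: on its own it just means the worker is checkpointing.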
## Rollout Steps

Note: Please make sure to run the chatops commands in the Slack channel that gets impacted by the command.
### Rollout on non-production environments

- [ ] Verify the MR with the feature flag is merged to master.
- [ ] Verify that the feature MRs have been deployed to non-production environments with:

  ```shell
  /chatops run auto_deploy status <merge-commit-of-your-feature>
  ```

- [ ] Enable the feature globally on non-production environments:

  ```shell
  /chatops run feature set ci_unlock_pipelines_queue true --dev --staging --staging-ref
  /chatops run feature set ci_unlock_pipelines true --dev --staging --staging-ref
  ```

  - If the feature flag causes QA end-to-end tests to fail, disable the feature flag on staging to avoid blocking deployments.
- [ ] Verify that the feature works as expected. Posting the QA result in this issue is preferable. The best environment to validate the feature in is `staging-canary`, as this is the first environment deployed to. Note that you will need to make sure you are configured to use canary as outlined here when accessing the staging environment, in order to make sure you are testing appropriately.

For assistance with QA end-to-end test failures, please reach out via the `#quality` Slack channel. Note that QA test failures on `staging-ref` don't block deployments.
### Specific rollout on production

For visibility, all `/chatops` commands that target production should be executed in the `#production` Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (`#g_TEAM_NAME`).
- [ ] Ensure that the feature MRs have been deployed to both production and canary:

  ```shell
  /chatops run auto_deploy status <merge-commit-of-your-feature>
  ```

- [ ] Depending on the type of actor you are using, pick one of these options:
  - If you're using project-actor, you must enable the feature on these entries:

    ```shell
    /chatops run feature set --project=gitlab-org/gitlab,gitlab-org/gitlab-foss,gitlab-com/www-gitlab-com ci_unlock_pipelines_queue true
    /chatops run feature set ci_unlock_pipelines_medium true
    ```

    - At this point, let's not ramp up the worker's rate yet. Let's stick with the medium rate, `ci_unlock_pipelines_medium`, which caps `max_running_jobs` to 10.
- [ ] Verify that the feature works on the specific entries. Posting the QA result in this issue is preferable.
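Conceptually, the three rate flags select the limited capacity worker's `max_running_jobs` cap. A hypothetical sketch of that mapping follows — only the medium cap of 10 is stated in this plan; the high and low values here are made-up placeholders:

```ruby
# Hypothetical mapping from the rate feature flags to the limited
# capacity worker's max_running_jobs cap. Only the medium cap of 10 is
# documented in this rollout plan; the other values are placeholders.
def max_running_jobs(high_enabled:, medium_enabled:, low_enabled:)
  if high_enabled
    25 # placeholder value for ci_unlock_pipelines_high
  elsif medium_enabled
    10 # ci_unlock_pipelines_medium, per this rollout plan
  elsif low_enabled
    1  # placeholder value for ci_unlock_pipelines (lowest rate)
  else
    0  # no rate flag enabled: the worker does not process the queue
  end
end
```

This is why "lowering the rate to the lowest value" in the rollback recipes means disabling the high and medium flags while keeping the base `ci_unlock_pipelines` flag enabled.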
### Preparation before global rollout

- [ ] Set a milestone to the rollout issue to signal for enabling and removing the feature flag when it is stable.
- [ ] Check if the feature flag change needs to be accompanied by a change management issue. Cross-link the issue here if it does.
- [ ] Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the on-call SRE by using the `@sre-oncall` Slack alias.
- [ ] Ensure that documentation has been updated (More info).
- [ ] Leave a comment on the feature issue announcing the estimated time when this feature flag will be enabled on GitLab.com.
- [ ] Ensure that any breaking changes have been announced following the release post process to ensure GitLab customers are aware.
- [ ] Notify `#support_gitlab-com` and your team channel (more guidance on when this is necessary in the dev docs).
- [ ] Ensure that the feature flag rollout plan is reviewed by another developer familiar with the domain.
### Global rollout on production

For visibility, all `/chatops` commands that target production should be executed in the `#production` Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (`#g_TEAM_NAME`).

Take note that at this point, we have the worker rate at medium (`ci_unlock_pipelines_medium`).

- [ ] Incrementally roll out the feature.
  - Between every step, wait for at least 15 minutes and monitor the appropriate graphs on https://dashboards.gitlab.net.
  - If the feature flag in code has an actor, perform an actor-based rollout:

    ```shell
    /chatops run feature set ci_unlock_pipelines_queue <rollout-percentage> --actors
    ```

  - We want to be conservative here and allot 1 day per increment.
- [ ] Observe appropriate graphs on https://dashboards.gitlab.net and verify that services are not affected.
- [ ] If we think that we can still ramp up the worker rate, we can enable the `ci_unlock_pipelines_high` feature flag:

  ```shell
  /chatops run feature set ci_unlock_pipelines_high true
  ```

  - But if at the medium rate we can see that pipelines are being unlocked at a good pace, there is no need to ramp up.
- [ ] Leave a comment on the feature issue announcing that the feature has been globally enabled.
- [ ] Wait for at least one day for the verification term.
## (Optional) Release the feature with the feature flag

If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:

- [ ] Create a merge request with the following changes. Ask for review and merge it.
  - Set the `default_enabled` attribute in the feature flag definition to `true`.
  - Review what warrants a changelog entry and decide if a changelog entry is needed.
- [ ] Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post:

  ```shell
  /chatops run release check <merge-request-url> <milestone>
  ```

- [ ] Consider cleaning up the feature flag from all environments by running these chatops commands in the `#production` channel. Otherwise, these settings may override the default-enabled state:

  ```shell
  /chatops run feature delete <feature-flag-name> --dev --staging --staging-ref --production
  ```

- [ ] Close the feature issue to indicate the feature will be released in the current milestone.
- [ ] Set the next milestone to this rollout issue for scheduling the flag removal.
- [ ] (Optional) You can create a separate issue for scheduling the steps below to Release the feature.
  - Set the title to "[Feature flag] Cleanup `<feature-flag-name>`".
  - Execute the `/copy_metadata <this-rollout-issue-link>` quick action to copy the labels from this rollout issue.
  - Link this rollout issue as a related issue.
  - Close this rollout issue.

WARNING: This approach has the downside that it makes it difficult for us to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and they can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.
## Release the feature

After the feature has been deemed stable, the cleanup should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.

You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.

- [ ] Create a merge request to remove the `<feature-flag-name>` feature flag. Ask for review and merge it.
  - Remove all references to the feature flag from the codebase.
  - Remove the YAML definitions for the feature from the repository.
  - Create a changelog entry.
- [ ] Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post:

  ```shell
  /chatops run release check <merge-request-url> <milestone>
  ```

- [ ] Close the feature issue to indicate the feature will be released in the current milestone.
- [ ] Clean up the feature flag from all environments by running these chatops commands in the `#production` channel:

  ```shell
  /chatops run feature delete <feature-flag-name> --dev --staging --staging-ref --production
  ```

- [ ] Close this rollout issue.
## Rollback Steps

- [ ] This feature can be disabled by running the following chatops commands:

  ```shell
  /chatops run feature set ci_unlock_pipelines_queue false
  /chatops run feature set ci_unlock_pipelines_high false
  /chatops run feature set ci_unlock_pipelines_medium false
  /chatops run feature set ci_unlock_pipelines false
  ```