[Feature flag] Rollout of `expiring_pats_30d_60d_notifications`
## Summary
This issue is to roll out the 30d & 60d expiring PAT notifications feature on production, which is currently behind the `expiring_pats_30d_60d_notifications` feature flag.
## Owners
- Most appropriate Slack channel to reach out to: #g_govern_authentication
- Best individual to reach out to: @atevans
## Expectations
### What are we expecting to happen?
In addition to the current, 7-day notifications for expiring PATs, we should see 30d and 60d notification emails as well. No other change in behavior.
### What can go wrong and how would we detect it?
This change alters the behavior of `PersonalAccessTokens::ExpiringWorker`. The worker is a cron job that runs at 01:00 UTC.
`PersonalAccessTokens::ExpiringWorker` operates on three intervals: `:seven_days`, `:thirty_days`, and `:sixty_days`. When the feature flag is enabled, the worker processes the `:thirty_days` and `:sixty_days` intervals using the same code paths as the current `:seven_days` interval.
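For context, a minimal sketch of how the flag gate might look inside the worker. This is illustrative only, not the actual implementation; the interval names come from this issue, while `process_interval` and the constants are hypothetical:

```ruby
# Illustrative sketch only -- not the actual GitLab implementation.
# Assumes the worker loops over notification intervals and skips the new
# ones while the flag is disabled; `process_interval` is a hypothetical helper.
module PersonalAccessTokens
  class ExpiringWorker
    INTERVALS = [:seven_days, :thirty_days, :sixty_days].freeze
    NEW_INTERVALS = [:thirty_days, :sixty_days].freeze

    def perform
      INTERVALS.each do |interval|
        # 30d/60d notifications are only sent when the flag is enabled;
        # the existing 7-day behavior is unchanged.
        next if NEW_INTERVALS.include?(interval) &&
          Feature.disabled?(:expiring_pats_30d_60d_notifications)

        process_interval(interval)
      end
    end
  end
end
```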
We expect a significant increase in email sends, especially on the first day after the flag is rolled out, since all tokens expiring in 30 or 60 days will need to be notified. After the first day, the volume of email sends should decrease.
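For a rough sense of that first-day volume, a Rails console query along these lines could be run before enabling the flag. This is an approximation only: `expires_at` and `revoked` are existing columns on `personal_access_tokens`, but the worker's real query and de-duplication logic will differ.

```ruby
# Approximate count of tokens that would be picked up by the new 30d/60d
# notification windows on day one. Not the worker's actual query.
notification_dates = [30.days.from_now.to_date, 60.days.from_now.to_date]

PersonalAccessToken
  .where(revoked: false)
  .where(expires_at: notification_dates)
  .count
```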
We have done the best we can: the queries and the worker have received extensive database and backend review for performance. But in case of unknown unknowns, possible problems include:
- Notification emails going out to users who do not have expiring access tokens
- Notification emails about expiring access tokens where the token's actual expiry date does not match the window specified in the email
- Database queries in `PersonalAccessTokens::ExpiringWorker` timing out, causing the worker to fail and not send any email notifications
- Database queries in `PersonalAccessTokens::ExpiringWorker` using excess database resources, causing slowdowns in other parts of the app or degradation of database services
- Multiple copies of `PersonalAccessTokens::ExpiringWorker` running concurrently, causing duplicate email notifications to be sent
- An increase in errors coming from `PersonalAccessTokens::ExpiringWorker`
- Notification emails about token expiry failing to go out, in the case that the worker does not complete as expected
The worker has a max runtime of 3 minutes, after which it will re-enqueue itself with a delay. If the feature flag is turned off, the next run of the worker will not process the `:thirty_days` or `:sixty_days` intervals.
Turning the flag off should therefore mitigate, within a few minutes, any problems associated with turning it on.
If we have further issues despite turning the FF off, the cron job can be disabled from the admin Sidekiq page: find `personal_access_tokens_expiring_worker` on `/admin/sidekiq/cron` and press the "Disable" button. SRE and Support teams can help with this.
If we need to purge the job after it has re-enqueued itself, there should be only one job to purge; it runs in the `cronjob` queue and can be purged from the Sidekiq UI.
The job does not retry itself on errors; instead it will be re-run by the next day's cron schedule.
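As a rough illustration of the runtime-limit, requeue, and no-retry behavior described above (the names, delay value, and batching helpers are stand-ins, not the real worker code):

```ruby
# Sketch of the behavior described above; not the actual worker.
module PersonalAccessTokens
  class ExpiringWorker
    include ApplicationWorker

    # Failed runs are not retried by Sidekiq; the daily cron schedule
    # picks the work up again the next day.
    sidekiq_options retry: false

    MAX_RUNTIME = 3.minutes
    REQUEUE_DELAY = 2.minutes # stand-in value

    def perform
      start_time = Time.current

      each_notification_batch do |batch| # hypothetical batching helper
        deliver_notifications(batch)     # hypothetical delivery helper

        if Time.current - start_time > MAX_RUNTIME
          # Stop early and re-enqueue the remaining work with a delay.
          self.class.perform_in(REQUEUE_DELAY)
          break
        end
      end
    end
  end
end
```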
## Rollout Steps
Note: Please make sure to run the chatops commands in the Slack channel that gets impacted by the command.
### Rollout on non-production environments
- Verify the MR with the feature flag is merged to `master` and has been deployed to non-production environments with `/chatops run auto_deploy status <merge-commit-of-your-feature>`.
- Deploy the feature flag at a percentage (recommended percentage: 50%) with `/chatops run feature set expiring_pats_30d_60d_notifications <rollout-percentage> --actors --dev --pre --staging --staging-ref`.
- Monitor that the error rates did not increase (repeat with a different percentage as necessary).
- Enable the feature globally on non-production environments with `/chatops run feature set expiring_pats_30d_60d_notifications true --dev --pre --staging --staging-ref`.
- Verify that the feature works as expected (see the console spot-check sketch after this list). The best environment to validate the feature in is `staging-canary`, as this is the first environment deployed to. Make sure you are configured to use canary.
- If the feature flag causes end-to-end tests to fail, disable the feature flag on staging to avoid blocking deployments.
- See the `#qa-staging` Slack channel and look for the following messages:
  - test kicked off: `Feature flag expiring_pats_30d_60d_notifications has been set to true on **gstg**`
  - test result: `This pipeline was triggered due to toggling of expiring_pats_30d_60d_notifications feature flag`
For assistance with end-to-end test failures, please reach out via the #test-platform Slack channel. Note that end-to-end test failures on staging-ref don't block deployments.
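A quick spot-check from a staging-canary Rails console could look like the following. This is a manual sanity-check sketch: `Feature.enabled?` is the standard GitLab flag check, and running the worker once out of band is assumed to be acceptable on staging.

```ruby
# Confirm the flag state on this environment.
Feature.enabled?(:expiring_pats_30d_60d_notifications)
# => true once the flag has been enabled

# Run the worker once out of band and watch the logs / outgoing mail
# for the new 30d and 60d notification emails.
PersonalAccessTokens::ExpiringWorker.new.perform
```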
### Specific rollout on production
For visibility, all /chatops commands that target production should be executed in the #production Slack channel
and cross-posted (with the command results) to the responsible team's Slack channel.
- Ensure that the feature MRs have been deployed to both production and canary with `/chatops run auto_deploy status <merge-commit-of-your-feature>`.
### Preparation before global rollout
- Set a milestone on this rollout issue to signal when to enable and remove the feature flag once it is stable.
- Check if the feature flag change needs to be accompanied by a change management issue. Cross-link the issue here if it does.
- Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the on-call SRE by using the `@sre-oncall` Slack alias.
- Ensure that documentation exists for the feature and that the version history text has been updated.
- Leave a comment on the feature issue announcing the estimated time when this feature flag will be enabled on GitLab.com.
- Ensure that any breaking changes have been announced following the release post process so that GitLab customers are aware.
- Notify the `#support_gitlab-com` Slack channel and your team channel (more guidance on when this is necessary in the dev docs).
- Ensure that the feature flag rollout plan is reviewed by another developer familiar with the domain.
### Global rollout on production
For visibility, all /chatops commands that target production should be executed in the #production Slack channel
and cross-posted (with the command results) to the responsible team's Slack channel (#g_govern_authentication).
- Enable the feature globally on the production environment: `/chatops run feature set expiring_pats_30d_60d_notifications true`.
- Observe appropriate graphs on https://dashboards.gitlab.net and verify that services are not affected.
- Leave a comment on the feature issue announcing that the feature has been globally enabled.
- Wait for at least one day for the verification term (a quick cron-job health check is sketched below).
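One day into the verification term, a quick console check that the cron job is still registered, enabled, and actually ran. This uses the sidekiq-cron API; the job name comes from the mitigation notes above.

```ruby
# Cron job health check via the sidekiq-cron API.
job = Sidekiq::Cron::Job.find('personal_access_tokens_expiring_worker')

job.status            # => "enabled" unless someone disabled it
job.last_enqueue_time # should be around 01:00 UTC today
```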
### (Optional) Release the feature with the feature flag
WARNING: This approach has the downside that it makes it difficult for us to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and they can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.
See instructions if you're sure about enabling the feature globally through the feature flag definition
If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:
- Create a merge request with the following changes. Ask for review and merge it.
- If the feature was enabled for various actors, ensure the feature has been enabled globally on production with `/chatops run feature get expiring_pats_30d_60d_notifications`. If the feature has not been globally enabled, enable it globally using `/chatops run feature set expiring_pats_30d_60d_notifications true`.
- Set the `default_enabled` attribute in the feature flag definition to `true`.
- Decide which changelog entry is needed.
- Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: `/chatops run release check <merge-request-url> 17.4`.
- Consider cleaning up the feature flag from all environments by running this chatops command in the `#production` channel; otherwise the per-environment settings may override the default enabled: `/chatops run feature delete expiring_pats_30d_60d_notifications --dev --pre --staging --staging-ref --production`.
- Close the feature issue to indicate the feature will be released in the current milestone.
- Set the next milestone on this rollout issue for scheduling the flag removal.
- (Optional) You can create a separate issue for scheduling the steps below to Release the feature:
  - Set the title to "[Feature flag] Cleanup `expiring_pats_30d_60d_notifications`".
  - Execute the `/copy_metadata <this-rollout-issue-link>` quick action to copy the labels from this rollout issue.
  - Link this rollout issue as a related issue.
  - Close this rollout issue.
### Release the feature
After the feature has been deemed stable, the cleanup should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.
You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.
- Create a merge request to remove the `expiring_pats_30d_60d_notifications` feature flag. Ask for review/approval/merge as usual. The MR should include the following changes (a sketch of the code change follows this checklist):
  - Remove all references to the feature flag from the codebase.
  - Remove the YAML definitions for the feature from the repository.
  - Create a changelog entry.
- Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: `/chatops run release check <merge-request-url> 17.4`.
- Close the feature issue to indicate the feature will be released in the current milestone.
- Clean up the feature flag from all environments by running this chatops command in the `#production` channel: `/chatops run feature delete expiring_pats_30d_60d_notifications --dev --pre --staging --staging-ref --production`.
- Close this rollout issue.
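If the worker gates the new intervals roughly the way the sketch earlier in this issue does, the code part of the cleanup MR is mostly deleting that guard. Illustrative only; the real diff depends on how the flag is actually referenced.

```ruby
# Before cleanup (illustrative): 30d/60d intervals are skipped while the
# flag is off.
next if NEW_INTERVALS.include?(interval) &&
  Feature.disabled?(:expiring_pats_30d_60d_notifications)

# After cleanup: the guard is deleted, all three intervals are always
# processed, and the flag's YAML definition under config/feature_flags/
# is removed from the repository.
```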
## Rollback Steps
- This feature can be disabled on production by running the following Chatops command:
  `/chatops run feature set expiring_pats_30d_60d_notifications false`
- Disable the feature flag on non-production environments:
  `/chatops run feature set expiring_pats_30d_60d_notifications false --dev --pre --staging --staging-ref`
- Delete the feature flag from all environments:
  `/chatops run feature delete expiring_pats_30d_60d_notifications --dev --pre --staging --staging-ref --production`