
[FF] rollout dependency_scanning_sbom_scan_api

Summary

This issue is to roll out on production the feature that is currently behind the dependency_scanning_sbom_scan_api feature flag.

Owners

  • Most appropriate Slack channel to reach out to: #g_ast_composition_analysis
  • Best individual to reach out to: @gonzoyumo

Expectations

What are we expecting to happen?

When the feature flag is enabled, new API endpoints are available to submit an SBOM document to be scanned for vulnerabilities. This workflow is internal and the API only allows requests authenticated with a valid CI job token.
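
For illustration only, a submission from a CI job could look roughly like the sketch below. The endpoint path is an assumption made for this sketch (the actual routes are internal); the JOB-TOKEN header is the standard way to authenticate an API request with a CI job token.

# Hypothetical sketch: submit an SBOM report for scanning from inside a CI job.
# The URL path is assumed, not the confirmed route; the CI_* variables are GitLab predefined variables.
curl --request POST \
  --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
  --form "file=@gl-sbom.cdx.json" \
  "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/dependency_scanning/sbom_scans"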

During the rollout, the new DS Analyzer must also be explicitly configured to use this API. Once we've validated the reliability and stability of this new feature, the feature flag will be removed and the behavior will be enabled by default in the analyzer.
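
A minimal sketch of what that explicit opt-in could look like in the analyzer job, assuming a hypothetical toggle variable (the real variable name is defined by the analyzer and must be taken from its documentation):

# Hypothetical sketch: opt the DS analyzer into the new SBOM scan API during rollout.
# DS_USE_SBOM_SCAN_API is an assumed name, not a confirmed analyzer setting.
export DS_USE_SBOM_SCAN_API="true"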

What can go wrong and how would we detect it?

Kibana dashboard: https://log.gprd.gitlab.net/app/r/s/fhYY5

  1. The ProcessSbomScanWorker currently uses the high urgency setting, which means it must meet these requirements:
  • The median job execution time should be less than 1 second.
  • 99% of jobs should complete within 10 seconds.

This can be verified on the Kibana dashboard.

  2. The DestroyExpiredSbomScansWorker has a throttling mechanism to spread the cleanup workload across several hours. Before enabling the feature flag by default, we must verify that it is performing as expected and that the files and records are correctly cleaned up.

  3. The API endpoints are rate limited. There is a soft limit of 50 scans per hour per project; beyond that number, scans are still executed, but on a different worker, ProcessSbomScanThrottledWorker, with lower urgency. There are also hard limits set per project of 400 uploads per hour and 800 downloads per hour. During the first steps of the rollout, hard limits causing 429 responses will not impact customers, as we silently skip the new scan and fall back to the existing implementation (see the probe sketch below).

API requests can be verified on the Kibana dashboard.

The API rate limit utilization ratio can also be checked on Grafana: https://dashboards.gitlab.net/goto/u-AzuGqNg?orgId=1
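
To manually confirm the hard-limit behavior described in item 3 above, a rough probe (reusing the assumed endpoint path from the earlier sketch) is to repeat an upload and check the HTTP status code:

# Hypothetical sketch: the URL path is assumed, as in the submission example above.
curl --silent --output /dev/null --write-out "%{http_code}\n" \
  --request POST \
  --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
  --form "file=@gl-sbom.cdx.json" \
  "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/dependency_scanning/sbom_scans"
# A 429 here indicates the hard limit was hit; during the early rollout steps the new scan
# is silently skipped and the existing implementation is used instead.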


If necessary, the Sidekiq workers can be deferred or dropped; see https://docs.gitlab.com/development/feature_flags/#controlling-sidekiq-worker-behavior-with-feature-flags
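
As a rough sketch of what deferring or dropping could look like via chatops (the flag names below follow the pattern described in the linked documentation, and the worker class name may be namespaced; verify both there before running anything):

# Sketch only, verify the exact flag and worker names against the linked docs:
/chatops run feature set run_sidekiq_jobs_ProcessSbomScanWorker false   # defer jobs for the worker
/chatops run feature set drop_sidekiq_jobs_ProcessSbomScanWorker true   # drop jobs entirely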

Rollout Steps

See custom rollout plan in #551861 (comment 2774664518)

Note: Please make sure to run the chatops commands in the Slack channel that gets impacted by the command.

Rollout on non-production environments

  • Verify the MR with the feature flag is merged to master and has been deployed to non-production environments with /chatops run auto_deploy status <merge-commit-of-your-feature>
  • Deploy the feature flag at a percentage (recommended percentage: 50%) with /chatops run feature set <feature-flag-name> <rollout-percentage> --actors --dev --pre --staging --staging-ref
  • Monitor that the error rates did not increase (repeat with a different percentage as necessary).
  • Enable the feature globally on non-production environments with /chatops run feature set <feature-flag-name> true --dev --pre --staging --staging-ref
  • Verify that the feature works as expected. The best environment to validate the feature in is staging-canary as this is the first environment deployed to. Make sure you are configured to use canary.
  • If the feature flag causes end-to-end tests to fail, disable the feature flag on staging to avoid blocking deployments.
    • See #e2e-run-staging Slack channel and look for the following messages:
      • test kicked off: Feature flag <feature-flag-name> has been set to true on **gstg**
      • test result: This pipeline was triggered due to toggling of <feature-flag-name> feature flag

If you encounter end-to-end test failures and are unable to diagnose them, you may reach out to the #s_developer_experience Slack channel for assistance. Note that end-to-end test failures on staging-ref don't block deployments.
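
For convenience, the non-production commands from the checklist above with this issue's flag name substituted (50% is only the recommended example percentage):

/chatops run feature set dependency_scanning_sbom_scan_api 50 --actors --dev --pre --staging --staging-ref
/chatops run feature set dependency_scanning_sbom_scan_api true --dev --pre --staging --staging-ref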

Specific rollout on production

For visibility, all /chatops commands that target production must be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel.

  • Ensure that the feature MRs have been deployed to both production and canary with /chatops run auto_deploy status <merge-commit-of-your-feature>
  • Depending on the type of actor you are using, pick one of these options:
    • For project-actor: /chatops run feature set --project=gitlab-org/gitlab,gitlab-org/gitlab-foss,gitlab-com/www-gitlab-com <feature-flag-name> true
    • For group-actor: /chatops run feature set --group=gitlab-org,gitlab-com <feature-flag-name> true
    • For user-actor: /chatops run feature set --user=<gitlab-username-of-dri> <feature-flag-name> true
    • For all internal users: /chatops run feature set --feature-group=gitlab_team_members <feature-flag-name> true
  • Verify that the feature works for the specific actors.
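
As a concrete example for this flag, the project-actor variant from the list above would be:

/chatops run feature set --project=gitlab-org/gitlab,gitlab-org/gitlab-foss,gitlab-com/www-gitlab-com dependency_scanning_sbom_scan_api true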

Preparation before global rollout

  • Set a milestone to this rollout issue to signal for enabling and removing the feature flag when it is stable.
  • Check if the feature flag change needs to be accompanied with a change management issue. Cross link the issue here if it does.
  • Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the oncall SRE by using the @sre-oncall Slack alias.
  • Ensure that documentation exists for the feature, and the version history text has been updated.
  • Ensure that any breaking changes have been announced following the release post process to ensure GitLab customers are aware.
  • Notify the #support_gitlab-com Slack channel and your team channel (more guidance when this is necessary in the dev docs).

Global rollout on production

For visibility, all /chatops commands that target production must be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel.

(Optional) Release the feature with the feature flag

WARNING: This approach has the downside that it makes it difficult for us to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and they can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.

See instructions if you're sure about enabling the feature globally through the feature flag definition

If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:

  • Create a merge request with the following changes.
    • If the feature was enabled for various actors, ensure the feature has been enabled globally on production with /chatops run feature get <feature-flag-name>. If the feature has not been globally enabled, enable it globally using: /chatops run feature set <feature-flag-name> true
    • Set the default_enabled attribute in the feature flag definition to true.
    • Decide which changelog entry is needed.
  • Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: /chatops run release check <merge-request-url> <milestone>
  • After the default-enabling MR has been deployed, clean up the feature flag from all environments by running this chatops command in the #production channel: /chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • Set the next milestone to this rollout issue for scheduling the flag removal.
  • (Optional) You can create a separate issue for scheduling the steps below to Release the feature.
    • Set the title to "[FF] <feature-flag-name> - Cleanup".
    • Execute the /copy_metadata <this-rollout-issue-link> quick action to copy the labels from this rollout issue.
    • Link this rollout issue as a related issue.
    • Close this rollout issue.

Release the feature

After the feature has been deemed stable, the cleanup should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.

You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.

  • Create a merge request to remove the <feature-flag-name> feature flag. Ask for review/approval/merge as usual. The MR should include the following changes:
    • Remove all references to the feature flag from the codebase.
    • Remove the YAML definitions for the feature from the repository.
  • Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post: /chatops run release check <merge-request-url> <milestone>
  • Close the feature issue to indicate the feature will be released in the current milestone.
  • Once the cleanup MR has been deployed to production, clean up the feature flag from all environments by running this chatops command in the #production channel: /chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
  • Close this rollout issue.

Rollback Steps

  • This feature can be disabled on production by running the following Chatops command:
/chatops run feature set <feature-flag-name> false
  • Disable the feature flag on non-production environments:
/chatops run feature set <feature-flag-name> false --dev --pre --staging --staging-ref
  • Delete feature flag from all environments:
/chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
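
For quick reference during an incident, the production kill switch from the first bullet with this issue's flag name substituted:
/chatops run feature set dependency_scanning_sbom_scan_api false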