[Feature flag] Rollout of `dependency_scanning_on_advisory_ingestion`
Summary
This issue tracks the rollout of Add worker to scan newly ingested advisories (#371063 - closed) on production,
that is currently behind the dependency_scanning_on_advisory_ingestion
feature flag.
When the feature is enabled, the backend reacts to the ingestion of a new advisory by scanning existing projects and creating vulnerabilities in the affected ones. See Trigger vulnerability scans on advisory changes (&10025 - closed).
The backend only triggers scans when ingesting advisories published less than 14 days ago.
This feature is in Experiment, and the backend only scans projects where it's enabled. See #423903 (closed)
Owners
- Team: gitlab-org/secure/composition-analysis-be
- Most appropriate Slack channel to reach out to: #g_secure-composition-analysis
- Best individual to reach out to: @fcatteau
- PM: @smeadzinger
Stakeholders
- The Support Team
- group::threat insights
Expectations
What are we expecting to happen?
When that feature is enabled, projects affected by newly ingested advisories automatically get new vulnerability findings. These vulnerability findings are visible on the vulnerability report page.
- We expect to ingest a maximum of ~850 advisories within a single day. Year to date, the maximum ingested in a single day has been 81 advisories. Advisories are scheduled for export every 24 hours. Some extra stats about the advisory exports in 2023 so far:
  - The average number of advisories in a day is 17.
  - The median is 10 advisories a day.
  - The most frequent number seen (mode) was 5 advisories a day.
- To prevent scanning for older vulnerabilities, as would be the case with the initial sync, we've scoped the scanner so that it only triggers for advisories published within the last 14 days.
- For advisories that affect many projects, for example 10k SBOM occurrences, we expect the initial query to have acceptable performance. When we tested this with 30k SBOM occurrences, half of which were affected, the job finished in approximately 37 minutes. See #423578 (comment 1552902780).
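The 14-day scoping described above can be sketched as follows. This is an illustrative Ruby snippet, not the actual ingestion code; the constant and method names are hypothetical.

```ruby
require 'date'

# Hypothetical sketch of the publish-window check that decides whether an
# ingested advisory should trigger scans. The real logic lives in the
# advisory ingestion code and may differ.
SCAN_WINDOW_DAYS = 14

def triggers_scan?(published_date, today: Date.today)
  # Only advisories published less than 14 days ago trigger scans, which
  # keeps the backlog from the initial sync from being scanned.
  (today - published_date) < SCAN_WINDOW_DAYS
end
```

Under this sketch, an advisory published exactly 14 days ago would not trigger a scan, matching the "less than 14 days" wording above.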
How can it be tested?
This can be tested using existing projects:
- Check the latest NDJSON file created by the license-exporter for a given PURL type. For instance, as of today the latest NDJSON file for `npm` is https://storage.googleapis.com/prod-export-advisory-bucket-1a6c642fc4de57d4/v2/npm/1694502076/000000000.ndjson. See https://console.cloud.google.com/storage/browser/prod-export-advisory-bucket-1a6c642fc4de57d4/v2/npm
- In this NDJSON file, look for JSON objects where `.advisory.published_date` is less than 14 days ago.
- Search for projects that have SBOM occurrences that match `.packages[0].name` and `.packages[0].purl_type`, and where the feature (currently in Experiment) is enabled.
- Check the vulnerabilities and vulnerability findings of these projects using SQL queries or in the vulnerability report.
  - JSON objects identified above have corresponding vulnerabilities and findings.
  - The scanner of these vulnerabilities and findings is TODO.
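The NDJSON filtering steps above can be sketched with a small Ruby helper. The record layout is an assumption based only on the fields mentioned here (`.advisory.published_date`, `.packages[0].name`, `.packages[0].purl_type`), not the exporter's actual schema, and the method name is hypothetical.

```ruby
require 'json'
require 'date'

# Illustrative helper (not product code): given the contents of an NDJSON
# export, return the advisories published less than 14 days ago, along with
# the package name and purl_type we would match against SBOM occurrences.
def recent_advisories(ndjson, today: Date.today)
  ndjson.each_line.filter_map do |line|
    record = JSON.parse(line)
    published = Date.parse(record.dig('advisory', 'published_date'))
    next if (today - published) >= 14

    pkg = record['packages'].first
    { name: pkg['name'], purl_type: pkg['purl_type'], published: published }
  end
end
```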
Test project matching upcoming advisories
Alternatively, this can be tested by creating affected projects that match the advisories that are going to be ingested by the backend. This doesn't require read access to the production database.
- Look for upcoming advisories that are going to trigger scans during ingestion. You can check new NDJSON files in the advisory export bucket or for new YAML files in gemnasium-db.
- Create a project with dependencies that match these advisories.
- Enable Dependency Scanning in that project.
- Check the vulnerability report.
- At the moment there are no vulnerabilities that match the upcoming advisories.
- Wait for the backend to ingest the advisories.
- Check the vulnerability report.
- There are new vulnerabilities matching the newly ingested advisories.
- The reported scanner corresponds to CVS.
- The detection date (first column) is the date of ingestion.
When is the feature viable?
- [ ] The `package_metadata_advisory_sync` flag has been enabled.
- [ ] [BE] Only scan projects for which continuous vu... (#424629 - closed) and [FE] - Add a setting to toggle CVS feature in t... (#423903 - closed) have been merged and deployed.
- [ ] Enable the setting in the Security configuration page. We are targeting an Experiment release, so the setting will be off by default.
What might happen if this goes wrong?
- The queries issued by `Gitlab::VulnerabilityScanning::AdvisoryScanner` put excessive pressure on Postgres.
  - We have attempted to mitigate this by scanning the projects in batches, and by scoping scans down to projects that have the `dependency_scanning` feature enabled.
- The resources consumed by Sidekiq workers are excessive.
  - There has been some local testing to simulate scanning a large number of projects at once. See #423578 (comment 1552895263). The observed resource consumption was very low, and the number of jobs in the queue is capped at a maximum of ten.
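As a rough illustration of the batching mitigation mentioned above (the batch size and method name are assumptions, not the actual `Gitlab::VulnerabilityScanning::AdvisoryScanner` internals):

```ruby
# Hypothetical sketch: process affected projects in fixed-size batches so an
# advisory affecting many projects never issues one huge query.
BATCH_SIZE = 100

def scan_in_batches(affected_project_ids)
  affected_project_ids.each_slice(BATCH_SIZE).map do |batch|
    # The real worker would scan each batch (a bounded query, creating
    # vulnerabilities as needed) before moving on to the next one,
    # keeping memory use and query size under control.
    batch
  end
end
```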
What can we monitor to detect problems with this?
Consider mentioning checks for 5xx errors or other anomalies like an increase in redirects (302 HTTP response status)
What can we check for monitoring production after rollouts?
Consider adding links to check for Sentry errors, Production logs for 5xx, 302s, etc.
- Grafana
- Execution Rate (RPS)
- Error Ratio
- Jobs Enqueued
- Queue Length
- SQL duration
- SQL queries rate
- SQL transaction holding duration
- Thanos
  - Sidekiq execution SLIs; see graphs for `AdvisoriesSyncWorker` and `PackageMetadata::AdvisoryScanWorker`
    - `gitlab_sli_sidekiq_execution_apdex_total`
    - `gitlab_sli_sidekiq_execution_apdex_success_total`
    - `gitlab_sli_sidekiq_execution_error_total`
    - `gitlab_sli_sidekiq_execution_total`
  - Generic Sidekiq metrics
    - `sidekiq_jobs_cpu_seconds`
    - `sidekiq_jobs_db_seconds`
    - `sidekiq_redis_requests_duration_seconds`
    - `sidekiq_jobs_retried_total`
    - `sidekiq_jobs_interrupted_total`
    - `sidekiq_redis_requests_total`
    - `sidekiq_running_jobs`
    - `sidekiq_concurrency`
    - `sidekiq_mem_total_bytes`
- Kibana
- Monitor the P95 of CPU usage and duration in seconds. https://log.gprd.gitlab.net/app/r/s/a05O4
- DB duration of advisory scan, which might create vulnerabilities https://log.gprd.gitlab.net/app/r/s/A5VFK
- DB duration of advisories sync, which might trigger scans https://log.gprd.gitlab.net/app/r/s/EN2wz
Rollout Steps
Note: Please make sure to run the ChatOps commands in the Slack channel that is impacted by the command.
Rollout on non-production environments
- [ ] Verify the MR with the feature flag is merged to master.
- [ ] Verify that the feature MRs have been deployed to non-production environments with:
  - `/chatops run auto_deploy status <merge-commit-of-your-feature>`
- [ ] Enable the feature globally on non-production environments:
  - `/chatops run feature set dependency_scanning_on_advisory_ingestion true --dev --staging --staging-ref`
  - If the feature flag causes QA end-to-end tests to fail:
    - [ ] Disable the feature flag on staging to avoid blocking deployments.
- [ ] Verify that the feature works as expected. Posting the QA result in this issue is preferable. The best environment to validate the feature in is staging-canary, as this is the first environment deployed to. Note that you will need to be configured to use canary as outlined here when accessing the staging environment, to make sure you are testing appropriately.

For assistance with QA end-to-end test failures, please reach out via the #quality Slack channel. Note that QA test failures on staging-ref don't block deployments.
Specific rollout on production
For visibility, all `/chatops` commands that target production should be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (`#g_TEAM_NAME`).
- [ ] Ensure that the feature MRs have been deployed to both production and canary.
  - `/chatops run auto_deploy status <merge-commit-of-your-feature>`
- [ ] Depending on the type of actor you are using, pick one of these options:
  - If you're using project-actor, you must enable the feature on these entries:
    - `/chatops run feature set --project=gitlab-org/gitlab,gitlab-org/gitlab-foss,gitlab-com/www-gitlab-com dependency_scanning_on_advisory_ingestion true`
  - If you're using group-actor, you must enable the feature on these entries:
    - `/chatops run feature set --group=gitlab-org,gitlab-com dependency_scanning_on_advisory_ingestion true`
  - If you're using user-actor, you must enable the feature on these entries:
    - `/chatops run feature set --user=<your-username> dependency_scanning_on_advisory_ingestion true`
- [ ] Verify that the feature works on the specific entries. Posting the QA result in this issue is preferable.
Preparation before global rollout
- [ ] Set a milestone on the rollout issue to signal for enabling and removing the feature flag when it is stable.
- [ ] Check if the feature flag change needs to be accompanied by a change management issue. Cross-link the issue here if it does.
- [ ] Ensure that you or a representative in development can be available for at least 2 hours after feature flag updates in production. If a different developer will be covering, or an exception is needed, please inform the on-call SRE by using the @sre-oncall Slack alias.
- [ ] Ensure that documentation has been updated (More info).
- [ ] Leave a comment on [the feature issue][main-issue] announcing the estimated time when this feature flag will be enabled on GitLab.com.
- [ ] Ensure that any breaking changes have been announced following the release post process, to ensure GitLab customers are aware.
- [ ] Notify #support_gitlab-com and your team channel (more guidance on when this is necessary in the dev docs).
- [ ] Ensure that the feature flag rollout plan is reviewed by another developer familiar with the domain.
Global rollout on production
For visibility, all `/chatops` commands that target production should be executed in the #production Slack channel and cross-posted (with the command results) to the responsible team's Slack channel (`#g_TEAM_NAME`).
- [ ] Incrementally roll out the feature.
  - Between every step, wait for at least 15 minutes and monitor the appropriate graphs on https://dashboards.gitlab.net.
  - If the feature flag in code has an actor, perform an actor-based rollout.
    - [-] `/chatops run feature set dependency_scanning_on_advisory_ingestion <rollout-percentage> --actors` (not applicable)
  - If the feature flag in code does NOT have an actor, perform a time-based rollout (random rollout).
    - [ ] `/chatops run feature set dependency_scanning_on_advisory_ingestion <rollout-percentage> --random`
  - Enable the feature globally on the production environment.
    - [ ] `/chatops run feature set dependency_scanning_on_advisory_ingestion true`
- [ ] Observe the appropriate graphs on https://dashboards.gitlab.net and verify that services are not affected.
- [ ] Leave a comment on [the feature issue][main-issue] announcing that the feature has been globally enabled.
- [ ] Wait for at least one day for the verification term.
(Optional) Release the feature with the feature flag
If you're still unsure whether the feature is deemed stable but want to release it in the current milestone, you can change the default state of the feature flag to be enabled. To do so, follow these steps:
- [ ] Create a merge request with the following changes. Ask for review and merge it.
  - [ ] Set the `default_enabled` attribute in the feature flag definition to `true`.
  - [ ] Review what warrants a changelog entry and decide if a changelog entry is needed.
- [ ] Ensure that the default-enabling MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post.
  - [ ] `/chatops run release check <merge-request-url> <milestone>`
- [ ] Consider cleaning up the feature flag from all environments by running this ChatOps command in the #production channel. Otherwise, these settings may override the default-enabled state.
  - [ ] `/chatops run feature delete dependency_scanning_on_advisory_ingestion --dev --staging --staging-ref --production`
- [ ] Close [the feature issue][main-issue] to indicate the feature will be released in the current milestone.
- [ ] Set the next milestone on this rollout issue for scheduling the flag removal.
- [ ] (Optional) You can create a separate issue for scheduling the steps below to Release the feature.
  - [ ] Set the title to "[Feature flag] Cleanup `dependency_scanning_on_advisory_ingestion`".
  - [ ] Execute the `/copy_metadata <this-rollout-issue-link>` quick action to copy the labels from this rollout issue.
  - [ ] Link this rollout issue as a related issue.
  - [ ] Close this rollout issue.

WARNING: This approach has the downside that it makes it difficult to clean up the flag. For example, on-premise users could disable the feature on their GitLab instance. But when you remove the flag at some point, they suddenly see the feature as enabled and can't roll it back to the previous behavior. To avoid this potential breaking change, use this approach only for urgent matters.
Release the feature
After the feature has been deemed stable, the clean up should be done as soon as possible to permanently enable the feature and reduce complexity in the codebase.
You can either create a follow-up issue for Feature Flag Cleanup or use the checklist below in this same issue.
- [ ] Create a merge request to remove the `dependency_scanning_on_advisory_ingestion` feature flag. Ask for review and merge it.
  - [ ] Remove all references to the feature flag from the codebase.
  - [ ] Remove the YAML definitions for the feature from the repository.
  - [ ] Create a changelog entry.
- [ ] Ensure that the cleanup MR has been included in the release package. If the merge request was deployed before the monthly release was tagged, the feature can be officially announced in a release blog post.
  - [ ] `/chatops run release check <merge-request-url> <milestone>`
- [ ] Close [the feature issue][main-issue] to indicate the feature will be released in the current milestone.
- [ ] Clean up the feature flag from all environments by running this ChatOps command in the #production channel:
  - [ ] `/chatops run feature delete dependency_scanning_on_advisory_ingestion --dev --staging --staging-ref --production`
- [ ] Close this rollout issue.
Rollback Steps
- [ ] This feature can be disabled by running the following ChatOps command:
  - `/chatops run feature set dependency_scanning_on_advisory_ingestion false`
- [ ] If the previous step does not remove the queued jobs quickly enough, reach out to an SRE to remove the queued jobs manually. See the following from the Teleport Rails console connection guide:

  > It is worth noting that the access currently granted is read-only. If you need to perform write operations on the production environment, declare a change in the #production Slack channel using the `/change declare` command; after you fill in the steps and other details, an SRE should be able to execute the change for you.

  ```ruby
  # Drain the advisory scan queue (requires write access on the Rails console).
  queue = Sidekiq::Queue.new('package_metadata_advisory_scan')
  queue.each { |job| job.delete }
  ```