delivery issues — https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues

---
**Rollback Discovery for auto-deploy (Dedicated/GET tooling)** — John Skarbek, 2024-03-28
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20111

### Problem Statement
We do not currently have a rollback recovery method for non-Geo installations using the Dedicated Tooling.
Existing reference material:
* [Requirement of Geo but not considered Production Ready](https://gitlab-com.gitlab.io/gl-infra/gitlab-dedicated/team/runbooks/geo.html#rollback)
* [Non-Goal for Zero Downtime Maintenance](https://gitlab-com.gitlab.io/gl-infra/gitlab-dedicated/team/architecture/blueprints/zero-downtime-upgrades.html#rollback)
There's a set of jobs in switchboard UAT that make reference to rolling back: https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/sandbox/switchboard_uat/-/blob/4685a9f12aff8596b384deb05f319be500d3995f/templates/switchboard.yml#L333-427
The pipeline has a slightly different setup for these jobs:
![Screenshot_2024-03-28_at_15.58.56](/uploads/e4cdbdcb5db444f49ffe8e1467102bf0/Screenshot_2024-03-28_at_15.58.56.png)
#### Overview
1. The tenant is failed over to its Geo counterpart
2. The previous site's database is removed
3. The database is recreated as empty and configured to sync with the secondary site
4. The tenant is failed back over to the original Primary site
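The four steps above can be sketched as an ordered routine. Everything below is illustrative — the site model and field names are invented, not Dedicated tooling APIs — and only captures the ordering and the end state:

```python
# Hypothetical sketch of the Geo-based rollback sequence described above.
# The tenant model and operations are illustrative only.

def rollback_with_geo(tenant):
    # 1. Fail the tenant over to its Geo counterpart.
    tenant["active_site"] = "secondary"

    # 2. Remove the previous (primary) site's database.
    tenant["primary_db"] = None

    # 3. Recreate the database empty, configured to sync from the
    #    site currently serving traffic.
    tenant["primary_db"] = {"state": "syncing", "source": "secondary"}

    # 4. Once replication catches up, fail back to the original primary.
    tenant["primary_db"]["state"] = "in_sync"
    tenant["active_site"] = "primary"
    return tenant

tenant = {"active_site": "primary",
          "primary_db": {"state": "in_sync", "source": None}}
result = rollback_with_geo(tenant)
```

The invariant worth noting is that the tenant is always served by exactly one site while its counterpart's database is rebuilt.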
---
### Research
This rollback method strictly relies on Geo. Geo is not currently planned for the initial iteration of Cells, so the method Dedicated intends to leverage is insufficient for the needs of .com. While we ultimately want to run stable versions at all times, we will have at least a few Cells within the first couple of Rings of our deployment mechanism for which this may not be the case. We'll need a method of returning a Cell to a working state whenever we perform a rollback on our current Main Stage of Production.

---
**Database Migrations unsafe for auto-deploy** — John Skarbek, 2024-03-28
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20110

### Problem Statement
We recently had a database migration fail on one of our test sandboxes. The failure was caused by a problem with the migration itself, and it exposed an issue with the way the Dedicated tooling currently processes database migrations. There are a few issues we must address prior to enabling auto-deploy into Cells:
* Database migrations need to be better timed - currently they are executed after various upgrades, which is non-compliant with the recommended upgrade strategy
* Nothing validates that migrations are applied - this is a blind strategy where we assume success as long as the job itself succeeds
An example of how this was detected in our auto-deployed Delivery Tenant: https://ops.gitlab.net/gitlab-com/delivery/cells-tissue/-/jobs/13309773
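One way to make the "blind" step verifiable is to compare the migration versions a package ships with against the versions recorded as applied in the database (Rails keeps these in the `schema_migrations` table). The sketch below assumes both lists can be obtained; the helper name is hypothetical:

```python
def pending_migrations(shipped_versions, applied_versions):
    """Return shipped migration versions not yet recorded as applied."""
    return sorted(set(shipped_versions) - set(applied_versions))

# Example: the package ships three migrations, the database has applied two.
shipped = ["20240301120000", "20240310090000", "20240320100000"]
applied = ["20240301120000", "20240310090000"]

missing = pending_migrations(shipped, applied)
# A deploy job could fail loudly here instead of assuming success:
if missing:
    print(f"migrations not applied: {missing}")
```

A check like this would let the migration job assert "everything is applied" rather than merely "the job exited zero".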
### Solutions
* [ ] Investigate
* [ ] Determine how we can ensure migrations are up-to-date for all deployments

---
**Create a monitor during a deployment** — John Skarbek, 2024-03-27
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20109

### Problem Statement
We currently don't know how often a deploy impacts production from the perspective of the metrics we observe for deployment health. Typically SREs already get alerted when things go wrong, but we don't have the same vantage point inside an auto-deploy pipeline. Without this knowledge, a few factors prevent us from simply automating hitting the play button:
* We have no way to know when to pause a deployment or trigger a rollback
* We don't know when things go unhealthy and thus rely on the EOC to alert us
* Nothing alerts the current RM that something bad might have happened
### Consideration
Build a job that runs alongside a deployment. The job should start when a deploy to a target environment and stage begins, and end after the last deployment procedure job has completed (normally the Kubernetes deployment). We already have sufficient safeguards in our pipeline when QA fails, so it should rarely need to run longer than that. Initially, the goal of this job is simply to understand which metrics to pull from and to alert us only when things go unhealthy. Doing so will provide some information:
* How often deployments go unhealthy - this may require fine tuning what and how we leverage our deployment metrics
* How often we get pinged - we don't want this to go to the EOC since we do not yet know how badly things could be impacted
Since this job is continuously running in the background, we would have the ability to add a variety of things inside of it, such as alerting us when a target metric starts looking bad, and in the future, initiating a stoppage of the current deployment and perhaps even use this to trigger auto-rollbacks when necessary.
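As a first iteration, the background job described above could reduce to: poll a health metric for the duration of the deployment window and record threshold breaches. The sketch below is hypothetical — the metric, the threshold, and the sample source are all assumptions; real samples would come from Prometheus queries issued on an interval:

```python
# Hypothetical deploy-monitor sketch; not real deployment tooling.

UNHEALTHY_ERROR_RATIO = 0.01  # illustrative threshold

def find_breaches(samples):
    """Return (elapsed_seconds, value) pairs that cross the threshold."""
    return [(t, v) for t, v in samples if v > UNHEALTHY_ERROR_RATIO]

# One sample per poll interval across the deployment window; the spike
# at t=120 is the kind of event that would ping the release manager
# (not the EOC, per the issue above).
samples = [(0, 0.002), (60, 0.004), (120, 0.05), (180, 0.003)]
breaches = find_breaches(samples)
```

Starting with "record and notify" keeps the job read-only; pausing or rolling back a deployment can be layered on once the breach data is trusted.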
### Exit Criterion
* [ ] Decide on implementation details
* [ ] Build it
* [ ] Test it

---
**Design dashboard using pipeline and job duration metrics** — Reuben Pereira, 2024-03-27
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20107

## Summary
We have metrics like `delivery_deployment_pipeline_duration_seconds` and `delivery_deployment_job_duration_seconds` which provide pipeline and job durations respectively, for all deployment related pipelines and jobs.
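Duration metrics like these are typically summarized as an average plus high percentiles. In Grafana this would be a quantile aggregation over the metric above; the snippet below only illustrates the arithmetic, with made-up sample values:

```python
from statistics import mean, quantiles

# Illustrative pipeline durations in seconds; real values would come
# from delivery_deployment_pipeline_duration_seconds samples.
durations = [310, 325, 340, 360, 410, 430, 480, 520, 900, 1500]

avg = mean(durations)
# quantiles(..., n=100) returns the 1st..99th percentile cut points.
pct = quantiles(durations, n=100)
p80, p95 = pct[79], pct[94]
```

Note how the long tail (the 900 s and 1500 s runs) pulls p95 far above the average — exactly the behavior a duration dashboard should surface.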
## Proposal
Using the above metrics, design dashboards showing useful information like the average, p80, and p95 durations of pipelines and jobs.

---
**Draft: Release Environment - Notify the owner of the change when the deployment pipeline fails** — Dat Tang, 2024-03-26
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20105

The idea here is to notify the owner of the change (i.e. the merged MR) when a release environment deployment fails. For example: in the deployment Slack notification, a reply is added when the pipeline fails, tagging the assignee(s) of the merged MR that triggered the deployment.
Questions:
* Definition of "owners": the MR assignee(s), the one who merges, or both?
### Exit Criteria
* [ ] The change's owners are informed
* [ ] Error logs are provided

---
**Release Environment - Slack notification for QA result** — Dat Tang, 2024-03-22
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20104

The result of QA tests should be returned to both the Delivery and Test Platform teams to speed up the feedback loop. Slack is our communication channel in this case.
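For the notification itself, a Slack incoming webhook is enough. The webhook URL and channel below are placeholders; only the payload shape (a `text` field with a Slack-formatted link) follows Slack's incoming-webhook format:

```python
import json
import urllib.request

def qa_result_payload(status, pipeline_url):
    """Build the Slack message body for a finished QA pipeline."""
    emoji = ":white_check_mark:" if status == "success" else ":x:"
    return {"text": f"{emoji} Release environment QA {status}: <{pipeline_url}|pipeline>"}

def notify(webhook_url, payload):
    # Fire-and-forget POST; real use would add retries and error handling.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

payload = qa_result_payload("failed", "https://example.com/pipelines/123")
# notify("https://hooks.slack.com/services/PLACEHOLDER", payload)
```

The payload builder is separated from the send step so the message content can be tested without hitting Slack.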
### Exit Criteria
* [ ] Define which channels/audiences should receive the notification.
* [ ] This task also depends on https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20066+
* [ ] Slack notifications are sent when the QA pipeline succeeds/fails
* [ ] The notification has a link to the pipeline
* [ ] (Optional) The notification has a link to the failed job with the job name

---
**Uploading release-tools traces is failing** — Reuben Pereira, 2024-03-22
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20103

Uploading of OpenTelemetry traces to GitLab's tracing collector is failing with the following error:
```
2024-03-22 14:51:40.110886 D ReleaseTools::GitlabOpsClient -- [HTTParty] [2024-03-22 14:51:40 +0000] 200 "GET https://ops.gitlab.net/api/v4/projects/gitlab-org%2Frelease%2Ftools/pipelines/3019169/bridges" 2
E, [2024-03-22T14:51:40.410577 #17] ERROR -- : OpenTelemetry error: OTLP exporter received rpc.Status{message=, details=[]} for uri=https://observe.gitlab.com/v3/9970/430285/ingest/traces
E, [2024-03-22T14:51:40.410642 #17] ERROR -- : OpenTelemetry error: Unable to export 6 spans
Failed to flush span processor
2024-03-22 14:51:40.410879 F Rake::Task -- Task failed -- Exception: SystemExit: exit
/builds/gitlab-org/release/tools/lib/tasks/trace.rake:36:in `abort'
/builds/gitlab-org/release/tools/lib/tasks/trace.rake:36:in `block (2 levels) in <top (required)>'
/usr/local/bundle/gems/rake-13.1.0/lib/rake/task.rb:279:in `block in execute'
```
Examples:
- https://ops.gitlab.net/gitlab-org/release/tools/-/jobs/13262304
- https://ops.gitlab.net/gitlab-org/release/tools/-/jobs/13261319

---
**[Discussion] Move Release Environment configurations out of GitLab repository** — Dat Tang, 2024-03-21
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20101

Currently, many configurations of the release environment live in the https://gitlab.com/gitlab-org/gitlab repo, like:
* [The script to generate environment file for the deployment](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/construct-release-environments-versions.rb?ref_type=heads)
* [The pipeline to deploy and run QA on the release environments](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/release-environments/main.gitlab-ci.yml?ref_type=heads)
IMO, this mixes concerns between the GitLab repo (the Rails code) and the release-environments repo (the place where all actions related to release environments should happen). Ideally, the only mention of release-environments in the GitLab repo should be [the trigger in the pipeline](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/release-environments.gitlab-ci.yml?ref_type=heads). The rest of the logic should live in the release-environments repo.
This change clears up the areas of concern between the two repos and avoids a situation where a change in release-environments requires a backport to the GitLab repo (to the three supported minor versions).
Let's discuss.

---
**Patch releases: Detect if no bugs or security fixes are available for a version** — Mayra Cabrera, 2024-03-20
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20100

https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1193 made patch releases SLO-driven; as a consequence, patch releases are now more frequent, roughly every 2 weeks. With this frequent cadence, there might be a case where a version doesn't include any bug or security fixes. In this scenario, release-tools should be smart enough to detect that there are no pending fixes for a specific version and skip the process for that version during the patch release.

---
**Drop the `new_patch_release_process`** — Mayra Cabrera, 2024-03-20
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20099

The `new_patch_release_process` flag was added for the new patch release process behavior set on https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1193.
After it has been tested, the feature flag should be dropped:
- [ ] Drop the feature flag from release-tools
- [ ] Drop the feature flag from ops

---
**Print out the running version on the environment during post-deploy migration (PDM)** — Dat Tang, 2024-03-22
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20098

Currently, when a new PDM is triggered, two notification channels are used:
* Slack notification
![Screenshot 2024-03-20 at 17.37.48.png](/uploads/d6902f5ac2dd5858f874362aed5c6758/Screenshot_2024-03-20_at_17.37.48.png)
* GitLab comment on the Release issue ([screenshot in this comment](https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20098#note_1823426484))
Neither of them says which auto-deploy package version is running on the environment, which makes debugging and auditing harder (i.e. determining what version was running when the PDM happened).
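One way to close this gap: the running version can be read from the instance's own `/api/v4/version` endpoint (a real GitLab API, requiring an authenticated token) and interpolated into both notifications. The fetch is stubbed below, and the message template and helper names are assumptions:

```python
# Sketch: include the running package version in PDM notifications.

def pdm_message(template, version):
    """Interpolate the running auto-deploy package version into a notification."""
    return template.format(version=version)

# In release-tools this would come from an authenticated
# GET https://gitlab.com/api/v4/version call; stubbed for illustration:
running = {"version": "16.10.202403201400", "revision": "ad47749e6cd"}

text = pdm_message(
    "Post-deploy migrations started on gitlab.com (running {version})",
    f"{running['version']}-{running['revision']}",
)
```

The same `text` could go to both the Slack message and the GitLab comment, so the two channels never disagree about the version.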
### Exit Criteria
* [ ] The running auto-deploy package version on the production environment (i.e. gitlab.com) is shown on both the GitLab comment and the Slack message (e.g. 16.10.202403201400-ad47749e6cd.a68c5e051f9).
* [ ] (Optional) Link to the last commit of the package on security/gitlab.

---
**Release Environment - Security Hardening - Run the runner with limited permission** — Dat Tang, 2024-03-22
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20097

As a recommendation in https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/bau/-/issues/2955+, the runner used for release-environments should have limited privileges, using security context and privilege escalation security mechanisms. More details can be found in this [comment](https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/bau/-/issues/2955#note_1796524306 "Security Review Request: Release Environment").
### Implementation/Testing Hint
Thanks to @ggillies:
> You can actually configure multiple runners in the same runner deployment, by using different headings in the [runners](https://gitlab.com/gitlab-com/gl-infra/release-environments/-/blob/main/cluster-applications/helmfile.yaml?ref_type=heads#L113) section. This means you can add a new runner alongside the current one (with a new tag), and try running jobs on it in an MR first, making sure they work, before cutting over
### Exit Criteria
* [ ] Define the list of the runner's required privileges
* [ ] The runner only runs with required privileges
* [ ] The deployment pipeline of release-environments works normally

---
**Release Environment - Security Hardening - Run IaC scans using gandalf** — Dat Tang, 2024-03-20
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20096

As a recommendation in https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/bau/-/issues/2955+, we should add an infra security scan using [gandalf](https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/gandalf). The details about the recommendation can be found [here](https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/bau/-/issues/2955#note_1796384013).
An MR has been opened (https://gitlab.com/gitlab-com/gl-infra/release-environments/-/merge_requests/173), but there is still an issue with the permissions to read the repository.
### Exit Criteria
* [ ] [gandalf](https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/gandalf) is integrated as a job in the [release-environments](https://gitlab.com/gitlab-com/gl-infra/release-environments) pipeline
* [ ] The [release-environments](https://gitlab.com/gitlab-com/gl-infra/release-environments) pipeline fails when the [gandalf](https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security/gandalf) scan fails

---
**Update patch release terminology** — Mayra Cabrera, 2024-03-21
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20093

As part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1193, patch releases are being repurposed to include bug and security fixes. This change requires updating the term used in the release tooling:
* Tracking issue: https://gitlab.com/gitlab-org/gitlab/-/issues/446238 -> It should be called `Patch release`
* release task issue: https://gitlab.com/gitlab-org/release/tasks/-/issues/8715 -> It should be called `Patch release`
* `#releases` Slack banner: ![Screenshot_2024-03-18_at_10.47.39](/uploads/fa994903cec305a993129a4707261582/Screenshot_2024-03-18_at_10.47.39.png) -> It should be called `Patch release`

---
**Prepare announcement issue for planned patch releases** — Mayra Cabrera, 2024-03-28
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20092

On https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20015, patch releases were rebranded to include bug and security fixes. A blog post (https://gitlab.com/gitlab-com/marketing/brand-product-marketing/content-strategy-and-ops/blog/-/issues/33) is being prepared to communicate this to our customers. The purpose of this issue is to create an announcement issue to broadcast this change internally. The announcement:
* Should give a brief overview of the release update.
* Should clarify that the developer process doesn't change.
* Should be shared with Engineers, AppSec, Support, and ProdSec.

---
**Determine Risk for Post Deployment/Background Migrations in relation to Release Preparation** — John Skarbek, 2024-03-26
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20091

## Problem Statement
We need to clarify where the breakpoint lies between migrations that are still running and migrations that are safe for release.
#### Today
We have 3 types of database migrations:
* Regular migrations - usually schema changes or data modifications that are quick
* Post Deployment migrations - executed after a deployment due to application requirements
* Background migrations - long-running migrations whose duration depends on the data and how it is being modified
[Migration Style Guide](https://docs.gitlab.com/ee/development/migration_style_guide.html)
#### Risk Assessment
Currently we only validate that the post-deployment migrations have been executed; we do not validate that migrations are _complete_.
This means a long-running migration _may_ still be active at the time Release Managers begin release procedures.
The primary question here is: _is there risk in this?_ Or, put another way: _what level of risk are we subject to?_
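To reason about the risk concretely, the missing check is roughly "no tracked migration is still executing". A sketch of such a guard follows; the record shape and statuses are loosely modeled on GitLab's batched background migrations, but the field names are illustrative, not the real schema:

```python
# Hypothetical "migrations are complete" guard for release preparation.

def release_safe(migrations):
    """Return (safe, still_running): safe only when nothing is executing."""
    still_running = [m for m in migrations if m["status"] in ("active", "paused")]
    return not still_running, still_running

migrations = [
    {"name": "BackfillNamespaceId", "status": "finished"},
    {"name": "ResetDuplicateCiTags", "status": "active"},
]
safe, still_running = release_safe(migrations)
# Release procedures would block (or flag the risk) while `safe` is False.
```

Even if we decide not to block on long-running migrations, a guard like this would at least make the risk visible at the moment release procedures start.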
#### Questions
GitLab has a giant database, and it is common for large data migrations to take a long time.
Because we only check that the PDM job ran, we would discover that we've harmed the database only by causing some sort of incident.
Since we do not wait for lengthy background migrations to complete, we may be missing potential failure scenarios that our self-managed users would be subject to.
#### Exit Criterion
* [ ] Assess risk - reach out to dev teams as necessary to help us determine an answer
* [ ] If we want to wait for the PDM to complete, we'll have a bit of work to do: create the necessary issues to ensure that we check the PDM up to the selected SHA for release and validate that all migrations have completed.
* [ ] If we deem this risk low, document and close

---
**Documentation + Announcement plan for patch release information dashboard** — Jenny Kim, 2024-03-15
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20090

### Context
Similarly to https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/19991 and https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/19992, we should add documentation and announce the new addition about the patch release information to the "[delivery: release information](https://dashboards.gitlab.net/d/delivery-release_info/delivery3a-release-information?orgId=1)" dashboard.
For the monthly release information on the dashboard, we opened an announcement and feedback issue (https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20072), so we should have a similar one for the patch release information.
### Exit Criteria
* [ ] \[tbd\] documentation locations
* [ ] \[tbd\] announcement locations

---
**Add patch release info to release information dashboard** — Jenny Kim, 2024-03-15
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20089

### Context
Before starting this issue, we should have a PoC Grafana dashboard with panels that show patch release information from https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20088.
Very similarly to https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/19990, this issue is to transfer that PoC into [the runbook's release-info dashboard jsonnet file](https://gitlab.com/gitlab-com/runbooks/-/blob/master/dashboards/delivery/release_info.dashboard.jsonnet?ref_type=heads) using the json model of the PoC dashboard.
### Exit Criteria
* [ ] Patch release information is available in the "[delivery: release information](https://dashboards.gitlab.net/d/delivery-release_info/delivery3a-release-information?orgId=1)" dashboard on Grafana

---
**PoC for patch release information** — Jenny Kim, 2024-03-15
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20088

### Context
Before starting this issue, `delivery_release_patch_status` metric should be created and be available on Grafana via https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20087.
Similarly to https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/19989, this issue is to create PoC dashboard panels for the patch release information, in order to gather feedback and iterate towards the IaC version being available as part of the [release information dashboard](https://dashboards.gitlab.net/d/delivery-release_info/delivery3a-release-information?orgId=1) (https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20089).
Very similarly to what we currently have for the monthly release information on the dashboard, the PoC dashboard panels should contain:
* Patch release information
* Patch release description + relative links ([from this list on the parent epic](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1255#information-to-display-on-grafana))
* Explanation about the current status of the patch release ([The status information is also on the list](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1255#information-to-display-on-grafana))
* Patch release versions
* Patch release date
* Patch release status
Once the PoC is available to view, share it with the ~"team::Delivery-Releases" and gather feedback.
### Exit Criteria
* [ ] Grafana PoC is available with the above information in panels
* [ ] Grafana PoC is shared with the team and feedback gathered

---
**Release Environment - Drop feature flag release_environment** — Dat Tang, 2024-03-11
https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/20086

In https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/19580, via https://gitlab.com/gitlab-org/release-tools/-/merge_requests/2876, we automated the release environment creation. During the process, we added a new feature flag, `release_environment`, so we could quickly disable the change if something goes wrong.
After running the feature for some time, once we have the confidence that it is working well, we should drop the feature flag.
### Exit Criteria
* [ ] Drop FF `release_environment` from the codebase
* [ ] Drop the feature flag from ops