Simplify freezing of Kubernetes resources for workspaces to avoid inadvertent workspace restart issues
### Problem

As we add new features, we have to modify the Kubernetes resources. In certain scenarios, these changes result in a restart of the workspace from the end user's perspective. To prevent this, we freeze the Kubernetes resources during reconciliation. This is achieved by creating new versions of `desired_config_generator` and `devfile_parser` every time we want to make such a change. The ceremony around this is cumbersome and we want to simplify it. In https://gitlab.com/gitlab-org/gitlab/-/issues/534808+ we discussed this issue and came up with a plan. This epic is to execute the said [plan](https://gitlab.com/gitlab-org/gitlab/-/issues/534808#note_2446366447).

### Solution

To address this problem holistically, so that we do not have to worry about whether a change will result in the restart of a workspace and thus disrupt the user, the following is proposed.

Instead of freezing the Kubernetes resources during reconciliation, where we introduce a new `desired_config_version` for every such change, we should move the freezing of the Kubernetes resources to workspace creation time. Thus, all\* Kubernetes resources would remain the same for a workspace for its entire lifetime, and any new changes we make will only affect new workspaces.

To achieve this, we should do the following:

- Introduce a new DB table: `workspace_agentk_metadata`. The name contains `agentk` to denote that the information contained in this table is used when using `agentk`. This sets the stage for introducing other executors for Workspaces (e.g. Workspaces on CI Runners). This table will have only one field, `desired_config`, which is a `jsonb`.
- Move all the logic of `devfile_parser` and most of the logic of `desired_config_generator` from reconciliation to creation, where they will generate the Kubernetes resources, which will be stored in the above table. Please note: the Kubernetes resources generated here will include the Kubernetes Secrets, but with empty data (read below).
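As a rough illustration of what the frozen `desired_config` jsonb payload could look like, here is a minimal sketch of an ordered list of Kubernetes resources plus a tiny value object wrapping it. The resource names, the `DesiredConfig` class shape, and its methods are all assumptions for illustration, not the final design:

```ruby
require "json"

# Hypothetical illustration: the `desired_config` jsonb could hold an ordered
# array of Kubernetes resources, generated and frozen at workspace creation.
# Secrets are stored with empty data; their data is filled at reconciliation.
desired_config = [
  { "kind" => "ConfigMap",  "apiVersion" => "v1",
    "metadata" => { "name" => "workspace-core-inventory" } },
  { "kind" => "Deployment", "apiVersion" => "apps/v1",
    "metadata" => { "name" => "workspace" } },
  { "kind" => "Secret",     "apiVersion" => "v1",
    "metadata" => { "name" => "workspace-env-secret" }, "data" => {} }
]

# Minimal value object over the jsonb payload (names are assumptions).
class DesiredConfig
  attr_reader :resources

  def initialize(resources)
    @resources = resources
  end

  # Order matters: resources are applied in the order they were frozen.
  def kinds
    resources.map { |r| r["kind"] }
  end
end

config = DesiredConfig.new(JSON.parse(desired_config.to_json))
puts config.kinds.inspect # => ["ConfigMap", "Deployment", "Secret"]
```

Because the whole resource list is frozen per workspace at creation time, later changes to the generator only affect rows written for new workspaces.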
- During reconciliation, we will filter which resources to send based on the following logic:
  - When it is a partial/full reconciliation and the desired state is terminated, send the core inventory configmap and the secrets inventory configmap.
  - When it is a partial reconciliation and the desired state is stopped/running, send the core inventory configmap, deployment, service, service account, and network policy.
  - When it is a full reconciliation and the desired state is stopped/running, send the core inventory configmap, deployment, service, service account, network policy, secrets inventory configmap, resource quota, secrets which are to be mounted as environment variables, and secrets which are to be mounted as files.
- The Kubernetes Secrets will have their data filled in during reconciliation.
- Drop the `desired_config_generator_version` column in the `workspaces` table.

This way, the entire overhead of creating new versions of `desired_config_generator` and `devfile_parser` every time we make a change would go away, along with all the ceremony around it.

Eventually, we should also move certain columns from the `workspaces` table to the `workspace_agentk_metadata` table, because they are essentially an implementation detail, and as we add new executors (e.g. Workspaces on CI Runners) it does not make sense to have these columns in the `workspaces` table. The columns being referred to are:

- `namespace`
- `responded_to_agent_at`
- `deployment_resource_version`
- `force_include_all_resources`
- `workspaces_agent_config_version`
- `cluster_agent_id`

We'll also have to think about whether `desired_state`/`actual_state`/`desired_state_updated_at` make sense in a non-agentk world. All of these are follow-up items, though, and we need not worry about them right now.

We should also make sure some of these fields are not exposed to the user via GraphQL, because they are essentially implementation details. Again, nothing to worry about now, but adding it here for completeness.

## Implementation Details

1. https://gitlab.com/gitlab-org/gitlab/-/issues/538157+s
2. https://gitlab.com/gitlab-org/gitlab/-/issues/538163+s
   1. Create value object for `desired_config`: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/192099+s
   2. [Remove versioned call of `DesiredConfigGenerator` from `response_payload_builder.rb`.](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/194148)
   3. Move all files that will be referenced by the new `Create::DesiredConfig::DesiredConfigGenerator` class.
   4. Create classes that will call the new logic to create the desired config.
   5. Call both the old and new logic to create the desired config and log the diff.
3. https://gitlab.com/gitlab-org/gitlab/-/issues/538360+s
4. https://gitlab.com/gitlab-org/gitlab/-/issues/541907+s
5. Clean up the redundant code and fields.
   1. https://gitlab.com/gitlab-org/gitlab/-/issues/538166+s
   2. Remove the logic that creates `desired_config` in the reconciliation phase.

### Additional MRs

* Add validation to the devfile.
* [Add validation to `desired_config` (and, by extension, `config_to_apply`).](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/192099)
  * This involves schema-related validation as well as the order of the Kubernetes objects.
* Remove the obsolete logic: https://gitlab.com/gitlab-org/gitlab/-/issues/538166+

---

I have asked in #database about the dependency on devfile-gem for our migration. [Here](https://gitlab.slack.com/archives/C3NBYFJ6N/p1746484048066479) is the Slack thread.
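The reconciliation filtering rules from the Solution section above can be sketched roughly as follows. The resource names, method shape, and string-based state values are illustrative assumptions, not the actual GitLab implementation:

```ruby
# Hedged sketch of the resource-filtering rules described in the Solution
# section (names and shapes are assumptions, not the real implementation).
CORE_INVENTORY    = "core inventory configmap"
SECRETS_INVENTORY = "secrets inventory configmap"
WORKLOAD          = ["deployment", "service", "service account", "network policy"].freeze
FULL_ONLY         = ["resource quota", "env var secrets", "file mount secrets"].freeze

def resources_to_send(update_type:, desired_state:)
  # Terminated workspaces only need the two inventory configmaps,
  # regardless of whether the reconciliation is partial or full.
  return [CORE_INVENTORY, SECRETS_INVENTORY] if desired_state == "terminated"

  case update_type
  when "partial"
    # Partial reconciliation: core inventory plus the workload resources.
    [CORE_INVENTORY, *WORKLOAD]
  when "full"
    # Full reconciliation: everything, including the resource quota and the
    # secrets, whose data is filled in at reconciliation time.
    [CORE_INVENTORY, *WORKLOAD, SECRETS_INVENTORY, *FULL_ONLY]
  end
end

p resources_to_send(update_type: "partial", desired_state: "terminated")
# => ["core inventory configmap", "secrets inventory configmap"]
```

Keeping this selection logic in reconciliation, while the resources themselves come frozen from `workspace_agentk_metadata.desired_config`, is what lets the generated resources stay immutable for the lifetime of a workspace.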