
Rails: Freeze the Kubernetes resources created for a devfile

MR: Pending

Description

With more and more changes being made, the chances of modifying the Kubernetes resources generated from a devfile are increasing. Any change in the Kubernetes resources would mean that existing workspaces would restart with the new Kubernetes resource configuration under various circumstances. In some cases, like in the Private repository Spike, the changes are such that the workspace would be in an error state until the Full reconciliation occurs. To tackle this, we need to ensure 2 things -

  • Freeze the Kubernetes resources being generated for a devfile by adding some sort of versioning to the devfile attributes before storing it in the database.
  • Add an option in the Rails <-> Agent communication so that Rails can inform the Agent to do a full reconciliation immediately.

This issue tackles the first suggestion. The second suggestion will be tackled in a separate issue.

Acceptance Criteria

  • Existing workspaces should not generate new Kubernetes resources
  • New workspaces should generate the new Kubernetes resources

Solution

  • Add a new column to the workspaces table - config_version, which is an integer and NOT NULL (a migration sketch follows this list).
  • For existing records, the value will be set to 1.
  • Maintain support for the existing version of the workspace. (more details in #421898 (comment 1517123723))
    • Rename DesiredConfigGenerator to DesiredConfigGeneratorPrev1.
    • Rename DevfileParser to DevfileParserPrev1.
  • Add support for the new version of the workspace.
    • Add DesiredConfigGenerator with the new desired logic
    • Add DevfileParser with the new desired logic
  • As part of generating the workspace resources, make it conditional as follows -
    def self.config_to_apply(workspace:, update_type:, logger:)
      ...
      workspace_resources =
        case workspace.config_version
        when RemoteDevelopment::Workspaces::ConfigVersion::VERSION_2
          DesiredConfigGenerator.generate_desired_config(
            workspace: workspace,
            include_secrets: include_secrets,
            logger: logger
          )
        else
          DesiredConfigGeneratorPrev1.generate_desired_config(
            workspace: workspace,
            logger: logger
          )
        end
      ...
    end
  • Once all workspaces with the old config version are Terminated on SaaS
    • Drop support for the old config version of the workspace by removing DesiredConfigGeneratorPrev1 and DevfileParserPrev1
    • For a smooth migration on Self-managed, update the logic in the reconcile module
      • Trigger a full reconciliation for all workspaces with the old config version
        def self.config_to_apply(workspace:, update_type:, logger:)
          ...
          if update_type == FULL ||
              (update_type == PARTIAL && workspace.actual_state == States::CREATION_REQUESTED) ||
              (workspace.config_version < RemoteDevelopment::Workspaces::ConfigVersion::VERSION_2)
            include_secrets = true
          end
          ...
        end
      • Update the config version of the workspace to the latest config version in the database
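
A minimal migration sketch for the config_version column described in the first bullet of this list, assuming a plain Rails migration; the class name, migration version, and the choice of a column default are illustrative assumptions, not the final implementation:

    # Hypothetical migration; the real one may use GitLab's migration helpers instead.
    class AddConfigVersionToWorkspaces < ActiveRecord::Migration[7.0]
      def change
        # NOT NULL with a default of 1 backfills existing rows at version 1;
        # new workspaces would then be created at the latest config version by the application.
        add_column :workspaces, :config_version, :integer, null: false, default: 1
      end
    end

With this column in place, RemoteDevelopment::Workspaces::ConfigVersion in the snippet above would only need to expose the known version numbers (for example VERSION_1 = 1 and VERSION_2 = 2).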

Other solutions considered

Solution 1

  • For any new workspaces being created, add an attribute to the devfile at top-level. The key will be workspaces.gitlab.com/config-generator-version. The value will be 2.0.0.
  • For existing workspaces, this attribute will be missing and will thus default to 1.0.0 in the code.
  • While generating the Kubernetes resources, check this attribute and create the resources accordingly (see the sketch after this list).
  • In the future, when we need to further change the Kubernetes resources, we can bump this version while creating the new workspaces.
  • This gives a clean and simple way of ensuring that existing workspaces consistently generate the same Kubernetes resources and do not change with new changes that we push to Rails.
  • Semver is used to keep the door open for future use cases where we might want to push through a patch update to the Kubernetes resources regardless of how it affects the user experience.
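
A minimal sketch of how the devfile attribute from this solution could be read when generating resources, assuming the devfile has already been parsed into a Ruby hash with a top-level attributes key; the module, method, and constant names are illustrative only:

    # Hypothetical helper; names and the devfile structure are assumptions for illustration.
    module DevfileConfigGeneratorVersion
      ATTRIBUTE_KEY = 'workspaces.gitlab.com/config-generator-version'
      DEFAULT_VERSION = '1.0.0'

      def self.version_for(devfile)
        # Existing workspaces carry no such attribute, so they fall back to 1.0.0.
        devfile.dig('attributes', ATTRIBUTE_KEY) || DEFAULT_VERSION
      end
    end

The resource generation code could then branch on the returned version in the same way the chosen solution branches on config_version.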

Solution 2

  • Add a new column to the workspaces table - config_to_apply_generator_version which is an integer and NOT NULL.
  • For existing records, the value will be set to 1.
  • As part of generating the desired config for the workspace, modify the condition as follows -
    CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION = 2
    
    def self.config_to_apply(workspace:, update_type:, logger:)
        ...
    
        if update_type == FULL ||
            (update_type == PARTIAL && workspace.actual_state == States::CREATION_REQUESTED) ||
            (workspace.config_to_apply_generator_version < CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION)
            include_secrets = true
        end
    
        ...
    end
  • Essentially, we are forcing a full reconciliation for those workspaces whose version is lower than the current latest one.
  • Once the data is sent to the agent, the config_to_apply_generator_version will be set to CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION so that we do not always end up performing a full reconciliation for these workspaces (see the sketch after this list).
  • The UX repercussion is that whenever we increment the value of CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION in the code, all the workspaces will restart. This is manageable for self-managed GitLab instances and can be communicated in advance as part of the release notes. However, for SaaS GitLab, this becomes problematic.
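
A minimal sketch of the version bump mentioned above, assuming it runs right after the desired config has been sent to the agent; the method name and its location are illustrative, not part of the actual reconcile module:

    # Hypothetical post-send step; CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION is the
    # constant from the snippet above.
    def self.mark_generator_version_applied(workspace:)
      return if workspace.config_to_apply_generator_version >= CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION

      # Record the version that was just applied so subsequent partial reconciliations
      # no longer force include_secrets for this workspace.
      workspace.update!(config_to_apply_generator_version: CURRENT_CONFIG_TO_APPLY_GENERATOR_VERSION)
    end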