Filter agents when generating KUBECONFIG variable for CI build
## What does this MR do and why?
Issue: #343885 (closed)
We need a way for users to restrict CI access to Kubernetes agents to selected environments. This can be done by adding `environments` to the `ci_access` groups and projects entries in the agent config saved in a project's repository. Through `kas`, the `ci_access.projects.environments` or `ci_access.groups.environments` configurations are cached/persisted in the database. (See MR: gitlab-org/cluster-integration/gitlab-agent!793 (merged))
In this MR, we ensure that the agents' `ci_access.<groups|projects>.environments` configuration is considered when generating the `KUBECONFIG` for a CI build.
This change is in addition to !105298 (merged), which updates the `GET /job/allowed_agents` endpoint to filter the result according to the agents' configured environments.
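For reference, here is a minimal sketch of an agent configuration that restricts CI access by environment; the project and group paths are placeholders, not taken from this MR:

```yaml
# .gitlab/agents/<agent-name>/config.yaml in the agent's configuration project
ci_access:
  projects:
    - id: path/to/ci-project     # placeholder project path
      environments:
        - staging
        - production
  groups:
    - id: path/to/group          # placeholder group path
      environments:
        - review/*
```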
The following conditions apply:
| Agent has environments config? | Job has environment? | Job environment matches agent config? | Included in KUBECONFIG? |
|---|---|---|---|
| No | No | N/A | Yes |
| No | Yes | N/A | Yes |
| Yes | Yes | Yes | Yes |
| Yes | No | No | No |
| Yes | Yes | No | No |
## Release/deployment considerations
This will not work end-to-end in production until gitlab-org/cluster-integration/gitlab-agent!793 (merged) is deployed as part of the GitLab Agent's monthly release. Without that GitLab Agent change, the agent's environments configuration won't be persisted in the Rails database, which means that agents won't be filtered by environment when generating a CI build's `KUBECONFIG`. This is the current behavior, so there is no need to put this change behind a feature flag while waiting for gitlab-org/cluster-integration/gitlab-agent!793 (merged) to be deployed.
## Screenshots or screen recordings
These tests match the conditions listed in the truth table above. The test job has no configured environment, while the deploy job is configured for production.
In the failure scenarios above, the CI build can't use the Kubernetes context defined by the agent. This is different from the behavior in !105298 (merged), where the CI build is still able to switch to the agent's Kubernetes context, but is unable to access the objects in that context.
The following screenshots show the failure behavior if !105298 (merged) is implemented but this current change is not: the jobs still succeed at `use-context`, but fail at `get pods`.
Job without environment
Job with environment
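For illustration, here is a minimal sketch of what the `test` and `deploy` jobs in the screenshots could look like; the image and the agent context path are assumptions, not taken from this MR:

```yaml
test:                            # no environment configured for this job
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config use-context path/to/agentk-setup:my-agent   # succeeds with only !105298, fails with this MR
    - kubectl get pods                                           # fails with only !105298

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  environment: production        # matches the agent's environments config
  script:
    - kubectl config use-context path/to/agentk-setup:my-agent
    - kubectl get pods
```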
## How to set up and validate locally
- Make sure your GitLab Agent Server (aka `gitlab-k8s-agent`, aka `kas`) is up to date with `master` and is enabled and running
- Create the following projects in your local GitLab instance:
  - a project to set up the Kubernetes cluster connection, let's call this the `agentk-setup` project
  - a project to test the CI/CD pipeline access to the Kubernetes cluster, let's call this the `ci-k8s-access` project
- In the `agentk-setup` project:
  - Register an agent with GitLab
  - Install the agent to a Kubernetes cluster. See instructions for Installing the agent in the cluster and Deploy the GitLab Agent (agentk) with k3d
- Modify the environments access in the agent configuration file in the `agentk-setup` project to allow the `ci-k8s-access` project to access the Kubernetes cluster:

  ```yaml
  ci_access:
    projects:
      - id: ci-k8s-access
        environments:
          - dev
          - test
          - staging
          - review/*
          - production
  ```

  Here is an example of what it looks like in the Web IDE:

- In the `ci-k8s-access` project, set up jobs through the CI pipeline editor and test access to the Kubernetes agent installed in the previous steps from jobs with different environments, for example the sketch after this list.
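Here is a sketch of a possible `.gitlab-ci.yml` for the `ci-k8s-access` project, assuming the agent configuration shown above; the agent context path (`path/to/agentk-setup:my-agent`), image, and job names are placeholders:

```yaml
.kubectl:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config use-context path/to/agentk-setup:my-agent
    - kubectl get pods

no-environment:                  # excluded from KUBECONFIG: agent has an environments config, job has none
  extends: .kubectl

dev:                             # included: `dev` is listed in the agent config
  extends: .kubectl
  environment: dev

review:                          # included: matches the `review/*` wildcard
  extends: .kubectl
  environment: review/my-branch

qa:                              # excluded: `qa` is not in the agent config
  extends: .kubectl
  environment: qa
```

With this change applied, the `no-environment` and `qa` jobs should fail at `use-context`, while the `dev` and `review` jobs should be able to reach the cluster.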
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.