Move Kubernetes from service to Cluster page
Description
With %10.1 we have a CI/CD > Cluster page where we can create and view the Kubernetes cluster automatically created by the GKE integration.
We want to be able to specify manual parameters for an existing Kubernetes cluster from the same page. This is exactly what the Kubernetes integration page already provides, so we can start by moving it to the Cluster page.
Proposal
We want to move the existing Kubernetes integration page in the new CI/CD > Cluster page.
When a cluster is created this way, the Kubernetes integration parameters are filled in automatically, overriding the existing ones. If the user then changes them, the automatically created cluster will no longer be connected to GitLab, so we should make that clear to the user.
For example, we could have a page under CI/CD at the group level to list your clusters, and add new ones. When adding a new one, you could provide the credentials to an existing cluster, or create a new one on GKE. If you create multiple clusters, they'll be shown in a list.
Each cluster could have an environment (or wildcard pattern) it is accessible on, e.g. there could be a `production` cluster for `production/*`, and a `dev` cluster for `*` (the rest). Projects could override which clusters they use (perhaps using the existing service, or a new clusters page there). Perhaps as an admin, I'd be able to see which projects are making use of which clusters. Perhaps as an admin I'd be able to assign clusters to specific projects, although we have no precedent for this kind of control. (Project-specific runners are assigned at the project level regardless of group, for example.)
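To make the wildcard idea concrete, here is a minimal sketch (Python, with made-up cluster names and scopes) of how a deploy environment could be matched against each cluster's scope; it is only an illustration, not actual matching code.

```python
import fnmatch

# Minimal sketch, assuming each cluster stores an environment scope pattern.
# Cluster names and scopes below are made up for illustration.
clusters = {
    "production-cluster": "production/*",
    "dev-cluster": "*",
}

def clusters_for(environment):
    """Return names of clusters whose scope matches the given environment."""
    return [name for name, scope in clusters.items()
            if fnmatch.fnmatch(environment, scope)]

print(clusters_for("production/eu"))     # ['production-cluster', 'dev-cluster']
print(clusters_for("review/my-branch"))  # ['dev-cluster']
```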
For each cluster, we could make it easy to install runners, Prometheus, NGINX ingress, Helm tiller, etc. Everything you need to use Auto DevOps. Ideally with one click, but possibly a la carte. We should show the status of each of these services, and let you upgrade them easily as well. Or have them optionally auto-updated.
For each cluster, we could make it easy to monitor the cluster, see when the cluster needs to be scaled, and provide an easy way to scale it (if it was created on GKE).
There may be other admin functions as well. Perhaps we need to be able to clean up orphaned pods (although I'd rather that just not happen). I don't want to replace the Kubernetes dashboard exactly, but I do want to make using the k8s dashboard unnecessary except in rare circumstances.
To do this right, we really should use the master credentials to create separate secrets for each project/environment rather than sharing the same credentials everywhere. But as a first iteration, we can share the credentials.
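A minimal sketch of the per-project credentials idea, using the Kubernetes Python client; the namespace naming scheme and service account name are assumptions for illustration, not a settled design.

```python
from kubernetes import client, config

# Sketch only: use the master credentials once to create a dedicated
# namespace and service account per project/environment, instead of handing
# the master token to every deploy job. Names below are made up.
config.load_kube_config()  # authenticates with the master credentials
core = client.CoreV1Api()

namespace = "my-project-production"  # e.g. "<project-slug>-<environment>"

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace)))

core.create_namespaced_service_account(
    namespace,
    client.V1ServiceAccount(metadata=client.V1ObjectMeta(name="deploy")))

# A RoleBinding scoped to this namespace (e.g. to the built-in "edit"
# ClusterRole) would then restrict the account, and its token -- not the
# master token -- is what deploy jobs for this project/environment receive.
```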
Proposal
- Add Clusters menu at the group, personal namespace, and project levels.
- Show a list of clusters (with a great no-data graphic).
- When looking at a project, I should be able to see group clusters too
- Ideally when looking at a group, I can see all the project clusters (that I have access to).
- Button to add existing k8s cluster credentials.
- Button to create cluster on GKE (via OAuth). (Or single dialog with both choices.)
- For either existing or new cluster, provide an environment pattern for each cluster, much like environment-specific variables
- Cluster variables will only be sent to deploy jobs with matching environments
- If there are multiple clusters that match a given environment, the most-specific one wins, e.g. `production/eu` wins over `production/*` (see the sketch after this list).
- Indicate if a cluster is attached to protected environment(s). Restrict access appropriately. (Don't let Devs see details or create/edit them. Seeing the existence of them is OK.)
- For either existing or new cluster offer to:
- Install Helm tiller (required for all the following options): #36629 (closed)
- Install NGINX-ingress with Let's Encrypt
- Install Runner: #32831 (closed)
- Install Prometheus: #28916 (closed)
- Declare base domain(s) for ingress on cluster (needed by Auto Deploy)
- For GKE, allow automatically configuring Google DNS for base domain to ingress IP
- Hopefully we don't need to worry about "protected clusters" if we have "protected environments". Clusters attached to protected environments will automatically be protected. This may be more complicated if we let Developers view/edit cluster information.
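A minimal sketch of the precedence rule mentioned above (the most-specific matching scope wins), assuming specificity can be approximated as "fewest wildcards, then longest pattern"; the heuristic and names are illustrative only.

```python
import fnmatch

def pick_cluster(environment, clusters):
    """Pick the cluster whose scope matches the environment most specifically.

    Specificity heuristic (an assumption, not a settled rule): prefer scopes
    with fewer `*` wildcards, then longer (more literal) patterns.
    """
    matching = {name: scope for name, scope in clusters.items()
                if fnmatch.fnmatch(environment, scope)}
    if not matching:
        return None
    return max(matching, key=lambda name: (-matching[name].count("*"),
                                           len(matching[name])))

clusters = {"eu": "production/eu", "prod": "production/*", "dev": "*"}
print(pick_cluster("production/eu", clusters))  # eu
print(pick_cluster("production/us", clusters))  # prod
print(pick_cluster("review/branch", clusters))  # dev
```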
Questions
- Should we have this available for personal namespaces as well?
- The proposal only includes the group level; does that mean we won't let people create clusters at the project level anymore? I suspect we'll still need to, so we can either leave the existing service integration or add a Clusters menu at the project level as well.
- If we offer Clusters at both group and project level, we should have visibility across both. i.e. when looking at a project, I should be able to see group clusters too, and ideally when looking at a group, I can see all the project clusters (that I have access to).
- Should we provide DNS for them? e.g. `*.$USER_OR_GROUP_PATH_SLUG.gitlab-apps.com`. Then providing their own DNS would be an option, but not required.
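If we do automate DNS, either for a shared domain like the one above or for the "automatically configure Google DNS for the base domain" item in the proposal, the GKE side could look roughly like this sketch using the google-cloud-dns Python library; the project, zone, domain, and ingress IP are placeholders, not real values.

```python
from google.cloud import dns

# Sketch only: point a wildcard record for a group's base domain at the
# cluster's ingress IP. Project, zone, domain, and IP are placeholders.
client = dns.Client(project="my-gcp-project")
zone = client.zone("gitlab-apps", "gitlab-apps.com.")

record = zone.resource_record_set(
    "*.my-group.gitlab-apps.com.",   # wildcard for the group's apps
    "A", 300, ["203.0.113.10"])      # TTL 300s, NGINX ingress external IP

changes = zone.changes()
changes.add_record_set(record)
changes.create()  # submits the change set to Cloud DNS
```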
Design
Selecting a method
When a user (with Master permissions) first visits the Cluster integration page, they are asked how they would like to set up Cluster integration in their project.
Choosing GKE
Once the user chooses an option, the UI for the next step is loaded. As discussed with @filipa, it's better to do this inline, but it's okay to require a page redirect.
- A dropdown at the top of the page still lets the user change the creation method.
- If the user has not logged in with Google yet, the current message is displayed instead of the form.
- If OAuth login has not been set up, we show the same message that we show in the current iteration.
- Once login succeeds, we show the user the form asking for the GKE cluster details.
Screenshots: Sign in | GKE form
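Behind this form, once the Google OAuth sign-in succeeds, the backend would call the GKE API with the user's token. Below is a rough sketch with the Google API Python client; the project, zone, cluster name, and the way credentials are obtained are assumptions for illustration, not the actual implementation.

```python
from googleapiclient.discovery import build

# Sketch only: all identifiers are placeholders.
user_credentials = ...  # Google OAuth credentials obtained from the sign-in flow

gke = build("container", "v1", credentials=user_credentials)

# Create a cluster from the details entered in the GKE form.
request = gke.projects().zones().clusters().create(
    projectId="my-gcp-project",
    zone="us-central1-a",
    body={"cluster": {"name": "gitlab-cluster", "initialNodeCount": 3}})
operation = request.execute()  # returns a long-running operation to poll
```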
Choosing existing cluster
If the user chooses to use an existing cluster, the current Kubernetes form is shown.
In this form, the confirmation button says `Add cluster` instead of `Create`.
View page
Since we are adding more information and options to the view page, we need to be able to expand and collapse the different sections so the user is not overwhelmed. We can use the same model that we have for the Project Settings page.
The default state of this page is the same for GKE and existing clusters
The `Enable cluster integration` section should not be collapsible, since it is important to be able to determine that information at a glance.
The `Applications` section (to be introduced with #38464 (closed)) will not be collapsible either, because we want to promote usage of this new feature.
If a cluster is created on GKE, status banners need to be shown. Since the `GKE` section is no longer first-level, we move the banners under the `Enable cluster integration` section:
Screenshots: Creating | Success | Error
Cluster details
This section shows the details for the cluster and allows the user to change them.
If the cluster was created through GKE, the only editable field is `Project namespace`.
If the cluster was added manually, all fields are editable.
The `Token` field should be masked for both forms.
Screenshots: GKE cluster | Existing cluster
Advanced settings
The last section allows the user to remove cluster integration.
If the cluster was created on Google, there's also a sub-section with a link to GKE for managing the cluster.
Screenshots: GKE cluster | Existing cluster
Users without permissions
Users with permissions less than `Master` simply do not see the `Cluster` option in the navigation. If they navigate to one of the pages via URL, they should see a `404`.
Links / references
- Monitoring of GKE cluster: https://gitlab.com/gitlab-org/gitlab-ce/issues/27890
Documentation blurb
Overview
What is it? Why should someone use this feature? What is the underlying (business) problem? How do you use this feature?
Use cases
Who is this for? Provide one or more use cases.
Feature checklist
Make sure these are completed before closing the issue, with a link to the relevant commit.
- Feature assurance
- Documentation
- Added to features.yml