"Enterprise-managed" Kubernetes cluster integration
Problem to solve
As an amendment to the proposal outlined in https://gitlab.com/gitlab-org/gitlab-ce/issues/56557:
At cluster creation time, offer two options:
- Allow GitLab to manage namespaces and service accounts for this cluster
  - A dedicated namespace and service account will be created for each project. These will be isolated from each other.
- I will manage cluster credentials, namespaces, and service accounts manually
  - Cluster credentials provided at create time will be used cluster-wide; no namespaces or service accounts will be created.
... we would like to propose a third use case:
- Allow `<pipeline>` to manage namespaces, service accounts, and managed applications for this cluster
  - Your enterprise's custom instance-wide pipeline will be triggered to provision a namespace and service account for each project.
Intended users
Administrators, Operators
Further details
Although group-level administration of Kubernetes cluster integrations has helped, the story is still lacking for platform operators and organizations that want to enforce consistency in how clusters are provisioned while still allowing individual teams to create these resources. Enterprises often have specific application onboarding requirements and custom integration steps that must run before deployments are configured.
Proposal
To give enterprises full control over the behavior of the GitLab Kubernetes integration, it must be possible to define arbitrary logic at various stages of the cluster lifecycle.
Cluster-Lifecycle Spec
First, we should formalize a definition of "cluster lifecycle" and what events would trigger the user-defined pipeline. At minimum, I think this includes the following:
- `cluster:create`: cluster is created (via the GKE integration) [nice-to-have]
- `cluster:add`: cluster integration (existing or new) is added at the group level
- `cluster:remove`: cluster integration is removed from the group level
- `cluster:destroy`: cluster is destroyed (not currently possible within GitLab?) [nice-to-have]
- `project:add`: cluster integration is added at the project level
- `project:remove`: cluster integration is removed at the project level
- `managed-app:install`: user requests to provision a "GitLab managed app" on the cluster
- `managed-app:uninstall`: user requests to uninstall a managed app from the cluster
Note: in the case of a group-level cluster integration being removed, a `project:remove` event would be triggered for each project associated with the cluster prior to the global `cluster:remove` event.
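Purely as an illustration, the sketch below shows how these events might be dispatched inside the provisioner project's `.gitlab-ci.yml`. The trigger variables used here (`CLUSTER_EVENT`, `CLUSTER_NAME`, `KUBE_NAMESPACE`, `TARGET_PROJECT_PATH`) are assumed names for this example, not existing GitLab CI variables:

```yaml
# Hypothetical dispatch skeleton for the provisioner project.
# CLUSTER_EVENT and the other variables are assumptions: they would be
# passed along by the multi-project trigger for each lifecycle event.
stages:
  - provision

on-project-add:
  stage: provision
  script:
    - echo "Provisioning $KUBE_NAMESPACE on $CLUSTER_NAME for $TARGET_PROJECT_PATH"
  rules:
    - if: '$CLUSTER_EVENT == "project:add"'

on-project-remove:
  stage: provision
  script:
    - echo "Tearing down $KUBE_NAMESPACE on $CLUSTER_NAME"
  rules:
    - if: '$CLUSTER_EVENT == "project:remove"'
```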
Instance-wide "k8s Provisioner" Pipelines
In the same way administrators define instance-wide Auto DevOps configuration via the instance template repository, it should also be possible to define handlers for the various cluster integration lifecycle events.
For this particular use case, the ability to override the behavior of cluster lifecycle hooks at the project level is not desirable. The purpose is to enforce consistency in a (potentially) multi-tenant cluster, so custom logic at the project level is dangerous. Thus, I think this would be best represented as a reference to an external project whose pipeline is remotely triggered as a multi-project pipeline for each event in the lifecycle.
IIRC it is now possible to re-trigger builds when Auto DevOps configuration changes. This could be incorporated more cleanly into the pipeline graph, since the instance-wide "k8s provisioner" pipeline would be the source of the configuration change that kicks off the build.
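To make the external reference concrete, an instance-wide setting could look something like the sketch below. This is not an existing GitLab configuration option; the key names and project path are purely illustrative:

```yaml
# Hypothetical instance-level setting: every cluster integration on the
# instance triggers the pipeline of one shared provisioner project.
kubernetes_provisioner:
  project: infrastructure/k8s-provisioner   # external project reference
  ref: master                               # ref whose pipeline is triggered
  events:                                   # lifecycle events to forward
    - cluster:add
    - cluster:remove
    - project:add
    - project:remove
    - managed-app:install
    - managed-app:uninstall
```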
Implications for (1) GitLab-managed Clusters
Just as there is nothing special about the Auto DevOps configuration GitLab ships with, I think the existing logic for setting up a cluster integration should be reimplemented as an external project pipeline (based on `kubectl`). This project would ship with all GitLab installations and be referenced in the default instance-wide templates, acting as a reference implementation for users who want to customize the behavior.

By extracting this behavior out of the Rails code and into the simple `kubectl` and `helm` operations Kubernetes users are already familiar with, I think we would get better ongoing transparency and more opportunity for external contributions to the Kubernetes integration(s).
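As a rough approximation of what such a reference implementation might do for a `project:add` event (creating a namespace, a service account, and a namespace-scoped role binding, roughly mirroring today's Rails behavior), a job could look like the following. The job name, image, and variable names are assumptions for illustration:

```yaml
# Sketch of a reference provisioner job based on plain kubectl,
# approximating (not reproducing) what GitLab's Rails code does today.
provision-project:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]            # override the image's kubectl entrypoint
  script:
    - kubectl create namespace "$KUBE_NAMESPACE"
    - kubectl -n "$KUBE_NAMESPACE" create serviceaccount gitlab-deploy
    # Grant edit rights scoped to the project's namespace only
    - kubectl -n "$KUBE_NAMESPACE" create rolebinding gitlab-deploy-edit
        --clusterrole=edit --serviceaccount="$KUBE_NAMESPACE:gitlab-deploy"
  rules:
    - if: '$CLUSTER_EVENT == "project:add"'
```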
Implications for Managed Apps
I will largely leave this aspect open for discussion, but the important takeaway is the potential for enterprises to fully customize the provisioning and deprovisioning of managed applications on user clusters (related: https://gitlab.com/gitlab-org/gitlab-ee/issues/7983), and even the catalog of Helm chart sources that applications can be installed from.
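For example, a customized managed-app handler in the provisioner project could wrap plain Helm commands and point at the enterprise's own chart catalog. The chart repository URL, namespace, and `MANAGED_APP` variable below are assumptions, not existing GitLab behavior:

```yaml
# Sketch of enterprise-controlled managed-app handlers based on Helm.
install-managed-app:
  image:
    name: alpine/helm:latest
    entrypoint: [""]            # override the image's helm entrypoint
  script:
    # The chart catalog is the enterprise's own repository, not a fixed list
    - helm repo add internal https://charts.example.com
    - helm upgrade --install "$MANAGED_APP" "internal/$MANAGED_APP"
        --namespace gitlab-managed-apps --create-namespace
  rules:
    - if: '$CLUSTER_EVENT == "managed-app:install"'

uninstall-managed-app:
  image:
    name: alpine/helm:latest
    entrypoint: [""]
  script:
    - helm uninstall "$MANAGED_APP" --namespace gitlab-managed-apps
  rules:
    - if: '$CLUSTER_EVENT == "managed-app:uninstall"'
```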
Permissions and Security
As proposed, the logic for handling cluster integration lifecycle events would be defined in a shared project. Triggering this project's pipeline externally prevents the logic from being overridden or modified at the project level.
What is the type of buyer?
Executive // Ultimate