Refine k8s integration based on operators
### Problem to solve
- Administrators cannot control the catalogue of "managed apps" exposed to the end-user
- Developers cannot configure/parameterize the installation of "managed apps"
- Administrators have little control over the end-to-end behavior of the k8s integration
- Limited options around securing deployments
- Limited configurability of canary deploys, deploy boards, and continuous monitoring integrations outside the nominal flow
- Project deployment config/state is stored in GitLab rather than in the cluster
- Provide a more seamless "cloud-native" experience
  - i.e. ideally, for any operation I can perform in the GitLab UI, there is a straightforward `kubectl` equivalent
  - Likewise, when integrating an existing cluster, its current state should be respected
- Refocus Knative/serverless discussions around targeting the broader "service mesh" use-case
  - Knative is great for greenfield, but (IMO) not a good "lowest common denominator" solution
  - Service Mesh allows us to enforce strong identity and service encapsulation through mTLS without opinionated builds/deploys (see the sketch after this list)
- "Ownership" of deployed resources across multiple namespaces and/or clusters is not well defined
- "Deploy Boards" are only indirectly available to the end-user developer by scraping metrics, rather than querying k8s discovery endpoints directly
### Intended users
Administrators, Operators, Developers
### Further details
This issue is a follow-up from https://gitlab.com/gitlab-org/gitlab-ee/issues/10698#note_154475635 and https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26250#note_154472003, intended to better focus the discussion around the operator pattern and PaaS use-cases (&111 (closed)) specifically.

This proposal is actually much simpler than #10698 (closed), as it reduces the burden of state management and complex cluster lifecycle handling within GitLab by pushing most or all of it into the cluster.
### Proposal
- Leverage `kubeplus` (or a similar framework) for maintaining `Operator` resources on both managed and "hybrid" clusters
  - Provide a clear, well-documented approach for administrators to build and expose custom operators to their end-developers
- As part of Auto DevOps instance templates, allow administrators to override the default `Operator`s, i.e.:
  ```yaml
  # https://github.com/cloud-ark/kubeplus/blob/master/examples/multiple-operators/multiple-operators.yaml
  apiVersion: operatorcontroller.kubeplus/v1
  kind: Operator
  metadata:
    name: postgres-operator
  spec:
    name: postgres-operator
    chartURL: https://github.com/cloud-ark/operatorcharts/blob/master/postgres-crd-v2-chart-0.0.2.tgz?raw=true
  ---
  apiVersion: operatorcontroller.kubeplus/v1
  kind: Operator
  metadata:
    name: moodle-operator
  spec:
    name: moodle-operator
    chartURL: https://github.com/cloud-ark/operatorcharts/blob/master/moodle-operator-chart-0.0.1.tgz?raw=true
    values:
      HOST_IP: 192.168.99.100
  ---
  apiVersion: operatorcontroller.kubeplus/v1
  kind: Operator
  metadata:
    name: gitlab-operator
  spec:
    name: gitlab-operator
    chartURL: https://gitlab.com/charts/gitlab-operator
    values:
      GITLAB_TOKEN: xpxVwtj3
  ```
- `Operator`s represent "managed apps" the end-developer can install into their managed environment
- Use the `kubeplus` discovery endpoint to build the UI for installing managed apps (authenticated using OIDC; see below)
- Specify and implement a `GitLabDeployment` operator for managing group/project cluster integration state for GitLab
  - Operator configuration would be similar to token-based runner registration?
- Open to discussion, but I think the basic requirements of a `GitLabDeployment` CRD would be:
  - CRD defined at the cluster level
  - Creates and reserves a `namespace` resource
  - Associates one or more GitLab groups/projects with the `namespace`
  - Maintains a service account within the `namespace` for each maintainer of associated projects (configured for SSO into the cluster via Dex)
- Using Dex, the GL operator will configure an OIDC integration with GitLab as the IdP, enabling SSO into the cluster (see the Dex connector sketch after this list)
- k8s deployments within Auto DevOps will be refactored to authenticate the CI user to the cluster using OIDC
- "Deploy Board" UI rewritten to use k8s discovery endpoints (api-server), authenticating with the cluster via OIDC on behalf of the end-user
Here's a basic POC example to give you an idea of what this CRD (and possible sub-resources) might look like:
```yaml
apiVersion: gitlabdeploymentcontroller.gitlab.com/v1
kind: GitLabDeployment
metadata:
  name: my-compute-environment
spec:
  environment: production
  namespace: my-compute-environment # default `metadata.name`
  serviceAccount: {} # default name `metadata.name`
  projects:
    - my-team/project
  groups:
    - other-team
---
apiVersion: gitlabdeploymentcontroller.gitlab.com/v1
kind: GitLabMonitoring
metadata:
  name: my-compute-environment-metrics
spec:
  # somehow specify Prometheus manual integration
  # configure monitoring associated with a particular GL deployment config
  gitLabDeploymentRef:
    name: my-compute-environment
```
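For completeness, here is a hedged sketch of how the `GitLabDeployment` type itself might be registered, matching the group/version used in the POC above; the schema is an assumption covering only the fields shown:

```yaml
# Illustrative CRD registering GitLabDeployment at cluster scope.
# Group/version follow the POC above; the schema is an assumption.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gitlabdeployments.gitlabdeploymentcontroller.gitlab.com
spec:
  group: gitlabdeploymentcontroller.gitlab.com
  scope: Cluster # "CRD defined at the cluster level"
  names:
    kind: GitLabDeployment
    singular: gitlabdeployment
    plural: gitlabdeployments
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                environment: { type: string }
                namespace: { type: string }
                serviceAccount: { type: object, x-kubernetes-preserve-unknown-fields: true }
                projects: { type: array, items: { type: string } }
                groups: { type: array, items: { type: string } }
```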
### What is the type of buyer?
Executive // Ultimate