Kubernetes as data store
Problem to solve
We talk about making GitLab into a PaaS, but with a significant difference where if you hit a barrier, you can drop down to Kubernetes and do whatever you want (in contrast to Heroku). But, in order to make this a reality, we need to embrace Kubernetes-native ways of doing things.
For example, today, to scale your application, you need to change a project variable and then re-run the deployment job. This works well, but if you scale your application using native Kubernetes constructs instead, things break. What can we do to make this work? What if scale values were stored in Kubernetes directly, with Kubernetes treated as the single source of truth (SSOT) data store rather than our own variables?
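A minimal sketch of what Kubernetes-as-SSOT for scale could look like: instead of reading a project variable, read the replica count from the Deployment object itself. The manifest shape below is real Kubernetes (`spec.replicas` on an `apps/v1` Deployment), but the helper name and the idea of GitLab calling it are assumptions for illustration.

```python
def desired_replicas(deployment: dict) -> int:
    """Read the replica count from a Deployment manifest, treating the
    cluster object (not a CI/CD variable) as the source of truth.
    Kubernetes defaults spec.replicas to 1 when it is omitted."""
    return deployment.get("spec", {}).get("replicas", 1)

# A trimmed Deployment object, as the Kubernetes API would return it
# (e.g. via `kubectl get deployment web -o json`):
web = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {"replicas": 3},
}

print(desired_replicas(web))  # → 3
```

If someone runs `kubectl scale deployment web --replicas=5`, a lookup like this reflects the change immediately, whereas a project variable would silently go stale.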
Other areas might be CI and security tests. We don't let you deploy something unless the CI/CD pipeline succeeds, but technically, once the image is built, someone could deploy it manually to Kubernetes. This shouldn't be allowed. By moving our validations to Kubernetes-native mechanisms, perhaps using Binary Authorization, perhaps using Grafeas, we could be confident that bad deployments can't happen, and thus be more comfortable with people using both Kubernetes and GitLab controls.
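To illustrate the kind of gate Binary Authorization provides: a workload is admitted only if its image digest carries an attestation (for example, "the CI/CD pipeline passed"). The sketch below is a toy admission decision, not the real Binary Authorization or Grafeas API; the attestation store and function name are assumptions.

```python
# Toy admission check in the spirit of Binary Authorization:
# an image may only run if its digest was attested by the pipeline.
ATTESTED_DIGESTS = {
    # digest -> attestor that signed off (populated when CI succeeds)
    "sha256:a1b2c3": "gitlab-ci-pipeline",
}

def admit(image_digest: str) -> bool:
    """Admission decision: allow only images with a valid attestation."""
    return image_digest in ATTESTED_DIGESTS

print(admit("sha256:a1b2c3"))    # built and attested by CI → True
print(admit("sha256:deadbeef"))  # pushed around the pipeline → False
```

Because the check runs in the cluster's admission path rather than in GitLab, a manual `kubectl set image` with an unattested digest is rejected at the same control point as a pipeline deployment.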
Another area of concern is simply the record of deployments. Currently, we track when deployment jobs are run, but we don't know for a fact what is actually running. If someone uses kubectl to change an image, it's not tracked by GitLab. Perhaps application metadata, stored in Kubernetes itself, would help here. If you used Helm to deploy an app manually, it would update the metadata, and GitLab would inspect that data to determine what to display in the deployment history.
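Helm 3 already records each release revision in the cluster as a Secret labeled with the release `name`, `version`, `status`, and `owner: helm`, so the cluster itself holds a deployment history regardless of who ran the deploy. The label layout below mirrors what Helm stores; the inspection helper is a hypothetical sketch of how GitLab could read it.

```python
def release_history(secrets: list[dict]) -> list[tuple[str, int, str]]:
    """Reconstruct a deployment history from Helm release Secrets,
    using the cluster itself as the record of what actually ran."""
    labels = [
        s["metadata"]["labels"]
        for s in secrets
        if s["metadata"]["labels"].get("owner") == "helm"
    ]
    # One entry per release revision, ordered by name then version.
    return sorted((l["name"], int(l["version"]), l["status"]) for l in labels)

# Trimmed Secrets as Helm 3 stores them (one per release revision):
secrets = [
    {"metadata": {"labels": {"owner": "helm", "name": "web",
                             "version": "1", "status": "superseded"}}},
    {"metadata": {"labels": {"owner": "helm", "name": "web",
                             "version": "2", "status": "deployed"}}},
]

print(release_history(secrets))
# → [('web', 1, 'superseded'), ('web', 2, 'deployed')]
```

A manual `helm upgrade` creates a new revision Secret, so a history built this way stays accurate even when the deployment never went through a GitLab job.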
What does success look like, and how can we measure that?
(If no way to measure success, link to an issue that will implement a way to measure this)