I don't want to use Auto DevOps. But the only documentation I can find about using Kubernetes with GitLab CI/CD (i.e., actually deploying my project to a k8s cluster) involves Auto DevOps. This is disappointing.
@mikelewis This is exactly the issue I was referring to in one of our recent meetings. Auto DevOps is not so straightforward; having to know GKE + k8s is kinda overwhelming for a feature that should work out-of-the-box.
Besides improving the docs we should really look into improving the feature. ¯\_(ツ)_/¯
@eread, I can't speak for @tstivers, but here are some of our notes:
The docs for actually making the connection to EKS (I'll focus on EKS as that's where we are) were great. Super easy to follow and reproduce.
The EKS docs are out of sync with the main k8s docs you linked: the main k8s docs now account for RBAC, but the EKS docs don't mention this
The docs want us to install Tiller, an ingress controller, cert-manager, and Prometheus. Any k8s cluster that wasn't created yesterday will almost certainly already have these components installed, and even your own docs mention the concerns and confusion that can arise from duplicating them
Frankly, in terms of actually following the provided docs, that's where we stopped. There was not enough here to assure us that continuing wasn't going to completely bork our cluster. We have no idea what ingress class name is being used by the GitLab-installed ingress controller, so it could very well conflict with our installed one. What happens with two Tillers installed? Cert-manager, same question?
Maybe if there were some passage in the docs providing some assurance that there wouldn't be conflicts, we'd have given it a try.
Despite not actually continuing beyond this point, we did examine the rest of the docs for viability, but there really wasn't anything about how to actually USE (i.e. deploy our apps in CI to) the attached k8s cluster except Auto DevOps, which is, I think, the point @tstivers was making. If we're not going to use Auto DevOps, is there any way to leverage an attached Kubernetes cluster with the provided benefits (monitoring and deploy boards being the big ones)? I.e., how do I deploy my application to a configured k8s cluster if I don't use Auto DevOps? Or is that completely missing the point? Are the attached k8s clusters there ONLY for the purpose of Auto DevOps?
Finally, we were (are?) seriously considering setting up a cluster JUST for GitLab to connect to, using only the GitLab-installed control and monitoring applications. That's fine, but it doesn't really solve the problem long term: eventually we'd need to deploy to production, and we're back at square one of having to figure out how to do that within the context of GitLab's attached k8s clusters, or just giving in and stepping outside the context of GitLab's known k8s clusters.
@MWatter thanks for this awesome write-up. It's really great feedback, and as someone so close to the code it's really easy for me to miss that some of this stuff isn't clearly outlined in our features or in our docs, so we appreciate this kind of feedback a lot.
Let me provide some answers and some links but it's definitely clear from this we should update docs even if I can answer your questions/concerns now.
First of all, the detail that may help the most: you don't need to install any of the GitLab managed apps via our UI (i.e. Tiller, Ingress, cert-manager, ...). You can just add a cluster and be done with it. At the very least this means that all of your CI jobs get passed a KUBECONFIG CI variable that they can use for running kubectl commands. Doing this does not require using Auto DevOps at all. Auto DevOps is just a pre-built GitLab CI template; you can write your own .gitlab-ci.yml, which allows you to do deployments however suits you best.
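To make that concrete, here is a minimal sketch of a hand-written deploy job. The job name, manifest path, deployment name, and kubectl image are illustrative assumptions, not GitLab-provided defaults:

```yaml
# Minimal hand-written .gitlab-ci.yml deploy job; no Auto DevOps involved.
# Once a cluster is attached, GitLab injects KUBECONFIG into the job
# environment, so kubectl works without any extra configuration.
deploy:
  stage: deploy
  image: bitnami/kubectl:latest   # any image containing kubectl works (assumed choice)
  script:
    - kubectl apply -f k8s/deployment.yaml          # your own manifests
    - kubectl rollout status deployment/my-app      # wait for the rollout to finish
  environment:
    name: production
```

Because the job declares an `environment`, GitLab can still associate the deployment with deploy boards and environment pages, even without Auto DevOps.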
We do at least have one guide we published a while ago about deploying java apps to K8s using GitLab CI https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/. While this may not be exactly what you want it should be illustrative of how to interact with Kubernetes in .gitlab-ci.yml. A quick Google did find a lot of other published blog posts and things too. I don't want to downplay your criticism of our docs here and I do think you're right that we should improve them but this may help unblock you for the moment.
What happens with two Tillers installed?
This should be fine as long as your Tiller is not installed in the gitlab-managed-apps namespace, where we install our Tiller.
Cert-manager, same question?
I actually don't know about this one. Presumably this will cause problems unless cert-manager has some way of scoping instances independently (like the ingress class name mechanism), because otherwise multiple controllers will be trying to create certificates.
Hopefully that answered all the questions you posed but let me know if you've got any more and we can use this as a starting point for improving our docs.
@eread My biggest complaint is there's just no indication that Kubernetes integration can be used at all without Auto DevOps. So I'd mostly like to see documentation just as comprehensive as the Auto DevOps documentation, that outlines how to do things like deploying to a Kubernetes cluster without using Auto DevOps.
@eread I believe simply adding a section to the docs along the lines of "Deploying your application to Kubernetes", which walks the user through how to set up .gitlab-ci.yml with a simple build, test, deploy workflow, is a great start. Perhaps add a note regarding Auto DevOps to that section as well. We can use one of our existing applications, such as the minimal ruby app.
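The build/test/deploy walkthrough suggested above could be anchored on a sketch like this. The image tags, registry login, test command, and deployment name are illustrative assumptions for a minimal Ruby app, not values from any existing GitLab template:

```yaml
stages: [build, test, deploy]

# Build the app's Docker image and push it to the project's registry.
build:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

# Run the test suite against the source, not the image (assumed Ruby app).
test:
  stage: test
  image: ruby:3.2
  script:
    - bundle install
    - bundle exec rake test

# Point the existing Kubernetes deployment at the freshly built image.
# KUBECONFIG is injected by the GitLab cluster integration.
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/minimal-ruby-app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  environment:
    name: production
```

Note the deploy stage assumes a `deployment/minimal-ruby-app` object already exists in the cluster; a first-time `kubectl apply` of the manifests would precede this in a real walkthrough.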
@ccasella it would also be great and a huge help if we could write a blog post about how you deployed your personal site to kubernetes without Auto DevOps. We could get marketing help here as well.
In the absence of a full blog, @ccasella, could you sketch out some steps for me that I could turn into documentation? You know much more about this than me!
@eread that's mostly correct with some clarifications/answers
Using ${KUBE_INGRESS_BASE_DOMAIN} to talk to the Cluster, and using kubectl to do the deploy using the configuration
Technically, KUBE_INGRESS_BASE_DOMAIN is just a configuration option that the scripts use to figure out what URL the deployed app will have. It's the resulting base domain of the app you are deploying, not the URL of the cluster. The cluster URL (and other details) are actually in a variable called KUBECONFIG, which is implicitly detected by kubectl, so it doesn't need to be explicitly referenced.
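A tiny sketch of the distinction, with an assumed domain value for illustration:

```shell
# In a CI job with a GitLab-attached cluster, KUBECONFIG is already exported,
# so plain kubectl commands (e.g. `kubectl get pods`) need no --kubeconfig flag
# and never reference KUBE_INGRESS_BASE_DOMAIN at all.
# KUBE_INGRESS_BASE_DOMAIN is only used to compose the deployed app's URL:
KUBE_INGRESS_BASE_DOMAIN="example.com"   # assumed value for illustration
app_url="https://myapp.${KUBE_INGRESS_BASE_DOMAIN}"
echo "$app_url"
```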
The helm-install-image is used as a base image to run the deployment script; the image being built in the Build step is your app's Docker image. So there are two images: your app image, which will be deployed to the cluster (created in Build), and a utility image used during the deploy stage purely to get access to the kubectl binary that communicates with the cluster.
What else would I have to do if I didn't enable the GitLab Kubernetes integration?
You would need to add specific CI variables used by kubectl (in particular KUBECONFIG). You would also need to manually install your own Ingress to route traffic to your application (and possibly provision your own SSL certs if you wanted HTTPS). Even if you did this then you wouldn't get any of the following handled automatically:
Separate namespaces and service accounts created automatically for your CI jobs which means projects and environments (about to be merged in https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/30711) can be isolated from each other by default. You could implement this yourself but it would mean manually creating all the namespaces ahead of time and configuring a separate KUBECONFIG CI variable for every single environment.
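A rough sketch of the manual setup that the integration replaces (namespace and account names are illustrative, and the commands assume cluster-admin access to an existing cluster):

```shell
# Without the integration, you would pre-create one namespace and service
# account per environment...
kubectl create namespace myapp-staging
kubectl create namespace myapp-production
kubectl -n myapp-staging create serviceaccount deployer
kubectl -n myapp-production create serviceaccount deployer
# ...grant each service account deploy permissions via RBAC, extract each
# account's credentials into its own kubeconfig file, and store every one of
# them as a separate environment-scoped KUBECONFIG variable in the project's
# CI/CD settings. The cluster integration automates all of this per job.
```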
GitLab is moving all development for both GitLab Community Edition
and Enterprise Edition into a single codebase. The current
gitlab-ce repository will become a read-only mirror, without any
proprietary code. All development is moved to the current
gitlab-ee repository, which we will rename to just gitlab in the
coming weeks. As part of this migration, issues will be moved to the
current gitlab-ee project.
If you have any questions about all of this, please ask them in our
dedicated FAQ issue.
Using "gitlab" and "gitlab-ce" would be confusing, so we decided to
rename gitlab-ce to gitlab-foss to make the purpose of this FOSS
repository more clear.
I created a merge request for CE, and this got closed. What do I
need to do?
Everything in the ee/ directory is proprietary. Everything else is
free and open source software. If your merge request does not change
anything in the ee/ directory, the process of contributing changes
is the same as when using the gitlab-ce repository.
Will you accept merge requests on the gitlab-ce/gitlab-foss project
after it has been renamed?
No. Merge requests submitted to this project will be closed automatically.
Will I still be able to view old issues and merge requests in
gitlab-ce/gitlab-foss?
Yes.
How will this affect users of GitLab CE using Omnibus?
No changes will be necessary, as the packages built remain the same.
How will this affect users of GitLab CE that build from source?
Once the project has been renamed, you will need to change your Git
remotes to use this new URL. GitLab will take care of redirecting Git
operations so there is no hard deadline, but we recommend doing this
as soon as the projects have been renamed.
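For source installs, updating the remote is a one-liner; the URL below assumes the post-rename HTTPS path (use the SSH form if that's what your checkout uses):

```shell
# Point an existing checkout at the renamed repository.
git remote set-url origin https://gitlab.com/gitlab-org/gitlab-foss.git
git remote -v   # confirm the new URL took effect
```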
Where can I see a timeline of the remaining steps?