Group-sharable deployment pipelines
I've been mulling this over for a while, and we have a few issues on the topic, but I've finally managed to do a little exploring.
Video Demo
https://drive.google.com/file/d/1TPBGF7QDfTK389ITvgx4bG09aFFp7bp7/view?usp=sharing
Basic Idea
I think that, a lot of the time, if you have multiple projects, you're deploying them all to some common set of infrastructure. This could be:
- a k8s cluster
- a Heroku organization
- a fly.io organization
- an AWS Lambda organization
- a single/set of VPSs
- a single/set of physical machines in a room
This common set of infrastructure generally involves a common set of configuration, whether that's an auth token, namespace names, ingress configuration, SSH keys, or other elements. There might also be similar commands, such as in Heroku (`git push`), fly (`flyctl deploy`), or k8s (`kubectl apply`).
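To make "common configuration plus a common command" concrete, here's a rough sketch of what the shared piece might look like for a k8s target; the image, variable names, and paths are all illustrative:

```yaml
# Rough sketch: a hidden template job capturing the shared configuration
# and the shared deploy command for a k8s target.
.deploy-to-cluster:
  image: bitnami/kubectl:latest
  variables:
    KUBE_NAMESPACE: my-team            # shared namespace name
  script:
    # the auth token / kubeconfig would come from a CI/CD variable shared across projects
    - kubectl apply -f manifests/ --namespace "$KUBE_NAMESPACE"
```

Each project could then `extends: .deploy-to-cluster` and only supply its own manifests.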
We should be able to create a runbook/function/shared pipeline that takes inputs and executes the above. In my video demo, I accomplish this by using multi-project pipelines (with `strategy: depend`), environment variables, and some handy script utilities to do some simple value replacement. In larger examples, we can build project-specific configuration as a job artifact, and use `needs:pipeline` to download those artifacts and do a final merge.
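A minimal sketch of the application-project side of that setup, assuming a hypothetical `my-group/infrastructure` project holding the shared pipeline (job names, script, and variables are illustrative):

```yaml
# Build the project-specific configuration as an artifact...
build-config:
  stage: build
  script:
    - ./render-config.sh > deploy-config.yml   # hypothetical script doing the value replacement
  artifacts:
    paths:
      - deploy-config.yml

# ...then trigger the shared infrastructure pipeline with our inputs.
deploy:
  stage: deploy
  variables:
    APP_NAME: hello
    UPSTREAM_PIPELINE_ID: $CI_PIPELINE_ID      # lets the downstream pipeline find our artifacts
  trigger:
    project: my-group/infrastructure
    strategy: depend                           # this job mirrors the downstream pipeline's status
```

On the infrastructure side, a job would then use `needs:pipeline` with that pipeline ID (and `job: build-config`) to download the artifact and do the final merge.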
This is, however, kind of annoying to set up and has some major limitations. The limitations I uncovered in my exploration are:
- redeploys and rollbacks no longer work;
- and dynamic environments (including review apps) are challenging to set up.
As `trigger` and `environment` in GitLab CI YAML files are incompatible, there is a second job that creates the deployment. This means that when redeploying/rolling back, we only re-run the deployment job, which doesn't actually do the deploying. The simple answer here is to allow the combination of the two, and consider the entire second pipeline to be the deployment job.
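Concretely, the workaround looks roughly like this (project path and job names are illustrative), which is exactly why a redeploy or rollback only re-runs the job that records the deployment rather than the pipeline that performs it:

```yaml
stages:
  - deploy
  - record

# Runs the shared deployment pipeline; cannot also declare `environment`.
trigger-deploy:
  stage: deploy
  trigger:
    project: my-group/infrastructure
    strategy: depend

# Exists only to create the deployment record; re-running it does not
# re-run the downstream pipeline that actually deploys.
record-deployment:
  stage: record
  script:
    - echo "recorded deployment of $CI_COMMIT_SHA"
  environment:
    name: production
```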
There's also no (easy) way to have the results of a child pipeline returned to the parent across projects. This means that we can't create a `dotenv` artifact to help create dynamic environments. Here, we either need the ability to have a "return" value, or perhaps need to allow the child pipeline's `dotenv` artifact to affect the original job, combined with the mixing of `trigger` and `environment` described above.
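For reference, this is the single-pipeline `dotenv` pattern that breaks down once the actual deployment happens in a downstream pipeline; the deploy script and URL scheme are placeholders:

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy-review-app.sh                   # placeholder for the real deployment
    - echo "REVIEW_URL=https://$CI_ENVIRONMENT_SLUG.example.com" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env                       # reported variables feed back into the environment
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: $REVIEW_URL
```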
I've been thinking, though, of a couple other solutions that have some potential benefits beyond just allowing the above. They are either:
- group-level environments, or
- cross-project deployments.
Group-level environments
A group-level environment would be a group-level entity that contains a snippet of GitLab CI YAML sharing the common elements of deploying an application to the environment. In my demo, this is captured by the `infrastructure` project. Having a first-class group-level environment, however, would make it easier to understand where a deployment is deployed. If, in a micro-service architecture, each micro-service is deployed to a group-level environment, the group-level environment easily captures which version of each service is deployed, and naturally creates an interface showing that. It also restricts users to predefined environment names that may be set in place by operators to enforce consistency.
It can also be expanded into a group-level dynamic environment. Consider not just deploying to the group-level `production` environment, but deploying to a group-level review app environment. This would be able to utilize the same shared deployment code across all review apps, and perhaps allow users to easily spin up a review app for their micro-service that also deploys all the other micro-services.
This is the big solution: it enforces the idea that a single environment is some defined set of infrastructure to deploy to, shared across an organization. It could be utilized in the CI YAML as follows:
```yaml
deploy-job:
  deploy: # indicates deploying to a group-level environment
    to: production
    inputs:
      app_name: hello
      docker_image: hello:latest
```
Cross-project deployments
The other option. This alters the ownership of the current environment and deployment entities to allow a project's deployments to belong to another project's environments.
It captures pretty much all of the same functionality as the group-level environment, but has the extra benefit of being able to contain code.
This gives it the unique ability to utilize infrastructure as code as part of its pipeline, where the infrastructure project spins up, e.g., a new k8s namespace or AWS instance for a given environment name. Every time the project's main pipeline runs, the infrastructure can be updated/changed/rebuilt for the environment. It also enforces a link between infrastructure and environments, but brings along all the other project goodies of issues, MRs, incidents, etc. Its YAML would look similar, but would also contain a `project` key to specify which project's environment we are deploying to.
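Following the proposed syntax from the group-level example above (all of these keys are part of the proposal, not existing GitLab CI syntax), it might look like:

```yaml
deploy-job:
  deploy:
    project: my-group/infrastructure   # which project's environment we are deploying to
    to: production
    inputs:
      app_name: hello
      docker_image: hello:latest
```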
In both cases, a project's Environments page should still show all of that project's deployments and the environments they are deployed to, regardless of which group or project owns the environment or the deployment.