Auto DevOps - How it works and current problems
Goal
This issue attempts to explain how Auto DevOps currently works and its shortcomings. My goal is to understand what is not working well in the current implementation so that we can take it into consideration while re-imagining Auto DevOps.
Auto DevOps walkthroughs
The content is based on two videos from GitLab's Solution Architects (internal only), Peeling Back The Layers of AutoDevOps and AutoDevOps with and without Kubernetes, and on feedback from Product Managers and Product Designers in the issue Insights and Research for Auto DevOps from all Stages involved.
What is Auto DevOps and how does it work (with and without Kubernetes) (from @francispotter)
- It’s a library of CI configurations.
- It’s a library of Docker images used by the CI configurations.
- For local (self-managed) instances, the images have to be downloaded locally in order to be used.
- You can include parts of the Auto DevOps CI templates (no need to maintain them in this case), or you can copy and rewrite whatever parts of the Auto DevOps configuration you need. (Check out the Customizing Auto DevOps documentation.)
- Two of the main components that are important to understand are Auto-build and Auto-deploy. They are separate, and each can be used without the other.
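For example, instead of maintaining a full copy of the configuration, you can include just the pieces you need. A minimal sketch (the template names follow GitLab's built-in Jobs/*.gitlab-ci.yml naming convention; verify them against the templates shipped with your GitLab version):

```yaml
# .gitlab-ci.yml - include only the Auto DevOps build and deploy templates
# instead of the full Auto-DevOps.gitlab-ci.yml.
include:
  - template: Jobs/Build.gitlab-ci.yml
  - template: Jobs/Deploy.gitlab-ci.yml
```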
- Auto-build always builds a Docker image - it's mainly about Docker. It uses three main approaches:
- If there is an existing Dockerfile in the project, Auto-build will use it, do a Docker build, and push the image to the registry.
- If you don't have a Dockerfile, then Auto-build will try to use Herokuish buildpacks.
- What Heroku does is describe what it's looking for in your code to determine which buildpack to use.
- You can override that by putting a .buildpacks file in your directory.
- You can also use Cloud Native Buildpacks, which are the newer, open-source replacement for Heroku buildpacks.
- You have to set a variable to use Cloud Native Buildpacks.
- Potential problem: Is it visible to the user that they have to set this variable in this flow?
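As a sketch, enabling Cloud Native Buildpacks is a single CI variable. The variable name below is my assumption based on the Auto-build documentation at the time of writing; verify it against the current docs:

```yaml
# .gitlab-ci.yml - opt Auto-build into Cloud Native Buildpacks
# instead of the default Herokuish buildpacks.
variables:
  AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED: "true"
```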
- Auto-test is about your unit tests, integration tests, etc. It's the least developed part of Auto DevOps, which is odd given that DevOps is largely about testing.
- It makes broad guesses about what's in your code and which tests to run.
- Problem: It usually gets this wrong and works mainly for Ruby - you can override the script block to get it to work.
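A sketch of such an override, assuming a Ruby project and the `test` job name used by the Auto DevOps template (adjust the command to whatever your project actually runs):

```yaml
# .gitlab-ci.yml - replace the guessed test command with your own.
test:
  script:
    - bundle install
    - bundle exec rspec
```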
- Once your build is done, all the security scanners will run.
- SAST scanners actually analyze the source code. Even though these scanners run in a test stage after the build stage, they don't rely on the build stage (they don't rely on the Docker container), which means SAST jobs could be moved to start earlier to improve pipeline performance.
- Auto-deploy is the main deployment mechanism.
- It deploys to Kubernetes only.
- Auto DevOps can handle deployments to other targets like EC2 and Fargate, but those don't use the Auto-deploy job; they use AWS command-line tools called directly from the jobs.
- The Auto-deploy image contains a Helm chart, but the chart is quite hidden from the user. It also uses Tiller - you don't need to touch either of them, but they do need to be installed.
- Works for Staging and Production deployments, Review apps and Auto Monitoring. It’s the same job with different configurations.
- It provisions PostgreSQL by default. You will need to override it if you want to launch your own database.
- Problem: You might not want to use PostgreSQL.
- It runs database migrations to apply updates to your database schema.
- You can override the command if it’s getting it wrong.
- It expects the application to publish on port 5000.
- If you are using Herokuish this will be set up automatically.
- If you are using your own Docker image, you need to make sure to configure the port so it can be picked up by the load balancer and traffic can be routed correctly.
- Problem: This is not clear in the flow, so if you are using your own Docker image you might miss it. The error messaging is not clear either, so troubleshooting becomes a long journey.
- You can override many of the above by customising variables - there are 50-100 variables you can override (e.g. to not use PostgreSQL, or to add your own database migration command).
- Question: Is it clear which variables you can override?
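A sketch of two common overrides - disabling the default PostgreSQL and supplying a migration command. The variable names (`POSTGRES_ENABLED`, `DB_MIGRATE`) are my reading of the Auto DevOps documentation; verify them for your GitLab version:

```yaml
# .gitlab-ci.yml - override Auto DevOps defaults via CI variables.
variables:
  POSTGRES_ENABLED: "false"                   # skip the default PostgreSQL provisioning
  DB_MIGRATE: "bundle exec rake db:migrate"   # hypothetical custom migration command
```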
- Since Auto-deploy uses Helm to deploy your app, you might want to configure Helm.
- You can configure it in .gitlab/auto-deploy-values.yaml
- It uses Helm to Auto-deploy.
- Helm is in the image.
- You can do Helm customisation.
- The customisation doesn't have to be in variables; you can override how Helm works in .gitlab/auto-deploy-values.yaml
- Problem: The Helm configuration in .gitlab/auto-deploy-values.yaml is pretty hidden from the user.
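As a sketch, assuming the value names used by the bundled auto-deploy-app chart (verify against the chart version in your Auto-deploy image):

```yaml
# .gitlab/auto-deploy-values.yaml - values passed to the bundled Helm chart.
replicaCount: 2
service:
  internalPort: 5000   # the port the chart expects the app to listen on
  externalPort: 5000
```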
- Auto deploy supports canary deployments.
Setting up Auto DevOps on Self managed GitLab
- Not very different from GitLab.com
- You need Runners for it to run on.
- If using Auto deploy, you need
- access to a Kubernetes installation and at least one Kubernetes cluster.
- Tiller and Ingress installed by GitLab.
- If using Auto-Monitoring you need
- Prometheus
- If your GitLab instance is not connected to the internet you need to pre-download the container images.
- You might have limited functionality.
- The CI/CD templates are shipped with GitLab.
Summary of what’s included in Auto DevOps
Auto DevOps various components walkthrough and evaluation (from @juliebyrne)
- Problem: Pipelines run for any type of update. This can consume your CI minutes, pipelines fail, etc., creating a bad experience.
- Problem: If it's turned on at the instance level, pipelines will run for every type of project in the organisation. Not all projects are code projects.
- Problem: If you are using the .gitlab-ci.yml templates, they are pinned to the version of GitLab you are using. For example, if you upgrade your GitLab to 13.9, the YAML file will still be on 13.8 but the job template files will run the version available in 13.9.
- The job templates in the file point to the latest job templates on the master branch, and not necessarily the job that your pipeline is running.
- Problem: Each job uses a Docker image that contains everything the job needs to execute successfully. The Docker image also has a version, and sometimes it points to an older Docker image version.
How does Auto DevOps work: Things the user needs to know
- Scripts used for the jobs are described in the auto-deploy bin directory (you can find it, I think, from the directories copied in the Dockerfile - the lines starting with COPY). The scripts are called from the script section in the YAML file.
Auto-build Job
- Auto-build sets two environment variables that point to the Docker image that will be pushed to a container registry and then pulled from the registry to be deployed.
- If you have your own image that you want to auto-deploy, you have to set the variable path to the directory that contains your Dockerfile. (An alternative approach is to write your own build steps - better than using Auto-build from a performance perspective, and it gives you more power over how you want to build your app.)
- Question: If you build your own docker image (push to the container registry) how does the user have to interact with Packages and Registries?
- If the Docker image path is not correct (or something similar), the error is too generic to understand what's going on. The error wording should be fixed to say "image not found".
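A sketch of writing your own build job in place of Auto-build, using GitLab's predefined `CI_REGISTRY_*` and `CI_COMMIT_SHA` variables (the job layout is an assumption, not the exact Auto DevOps job):

```yaml
# .gitlab-ci.yml - a hand-written replacement for the Auto-build job.
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```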
- Specific configuration is needed around the Heroku buildpacks for the application to be built and deployed successfully.
- You need to specify a Procfile if you want more complex configuration of how your application runs. For simple applications it's fine not to use a Procfile.
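For illustration, a hypothetical Procfile for a Ruby app served by Puma (the process command is an assumption; use whatever command starts your app):

```
web: bundle exec puma -C config/puma.rb
```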







