Use Cloud Native Buildpacks for Auto DevOps
Problem to solve
Currently, Auto DevOps uses Herokuish to match a project with a known language and build it. However, Herokuish is not cloud native, so the resulting Docker images are large and inefficient: builds take longer and produce bigger images than necessary. It would be ideal to increase the speed and efficiency of Auto DevOps.
Intended users
Developers, operators, and DevOps engineers.
Proposal
Cloud Native Buildpacks (CNB, a CNCF project) provides a standard mechanism for detecting application code and producing a standards-compliant (OCI) container image. The benefit is a separation between development and the operational execution of the container runtime.
Update our auto-build stage to follow the v3 CNB lifecycle and use v3 buildpacks where available, by making use of the pack CLI (https://github.com/buildpack/pack). If no v3 buildpack is available, we should fall back to a v2 buildpack. The latest CNB release (v0.4.0) can use a builder image with a shim that supports all of the officially supported Heroku v2 buildpacks, so language coverage matches what Herokuish gives us today.
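To make the direction concrete, here is a minimal, hypothetical sketch of what a pack-based build job could look like in `.gitlab-ci.yml`. It is not the actual Auto DevOps template: the pack release URL and version, the `heroku/buildpacks:18` builder image, and the Docker-in-Docker setup are assumptions, and the fallback to Herokuish for languages without a v3 buildpack is omitted.

```yaml
build:
  stage: build
  image: docker:stable
  services:
    - docker:stable-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    # Install the pack CLI. The release asset name/version below is an assumption;
    # pin whatever version we standardize on.
    - wget -qO- https://github.com/buildpack/pack/releases/download/v0.4.0/pack-v0.4.0-linux.tgz | tar -xz -C /usr/local/bin
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    # Build an OCI image with the Heroku CNB builder (which shims the official
    # v2 buildpacks), then push it for later stages to consume.
    - pack build "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --builder heroku/buildpacks:18 --path .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```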
With CNB there is a build image (where build-time system dependencies, such as developer libraries, are installed) and a run image (which only has the system dependencies necessary to actually run the code). They are used together: all application code and dependencies are installed into the workspace during the build, and the resulting workspace layers are then "rebased" onto the run image.
At the moment, Heroku provides a builder image that offers compatibility with their platform by running the existing Heroku v2 buildpacks behind a shim.
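As an illustration of that build/run image split (not part of the proposal itself), the pack CLI exposes the rebase operation directly: when the run image receives, say, an OS security patch, the previously built application layers can be placed on top of the new run image without re-running the buildpacks. The job name and the `heroku/pack:18` run image below are assumptions.

```yaml
rebase_run_image:
  stage: build
  when: manual
  image: docker:stable
  services:
    - docker:stable-dind
  script:
    # pack CLI installation omitted here (see the build sketch above).
    # Swap the run image underneath the existing app layers; no rebuild needed.
    - pack rebase "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --run-image heroku/pack:18
```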
Why now?
The beta for Cloud Native Buildpacks now covers six major languages (see https://blog.heroku.com/docker-images-with-buildpacks):
> "As part of this release, we’ve created a Heroku buildpacks builder image for Ruby, Node.js, Java, Python, PHP, and Go that works with the CNB tooling."
This means roughly half of the languages supported by the legacy buildpacks can now use CNB (see the table below), which makes this a good time to invest more effort in it.
| Language   | Type             |
|------------|------------------|
| Ruby       | CNB              |
| Node.js    | CNB              |
| Clojure    | legacy buildpack |
| Python     | CNB              |
| Java       | CNB              |
| Gradle     | legacy buildpack |
| Grails 3.x | legacy buildpack |
| Scala      | legacy buildpack |
| Play 2.x   | legacy buildpack |
| PHP        | CNB              |
| Go         | CNB              |
Additionally, once CNB gets stable enough, the maintainers of the v2 tooling will archive the gliderlabs/herokuish project, so we would have to maintain a fork on our own. The maintainers have stated that, given the pace of the CNB project, this could happen as early as Q1 2020.
Testing
What does success look like, and how can we measure that?
- Continue to measure the usage of Auto DevOps pipelines once CNB is in place.
- Measure build/pipeline time using CNB vs v2 buildpacks and document the gains (a sketch of one way to compare follows below).
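One way to collect the second measurement, sketched under the assumption that both build paths are wired up as separate jobs on the same commit (job names and images are hypothetical); the comparison can then be read from the job durations in the pipeline view.

```yaml
build_herokuish:
  stage: build
  image: gliderlabs/herokuish      # current v2 path; running it directly like this is an assumption
  script:
    - cp -R . /tmp/app             # herokuish builds the app it finds under /tmp/app
    - /bin/herokuish buildpack build

build_cnb:
  stage: build
  image: docker:stable             # pack/Docker-in-Docker setup as in the earlier build sketch
  services:
    - docker:stable-dind
  script:
    - pack build "cnb-timing-test:$CI_COMMIT_SHA" --builder heroku/buildpacks:18
```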
Links / references
- v3 lifecycle specification available at https://github.com/buildpack/spec
- reference implementation at https://github.com/buildpack/lifecycle
- https://www.cncf.io/blog/2018/10/03/cncf-to-host-cloud-native-buildpacks-in-the-sandbox/
- https://github.com/buildpack
- https://buildpacks.io/
- Cloud-native buildpacks slack group https://slack.buildpacks.io/
Original issue content
Problem to solve
Cloud Native Buildpacks (CNB) recently joined the CNCF, with involvement from Heroku and Pivotal:
- https://www.cncf.io/blog/2018/10/03/cncf-to-host-cloud-native-buildpacks-in-the-sandbox/
- https://github.com/buildpack
- https://buildpacks.io/
CNB aims to provide a standard mechanism for detecting code and producing a standards-based, OCI-compliant (https://www.opencontainers.org) container image. The benefit of using buildpacks over a fully-implemented CI/CD pipeline encoded in YAML or some other format is that it creates a separation between development and the operational execution of the container runtime. In short, developers can build applications in a general way, without being overly aware of operational concerns and best practices.

As soon as code is checked in, a buildpack can run that detects the code in the repository and automatically builds an OCI image. This image can then automatically be picked up and flow through the path to production according to pre-set rules and requirements.
We have some of this today: we already use Heroku buildpacks in CI, and Auto DevOps/CD already uses containers to orchestrate deployments in GKE. Using and contributing to this project instead of building our own solution benefits us in a few ways:
- Supporting CNCF/Open Source in general
- Container becomes natural handoff point between CI and CD to avoid blurring concerns
- Not having to invent our own separate solution
- Leverage buildpacks being developed and improved upon by many open source contributors
- Better interoperability with other products using compatible solutions
- Opportunity, if involved early, to help shape the project
The state of the project at the moment is that there is a specification and a reference implementation, but neither Heroku nor Pivotal has introduced the feature in their commercial products.
Target audience
- For Sasha (Developer), buildpacks in GitLab would provide assurance that software is being built and deployed with best practices built in; Sasha can focus on features and requirements in the app itself, and not worry about how it's going to be packaged up and delivered to different environments.
- Devon (DevOps Engineer) benefits by knowing that developers are using a proven, industry standard approach for generating containers to be deployed. Containers generated by development teams can be more trusted since they are more consistent.
Further details
This requires work on the CI side as well as the CD side.
- CI should be updated to prefer CNB, with the primary output of CI being an OCI image and associated metadata.
- The deployment portions of AutoDevOps should be rewired to take advantage of additional operational capabilities provided by OCI/buildpacks.
There is a v3 lifecycle specification available at https://github.com/buildpack/spec and a reference implementation at https://github.com/buildpack/lifecycle.
Proposal
Update CI/CD (Auto DevOps) to follow the v3 CNB lifecycle, which maps to GitLab as follows, where everything in the flow beyond Git/Code is handled by GitLab automatically using smart defaults and predictable behavior. Infrastructure (owned by ~Configure) is implicit in this flow, somewhere between container and cluster.
Git/Code -(CI BuildPack)-> Container -(CD AutoDevOps)-> Cluster
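A rough, illustrative mapping of that flow onto pipeline stages is sketched below. Job names are placeholders, the builder image is the same assumption as above, and the deploy step only marks the handoff point rather than showing the real Auto DevOps deployment logic.

```yaml
stages:
  - build     # CI: the buildpack detects the code and produces an OCI image
  - deploy    # CD: Auto DevOps takes that image to the cluster

build:
  stage: build
  script:
    - pack build "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --builder heroku/buildpacks:18
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  environment: production
  script:
    # The existing Helm-based Auto DevOps deployment would consume the pushed
    # image here; this placeholder only marks the CI -> CD handoff.
    - echo "deploy $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA to the cluster"
```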
We can also:
- Consider contributing to CNCF project
- Consider sponsoring CNCF project (not sure if this is possible?)
What does success look like, and how can we measure that?
Measure adoption of CNB in CI/CD pipelines