Use alpine as base image

Merged Nick Ilieskou requested to merge nightly_build into main

What does this MR do?

  • We use the alpine image as the base image and install Trivy on top of it.
  • We extract the Trivy version to install into a separate file, which makes it easy to bump the Trivy version later.
  • We add a script that downloads and installs Trivy, fetching the correct archive for the build architecture (a hedged sketch follows this list).
  • We update build-docker-image.sh to pass the architecture as an argument during the build process.
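  As an illustration of the last two points, here is a minimal sketch of what the install script could look like. The file name setup.sh is taken from the discussion below, but the URL pattern, version-file handling, and architecture names are assumptions rather than the exact contents of this MR:

    #!/bin/sh
    # setup.sh -- hypothetical sketch: download and install Trivy for a given architecture.
    set -eu

    ARCH="$1"            # e.g. amd64 or arm64, passed in from the Docker build
    TRIVY_VERSION="$2"   # read from the extracted version file

    # Map the Docker architecture name to the name used in Trivy release archives.
    case "$ARCH" in
      amd64) TRIVY_ARCH="Linux-64bit" ;;
      arm64) TRIVY_ARCH="Linux-ARM64" ;;
      *) echo "unsupported architecture: $ARCH" >&2; exit 1 ;;
    esac

    # Download the release archive and install the binary.
    wget -q -O /tmp/trivy.tar.gz \
      "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_${TRIVY_ARCH}.tar.gz"
    tar -xzf /tmp/trivy.tar.gz -C /usr/local/bin trivy
    rm /tmp/trivy.tar.gz

  In the alpine-based Dockerfile, this script and the version file would be copied in and the script run with the architecture passed as a build argument.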

Why are we doing this?

We want to update the base image daily. The Trivy image we currently extend is itself based on alpine, so we rely on Trivy releases to pick up fixes for vulnerabilities in the base image. To support quick vulnerability fixes in the base image, we use alpine directly as the base instead of the Trivy image. This way we can rebuild the image daily and always get the latest version of the base.
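A rough sketch of how the daily rebuild could be wired up, assuming a GitLab scheduled pipeline triggers the existing build job; the job name and script invocation are placeholders, not the project's actual CI configuration:

  # .gitlab-ci.yml fragment (illustrative)
  build-image:
    stage: build
    script:
      - ./build-docker-image.sh amd64
    rules:
      # run on the nightly schedule as well as on normal branch pipelines
      - if: '$CI_PIPELINE_SOURCE == "schedule"'
      - if: '$CI_COMMIT_BRANCH'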

What are the relevant issue numbers?

Implement nightly build policy for Trivy K8s Wr... (gitlab-org/gitlab#444470 - closed)

Edited by Nick Ilieskou

Merge request reports

Activity

    • Resolved by Nick Ilieskou

      @nilieskou As asked on Slack, I had a quick look at this MR and left a non-blocking suggestion.

      Also, this is out of scope for this MR, but I'm wondering why we don't bundle the Trivy-DB in this k8s wrapper like we do for CS? Looking at the code, I'm assuming it is pulled at runtime for each scan, right?

  • added 1 commit

  • added 1 commit

    • 89a88c6a - Apply 1 suggestion(s) to 1 file(s)

  • added 1 commit

  • Philip Cunningham approved this merge request

  • added 1 commit

  • Nick Ilieskou reset approvals from @philipcunningham by pushing to the branch

  • added 1 commit

  • added 1 commit

  • added 1 commit

  • added 1 commit

  • added 1 commit

    • Resolved by Nick Ilieskou

      @hacks4oats I would really appreciate it if you could take a look at this MR. For some reason the arm64 build job is failing: it seems to time out whenever it performs an action that reaches the internet, like apk update or wget. I think I might have a mistake in my Dockerfile or setup.sh script. I could use a second pair of eyes here, and you are very good with Dockerfiles. Thanks in advance.

    • Resolved by Nick Ilieskou

      In order to support quick vulnerability fixes in the base image we use alpine as a base instead of the trivy image. This way we can rebuild the image daily and get the latest version of the base

      @nilieskou, if trivy is based on alpine, wouldn't adding apk update/upgrade to the existing build achieve the same result? (An illustrative sketch follows this comment.)

      I understand that the image would grow a bit, because this would add another layer with the updates, but it might be a quicker iteration than replacing the whole image.

      Sounds like you've done most of the work anyway, but I wanted to ask to see if I'm off.

      Edited by Thiago Figueiró
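      For reference, the alternative suggested here would look roughly like the following; the image tag is a placeholder and this is not the approach taken in the MR:

        # Illustrative sketch of the suggested alternative: keep the Trivy image as the
        # base and refresh the Alpine packages on every (daily) rebuild.
        FROM aquasec/trivy:0.50.1

        # Adds one extra layer containing the updated packages.
        RUN apk update && apk upgrade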
  • added 1 commit

  • added 1 commit

    • Resolved by Nick Ilieskou

      @hacks4oats I am trying to see why the arm64 job was failing in the first place.

      When I committed the changes you suggested, the pipeline was green (bc41907a), but after my new commit (60d1e057) introducing the ENV command the pipeline is timing out. This is weird, right?

      On another front, I have forked this project and tried to use buildx, which seems to work (an illustrative invocation follows this comment). Of course, migrating the project to use only buildx would require quite some work, since it's not straightforward, mainly around how we push our images. We should keep this out of the scope of this issue.

      Edited by Nick Ilieskou
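      For context, the buildx experiment mentioned above would look something like this; the registry path and tag are placeholders:

        # Illustrative multi-arch build with buildx (not part of this MR)
        docker buildx create --use
        docker buildx build \
          --platform linux/amd64,linux/arm64 \
          --tag registry.example.com/tmp/trivy-k8s-wrapper:latest \
          --push .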
  • added 1 commit

  • added 1 commit

    • 3b44c5b4 - Added --network=host to docker build

  • added 1 commit

    • 1edbc88a - Added --network=host to docker build

  • @philipcunningham Could you please take another look at this MR? Two things have changed since you last looked:

    • We solved the failing arm64 build job by adding command: ["--mtu=1400"] to the DinD service (a sketch of the configuration follows this comment)
    • I applied all of @hacks4oats's refactoring suggestions from !30 (comment 1911214296), except adding an ENV command instead of exporting the PATH

    Thanks in advance

    Edited by Nick Ilieskou
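    For reference, the DinD change described above would look roughly like this in .gitlab-ci.yml; the job name and script line are placeholders, and only the service block reflects the actual change:

      build-image:
        image: docker:latest
        services:
          # Pass --mtu=1400 to the nested Docker daemon so packets generated
          # inside DinD fit within the MTU configured on the hosted runners.
          - name: docker:dind
            command: ["--mtu=1400"]
        script:
          - ./build-docker-image.sh arm64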
  • added 1 commit

    • 7027c384 - Specify mtu instead of --network=host

  • Nick Ilieskou mentioned in merge request !32 (merged)

  • Philip Cunningham approved this merge request

  • Thanks, @nilieskou. I've added one nitpick but, otherwise, this LGTM. Approved!

  • Why do we need MTU 1400 for DinD?

    From a Slack conversation, and particularly this comment:

    Hosted Runners are hosted in Google Cloud, and in GCP the VPCs have a smaller MTU by default. The Runner team had to reconfigure the host-level Docker Engine to match this MTU, which is why --network=host works. But if you start a DinD process, it starts the nested Docker Engine with the default configuration, and the default doesn't detect the MTU of the host - it is simply hardcoded to 1500, which is more than the 1460 set by the Runner team at the host level.

    For many external networking requests that's not a problem at all: the packets are smaller anyway and never hit the MTU limit. But if you generate bigger packets, especially with "Do not fragment" set (some tools may do that), strange problems can happen when the MTUs don't match. Apparently, for some reason, this is even more visible on the arm64 nodes. That's why aligning the Docker-in-Docker MTU with what the rest of our networking stack uses helps: tools like apk then generate packets that fit within the host's MTU.
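    One way to sanity-check the effective value, assuming a job that uses the docker:dind service: ask the nested daemon what MTU it applies to its bridge network, or read the interface MTU from a throwaway container.

      # MTU option on the nested daemon's default bridge network
      docker network inspect bridge --format '{{ index .Options "com.docker.network.driver.mtu" }}'

      # MTU as seen from inside a container started by the nested daemon
      docker run --rm alpine cat /sys/class/net/eth0/mtu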
