Kubernetes Executor Cache Documentation

Problem to solve

The existing docs for the kubernetes executor say that a /cache volume is mounted on the pod, that the runner checks that directory for cache files, and that it downloads them from the configured storage if they are not found. Several parts of this are unclear to me, and it also doesn't seem to match the behavior I observe in my cluster.

Further details

My runners are version 17.8.3, deployed with chart version 0.73.3 into an EKS cluster running Kubernetes 1.31. They are configured with S3 as the cache location.
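For reference, the cache section of my runner config.toml looks roughly like this (the bucket name and region here are placeholders, not my real values):

```toml
[[runners]]
  name = "eks-runner"
  executor = "kubernetes"

  # Distributed cache backed by S3.
  [runners.cache]
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "my-runner-cache"      # placeholder
      BucketLocation = "us-east-1"        # placeholder
      # No AccessKey/SecretKey here; credentials come from the
      # node IAM role / IRSA in my setup.
```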

Some questions I have after reading that section:

  • Do I need to add that cache volume mount myself? It doesn't seem to happen automatically with my current configuration.
  • Is this volume supposed to be attached to the runner pod itself, or to the worker pods that it spawns?
  • Does the line "If not available, the cached data is downloaded from the configured storage" mean that the runner checks the volume for cache files first and only falls back to downloading from S3? That's how I read it, and it's exactly the behavior I'm looking for. I was planning to set this up myself using the AWS Mountpoint for S3 CSI driver, but if the kubernetes executor can do this natively that would be much easier: I could just mount a hostPath volume and let all runner pods scheduled on each node share a local cache, reducing S3 network calls.

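For context, the node-shared cache I describe in the last question would look roughly like this in the runner's config.toml, using a host_path volume on the build pods (the volume name and paths below are hypothetical):

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    # Mount a directory from the node into every build pod so that
    # all jobs scheduled on the same node share a local cache.
    # Name and paths are hypothetical examples.
    [[runners.kubernetes.volumes.host_path]]
      name = "shared-cache"
      mount_path = "/cache"
      host_path = "/var/lib/gitlab-runner/cache"
```

Whether the runner would then actually check /cache before hitting S3 is exactly what the documentation leaves unclear.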
Proposal

Update the documentation to answer the questions above more clearly, or remove the section altogether if it's no longer relevant.

Who can address the issue

Someone familiar with how the kubernetes executor handles the cache.

Other links/references