Consider increasing output_limit on larger SaaS runners

All SaaS runners are currently configured with the default Maximum build log size (`output_limit`) value of 4096 (4 MB). We have been seeing requests from customers (see https://gitlab.zendesk.com/agent/tickets/349712 for example) that this value is significantly lower than what our competition offers (Azure DevOps, BitBucket) and makes it impossible to work with CI jobs that produce highly verbose logs.

Can we consider increasing this value for Medium and Large SaaS runners and analyse the possible impact?
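For context, `output_limit` is a per-runner setting in the runner manager's `config.toml`, expressed in kilobytes (4096 = 4 MB today). A minimal sketch of what the change would look like, assuming we settled on 10 MB (one of the candidate sizes mentioned in the comments below); the runner name and the actual target value are placeholders, since deciding the value is what this issue is for:

```toml
# /etc/gitlab-runner/config.toml on the runner manager host
[[runners]]
  name = "saas-runner-medium"   # hypothetical runner name
  # Maximum build log size in kilobytes; the default is 4096 (4 MB).
  output_limit = 10240          # 10 MB, assumed target value
```

On SaaS this would be rolled out through the chef-repo definition mentioned below rather than edited by hand on the hosts.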

Slack thread for more details: https://gitlab.slack.com/archives/C048DKF1PT4/p1670430548801859

Comments from @tmaczukin:

  • We save the trace in a file buffer on the runner manager host. Bigger output means more disk space used on that host. I'd need to check what the current usage values are and estimate how much space the maximum possible number of jobs executed in parallel would take; with that we would know whether this is a problem or not. Apart from that, I don't think there are any other blockers. Traces in GitLab on SaaS are stored temporarily in Redis and, after the job is done, they are transferred to GCS as job artifacts. A 4/6/10 MB artifact should not be a problem here.
  • We will need to update the chef-repo definition (after we decide that we can go with the increase).
  • What is the size of the output limit that we would aim for?
  • Which brings me to an idea: we should have a counter in the runner metrics that is increased each time a job exceeds that limit - then we would see what scale of customers is even affected by the value we have configured. This should be fairly simple to do - I'll draft an MR later 🙂 (see the sketch after this list)
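A minimal sketch of the counter idea from the last point, using the Prometheus Go client that the runner already exposes its metrics with; the metric name, label-free shape, and increment site are assumptions for illustration, not the final MR:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counts jobs whose log hit the configured output_limit.
// The metric name is hypothetical; a real MR would follow the
// runner's existing gitlab_runner_* naming conventions.
var outputLimitExceeded = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "gitlab_runner_job_trace_output_limit_exceeded_total",
	Help: "Total number of jobs whose log exceeded the configured output_limit.",
})

func main() {
	prometheus.MustRegister(outputLimitExceeded)

	// In the runner, this increment would live in the trace-writing path,
	// at the point where the trace stops being written because the limit
	// was reached. Here it is called once just to make the sketch runnable.
	outputLimitExceeded.Inc()

	// The runner already serves Prometheus metrics over HTTP; this stands
	// in for that existing /metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9252", nil))
}
```

With such a counter we could compare, per runner fleet, how many jobs actually hit the limit before and after any increase, which would tell us whether the new value is large enough in practice.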

/cc @DarrenEastman