Runner is slow when executing Docker image that extracts a large number of files
## Proposal
We ran into the following two issues as part of "trivy-db-glad cannot be built" (#423588 - closed):
- The `build-db` job now takes over an hour, whereas it used to take about 7 minutes. GitHub is able to execute this step in under 10 minutes, and we were able to get a GCP-based Linux machine to complete it in about 6 minutes, so there seems to be an issue with our runners.
- Running `wget` and piping its output into `tar` results in an error:

  ```
  wget -qO - https://github.com/CocoaPods/Specs/archive/master.tar.gz | tar xz -C cache/cocoapods-specs --strip-components=1
  wget: error getting response
  tar: unexpected end of file
  ```
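The `tar: unexpected end of file` message is what `tar` prints when its input stream ends mid-archive, which is consistent with `wget` dropping the connection partway through. The failure mode can be reproduced locally by truncating a valid archive (file and directory names here are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a valid archive: one top-level directory, like the GitHub tarball.
mkdir -p Specs-master/Sources
head -c 50000 /dev/urandom > Specs-master/Sources/data
tar czf full.tar.gz Specs-master

# Simulate a dropped connection by cutting the stream off mid-archive.
head -c 1000 full.tar.gz > truncated.tar.gz

mkdir -p cache/cocoapods-specs
if cat truncated.tar.gz | tar xz -C cache/cocoapods-specs --strip-components=1 2> err.txt; then
  echo "extraction unexpectedly succeeded"
else
  echo "extraction failed, as in the job:"
  cat err.txt
fi
```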
I believe both of these problems are related. The purpose of this issue is to figure out why our runner is so slow when executing the following:
```
wget -qO - https://github.com/CocoaPods/Specs/archive/master.tar.gz | tar xz -C cache/cocoapods-specs --strip-components=1
```
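For context, `-C` sets the extraction directory and `--strip-components=1` drops the top-level `Specs-master/` directory from every extracted path. A local sketch of the same extraction step, with a small locally built archive standing in for the GitHub tarball (file names are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A stand-in for the GitHub tarball: everything under one top-level directory.
mkdir -p Specs-master/Sources
echo 'name: Alamofire' > Specs-master/Sources/Alamofire.podspec
tar czf specs.tar.gz Specs-master

# Same shape as the job's command, reading the archive from a pipe.
mkdir -p cache/cocoapods-specs
cat specs.tar.gz | tar xz -C cache/cocoapods-specs --strip-components=1

# The top-level directory is gone from the extracted paths.
ls cache/cocoapods-specs
```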
Here's an example pipeline that shows a failure from the above line: https://gitlab.com/adamcohen/slow-runner/-/jobs/5003107228. This pipeline took 11 minutes before it failed, while the same job on GitHub succeeds in 2m48s.
Once we figure out how to speed up this process and make `wget | tar` work consistently, we can remove the workaround added by Fix failing pipeline (gitlab-org/security-products/dependencies/trivy-db-glad!19 - merged).
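Until the root cause is understood, one mitigation is to split the step in two: download to a file with retries, then extract, so a dropped connection fails loudly and can be retried instead of feeding `tar` a truncated stream. A sketch of that pattern (the `retry` helper is illustrative and is not necessarily the workaround from the linked MR):

```shell
# retry N CMD...: run CMD up to N times, pausing briefly between attempts.
retry() {
  attempts=$1
  shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Intended use in the job (two steps instead of one streamed pipe):
#   retry 3 wget -q -O specs.tar.gz https://github.com/CocoaPods/Specs/archive/master.tar.gz
#   tar xzf specs.tar.gz -C cache/cocoapods-specs --strip-components=1
```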
