helper artifact upload bottleneck - parallel gzip?
It seems that the GitLab Runner helper is mostly CPU-bound when uploading artifacts.
Has a parallel gzip implementation ever been considered, e.g. pigz (https://zlib.net/pigz/)? I'm already using FF_USE_FASTZIP together with CACHE_COMPRESSION_LEVEL/ARTIFACT_COMPRESSION_LEVEL.
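For context, those variables are set as CI/CD variables, roughly like this (the specific level value here is illustrative, not a recommendation):

```yaml
variables:
  # Use the fastzip archiver/extractor instead of the legacy one.
  FF_USE_FASTZIP: "true"
  # Trade compression ratio for speed on artifacts and cache.
  ARTIFACT_COMPRESSION_LEVEL: "fast"
  CACHE_COMPRESSION_LEVEL: "fast"
```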
In my setup I have a few dozen runners and local S3 object storage (Ceph). Each runner uploads /dev/urandom-generated binaries at ~10 MB/s. There is very little difference between running a single artifact upload and running uploads in parallel across those dozen runners (i.e., the backend can handle the load; the runners are the bottleneck).
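A minimal sketch of how to observe the compression bottleneck outside the runner; the file size and the `-1` compression flag are my choices for illustration, not what the helper uses internally:

```shell
# Create an incompressible test artifact, like the /dev/urandom binaries above.
head -c 50000000 /dev/urandom > artifact.bin

# Single-threaded gzip: pegs one core, so throughput tops out far below
# what the network/backend can absorb.
time gzip -1 -c artifact.bin > artifact.bin.gz

# Parallel gzip (pigz) produces the same gzip format but spreads the
# deflate work across all cores, if it is installed.
if command -v pigz >/dev/null 2>&1; then
  time pigz -1 -c artifact.bin > artifact.pigz.gz
fi
```

On a multi-core machine the pigz run should finish in roughly 1/Nth of the gzip wall time, which is why it seems like a natural fit for the helper's upload path.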