'cat' of a large file fails (probably because stdout is a non-blocking FD)
We have a CI job that collects the output of make into a logfile (because it is huge), but then, if make fails, we use cat log to show the logfile in the CI output so that we can debug what went wrong.
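For context, the failing step is roughly of this shape (a simplified sketch, not the literal job definition):

# simplified sketch of the CI step: capture all make output, dump it on failure
make > log 2>&1 || (cat log; exit 1)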
However, that does not work as expected: after a good thousand lines of log output, cat stops and says
cat: write error: Resource temporarily unavailable
From what I found online, this indicates an EAGAIN error. Such an error arises when writing to a non-blocking FD whose buffer is full. The buffer being full is not surprising, since we are feeding a large amount of data into it very quickly; the surprising part is that the FD for stdout seems to be non-blocking. This is not a supported configuration for most standard Unix tools: they expect stdout (and stderr) to be blocking FDs.
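One way to test this hypothesis from inside a job would be to inspect the status flags on FD 1, for example with a Python one-liner (a sketch, assuming python3 is available on the runner image):

# prints a non-zero value if O_NONBLOCK is set on stdout (FD 1), 0 otherwise
python3 -c 'import fcntl, os; print(fcntl.fcntl(1, fcntl.F_GETFL) & os.O_NONBLOCK)'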
Does that sound plausible? Is it possible that gitlab-ci-runner invokes jobs in a way that leaves the FD for stdout non-blocking? If so, I think that is a bug: it can lead to trouble that is quite hard to diagnose, as in our case.
(As a work-around we now do something like cat log || (echo "Dumping the whole log failed; here are the last 100 lines" && tail -n100 log), which helps, but of course now we have to wonder whether there is something important in the part of the log that we cannot see...)
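An alternative work-around might be to clear O_NONBLOCK on stdout before dumping the log; a sketch (untested, again assuming python3 is available in the job image):

# clear O_NONBLOCK on FD 1 so that cat blocks instead of failing with EAGAIN
python3 -c 'import fcntl, os; fl = fcntl.fcntl(1, fcntl.F_GETFL); fcntl.fcntl(1, fcntl.F_SETFL, fl & ~os.O_NONBLOCK)'
cat log

Note that O_NONBLOCK lives on the open file description rather than on the process, so clearing it here also affects anything else sharing that stdout, which is exactly why a runner setting it in the first place seems problematic.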