Display an error when service containers are OOM-killed
Description
When a service container consumes more memory than its limit allows, the k8s runner gives no indication in the runner logs that the service has been OOM-killed by Kubernetes. From the build container's perspective, the service still appears in the hosts file, but any attempt to connect to it will (understandably) fail. This makes the problem difficult to diagnose: it looks like a configuration or connectivity issue to users who don't have access to the Kubernetes Dashboard to see the "Terminated: OOMKilled" message.
In addition, the helper container and the corresponding pipeline job keep running, unaware that the job can no longer complete, until the configured pipeline timeout is reached.
Proposal
Whenever a service is OOMKilled, display an error in the build log so users are aware this has happened.
The remaining containers and the pipeline job should be terminated when such an event occurs, on the assumption that the Runner cannot recover from losing one of its services to the OOM-killer. A rough sketch of how the detection could work is shown below.
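One possible approach, sketched below under the assumption that the executor already polls the build pod's status via client-go: inspect the service containers' statuses for a `Terminated` state with reason `OOMKilled`, write an error to the job trace, and return an error so the build can be failed instead of waiting for the timeout. The function name, parameters, and error message are illustrative assumptions, not the runner's existing API.

```go
package oomcheck

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkServicesForOOM is a hypothetical helper showing where the OOMKilled
// reason surfaces in the pod's container statuses. The trace writer stands in
// for the job log.
func checkServicesForOOM(ctx context.Context, client kubernetes.Interface, namespace, podName string, trace io.Writer) error {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("getting pod %q: %w", podName, err)
	}

	for _, cs := range pod.Status.ContainerStatuses {
		// A container can be OOM-killed while still shown as its current state
		// (State.Terminated) or after a restart (LastTerminationState.Terminated).
		for _, term := range []*corev1.ContainerStateTerminated{
			cs.State.Terminated,
			cs.LastTerminationState.Terminated,
		} {
			if term != nil && term.Reason == "OOMKilled" {
				// Surface the failure in the build log so the user sees it,
				// then return an error so the job can be aborted.
				fmt.Fprintf(trace, "ERROR: service container %q was OOM-killed (exit code %d); "+
					"increase its memory limit or reduce its memory usage\n", cs.Name, term.ExitCode)
				return fmt.Errorf("service container %q terminated: OOMKilled", cs.Name)
			}
		}
	}
	return nil
}
```

If this check runs on each status poll, the job could be failed with a clear message as soon as the OOM kill is observed rather than hanging until the pipeline timeout.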
Links to related issues and merge requests / references
n/a