Investigate memory usage of CI builds
In https://gitlab.com/gitlab-org/gitlab-ce/issues/58882, we've seen out-of-memory issues in CI using Google's n1-standard-2
machines, which have 2 CPUs and 7.5 GB of RAM. Part of the problem is that we launch a lot of services:
- PostgreSQL
- Rails
- Gitaly
- Elasticsearch
- Chrome
- etc.
It's possible that we have a memory leak in our system, or that something else is causing these failures to happen more frequently than before.
We should profile these CI runs and see if anything is obviously taking more RAM than it should.
UPD (posted by Stan on June 4th; useful to have this info here):

We've bumped the CI runner types from Google's `n1-standard-2` (7.5 GB RAM) to `n1-highmem-2` (13 GB RAM) in the hope that this will solve some out-of-memory build failures (https://gitlab.com/gitlab-org/gitlab-ce/issues/58882). It may take up to 24 hours for all the old machines to cycle through.
(https://gitlab.slack.com/archives/C8PKBH3M5/p1559676972003100)
UPD 2 (the implementation we agreed on in the scope of this ticket; it was merged):

We add `.csv` generation during the CI test run, giving us information about the amount of free memory after each test example (it uses `free -m` for that).

The CSV is generated during the `coverage` job and can be found in the artifacts at `tmp/memory_test/report.csv`; separate per-example-group CSVs are there too. Every RSpec job also produces such an artifact for its examples (found in `tmp/memory_test`).

It can come in handy to see if we leak memory somewhere.
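For reference, a minimal sketch of how such a measurement could work: parse the `Mem:` row of `free -m` output and append one CSV row per test example. The method names and CSV layout here are illustrative assumptions, not the merged implementation.

```ruby
require "csv"

# Parse the output of `free -m` and return the "free" and "available"
# columns of the Mem: row, in megabytes.
# Expected column order: total, used, free, shared, buff/cache, available.
def parse_free_output(output)
  mem_line = output.lines.find { |line| line.start_with?("Mem:") }
  values = mem_line.split
  { free_mb: values[3].to_i, available_mb: values[6].to_i }
end

# Append one row per test example to a CSV report
# (illustrative helper; the real report path is tmp/memory_test/report.csv).
def record_memory(example_description, report_path, free_output)
  stats = parse_free_output(free_output)
  CSV.open(report_path, "a") do |csv|
    csv << [example_description, stats[:free_mb], stats[:available_mb]]
  end
end
```

In an RSpec suite this kind of helper could be wired into a `config.after(:each)` hook that shells out to `free -m`, so each example's description and post-run free memory end up as one CSV row.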