Analyze derailed reports for memory-saving opportunities
We're running derailed benchmarks as part of our CI pipeline, generating two reports:
- Boot memory (e.g. https://gitlab-org.gitlab.io/-/gitlab/-/jobs/554441068/artifacts/tmp/memory_on_boot.txt)
- Bundle memory (e.g. https://gitlab-org.gitlab.io/-/gitlab/-/jobs/554441067/artifacts/tmp/memory_bundle_mem.txt)
While we report anomalies through Danger (e.g. when memory increases due to adding a large dependency), we have, to my knowledge, not yet made a serious attempt at shrinking the existing memory footprint based on these reports, especially with regard to the different Runtimes we have.
For example, boot_memory suggests that we are loading a 52MB GraphQL schema file. Do we need this for every type of application we run, such as Sidekiq or ActionCable? Are there perhaps gems used only in the rendering pipeline that we don't need to load when running Sidekiq?
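As a rough illustration of runtime-conditional loading, a boot path could gate expensive requires on the process type. This is a hypothetical sketch, not GitLab's actual boot code: the runtime names, the RUNTIME variable, and the helper are all illustrative assumptions.

```ruby
# Hypothetical: process types that actually need the rendering/GraphQL gems.
RENDERING_RUNTIMES = %w[puma console].freeze

# Returns true when the given runtime should load rendering-related gems.
def needs_rendering_gems?(runtime)
  RENDERING_RUNTIMES.include?(runtime)
end

# At boot, something like (illustrative only):
#   require "graphql" if needs_rendering_gems?(ENV["RUNTIME"])
```

Under this sketch, a Sidekiq process would skip the require entirely, so the 52MB schema would never be paged in for background workers.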
Furthermore, we currently run the boot benchmark with a cut-off point of 0.3MB. This can mask a death-by-a-thousand-cuts problem: we might be excluding a large number of small files and dependencies, giving the impression of much less memory use than is actually the case.
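To estimate how much memory hides below the cut-off, one could sum the excluded entries from a full (no cut-off) report. A minimal sketch, assuming lines shaped like "name: 1.2345 MiB"; the actual report layout may differ:

```ruby
# Hypothetical: total memory (in MiB) of report entries below the cut-off.
# The "name: <number> MiB" line format is an assumption about the report.
def memory_below_cutoff(lines, cutoff_mib: 0.3)
  lines.sum do |line|
    mib = line[/:\s*([\d.]+)\s*MiB/, 1].to_f # non-matching lines count as 0.0
    mib < cutoff_mib ? mib : 0.0
  end
end
```

If the sum is large relative to the reported total, the small dependencies add up and the cut-off is hiding real cost.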
Goal
- Go through boot and static memory reports and identify potential memory hogs
- If there are any:
- can we load them conditionally / not in a particular runtime?
- can we delay their loading until we need them?
- Run the benchmarks with no CUT_OFF and see how large the delta is. Do we have a lot of smaller dependencies that add up to a large amount of memory used, or is it mostly a small number of large dependencies that's the problem?
- Create issues whenever there are actionable results
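For the "delay loading until we need them" option above, one common pattern is to wrap a large artifact (such as the GraphQL schema) so it is only loaded on first access. A hypothetical sketch; the block passed to the loader stands in for the real, expensive require:

```ruby
# Hypothetical: defers an expensive load until the value is first requested.
class LazySchema
  def initialize(&loader)
    @loader = loader # the expensive load, e.g. requiring the schema file
  end

  # Loads on first call, then returns the memoized value.
  def schema
    @schema ||= @loader.call
  end

  # True only once the expensive load has actually happened.
  def loaded?
    !@schema.nil?
  end
end
```

A Sidekiq process that never touches the schema would then never pay its memory cost, at the price of a one-time latency hit on first access in processes that do.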