Performance of GitLab.com for the period of Sept-Nov 2019
@andrewn did an investigation after the incident from production#1421 (closed)
I loaded 3 months of Rails and Workhorse log data into BigQuery. Here is our p95 request latency over that time. As you can see, latencies have spiked sharply over the past few weeks.
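As a rough sketch of the aggregation behind this chart: a p95 (nearest-rank method) computed per day over (day, duration) pairs parsed from the logs. The pair format and field names here are assumptions for illustration, not the actual log schema.

```python
def p95(values):
    """95th percentile of a list of durations, by the nearest-rank method."""
    ordered = sorted(values)
    # ceil(0.95 * n) in exact integer arithmetic, then convert to 0-based index.
    rank = (95 * len(ordered) + 99) // 100
    return ordered[rank - 1]

def daily_p95(records):
    """records: iterable of (day, duration_ms) pairs parsed from the logs.
    Returns {day: p95 duration} so the trend can be plotted over time."""
    by_day = {}
    for day, duration_ms in records:
        by_day.setdefault(day, []).append(duration_ms)
    return {day: p95(durations) for day, durations in sorted(by_day.items())}
```

In BigQuery itself the equivalent would be a `GROUP BY` on the day with an approximate-quantile aggregate rather than pulling rows out.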
How does that look when broken down by the 30 most expensive controllers in the application?
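Ranking "most expensive" here means total time spent in each controller, not per-request latency. A minimal sketch of that ranking, assuming (controller, duration) pairs as above:

```python
from collections import defaultdict

def most_expensive_controllers(records, n=30):
    """records: iterable of (controller, duration_ms) pairs.
    Returns the n controllers with the largest total wall-clock time."""
    totals = defaultdict(float)
    for controller, duration_ms in records:
        totals[controller] += duration_ms
    # Sort by aggregate time, descending, and keep the top n.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:n]
```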
Not a big surprise, but Projects::MergeRequests::ContentController is incredibly expensive - there’s some evidence that this is leading to other problems, but we need to dig further.
Over the past three months, the relative amount of time we’ve spent processing ContentController calls has risen. This may be leading to unicorn worker saturation.
Relative cost of all web controllers by the total amount of time they spend in Gitaly calls. When viewed in this way, the cost of ContentController is staggering.
Total cost of all web controllers in terms of total database call duration, over the past 3 months. I think it would definitely be worth understanding why GitHttpController is so expensive in database time.
Same query, sliced by view time (render time). What’s going on with RootController that it’s becoming so expensive?
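The last three breakdowns are the same aggregation sliced by a different duration column: Gitaly call time, database call time, and view/render time. A generic sketch, where the dict keys (`controller`, `gitaly_duration_ms`, `db_duration_ms`, `view_duration_ms`) are hypothetical field names, not the actual log schema:

```python
from collections import defaultdict

def cost_by_controller(records, column):
    """records: iterable of per-request dicts; column names the duration
    field to sum, e.g. a hypothetical "gitaly_duration_ms",
    "db_duration_ms", or "view_duration_ms". Returns controllers ordered
    by total time spent in that phase."""
    totals = defaultdict(float)
    for record in records:
        # Requests that never touch this subsystem contribute zero.
        totals[record["controller"]] += record.get(column, 0.0)
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)
```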