Improve rendering performance of the Merge Request Changes page with a large number of changes

Quality has recently built a new SiteSpeed pipeline to start testing various pages across the application in a lab-like setting to find browser rendering performance issues, much like we already do for server-side issues.

One page that is a clear offender is the Merge Request Changes page, which takes a notably heavy hit in the browser - especially when the number of changes is higher:

NAME                                  | FCP (ms) | LCP (ms)         | TBT (ms)         | SI (ms) | LVC (ms) | TFR SIZE (kb) | SCORE | RESULT
--------------------------------------|----------|------------------|------------------|---------|----------|---------------|-------|-------
web_project_merge_request_changes     | 1538     | ✘ 12542 (<12500) | ✓ 26156 (<32500) | 9963    | 12616    | 1352.1        | 73    | FAILED

(Test run video: web_project_merge_request_changes-chrome-2020-08-28044323-video)

The page was tested with the same test data we use with GPT on our 10k Reference Architecture, from a client machine running SiteSpeed (Chrome) with a 4 core CPU and 16GB RAM. The page targeted is for a large but realistic merge request of 678+ files and +24260 -11587 changes. Of note, a customer is reporting similarly bad performance with a smaller MR, as performance looks to be based on what's visible in the viewport.

Quality is still in the process of codifying official rendering performance targets for GitLab's pages, but as this page is such a clear offender it's better to start the process now rather than later. At this time the proposed targets, based on Web Vitals, are as follows (a quick in-browser measurement sketch follows the list):

  • LCP - Largest Contentful Paint reports the render time of the largest image or text block visible within the viewport. Recommended target - 2500ms.
  • TBT - Total Blocking Time measures the total amount of time where the main thread was blocked for long enough to prevent input responsiveness. Recommended target - 300ms.

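For reference, both metrics can be observed directly in the browser with the standard PerformanceObserver API. The TypeScript sketch below is illustrative only (it is not part of the SiteSpeed pipeline): it logs LCP candidate entries and approximates TBT by summing the portion of each long task beyond 50ms, using the proposed targets above as thresholds.

```typescript
// Illustrative only: observe LCP and approximate TBT from the browser console.
// Thresholds mirror the proposed targets from this issue.
const LCP_TARGET_MS = 2500;
const TBT_TARGET_MS = 300;

// Largest Contentful Paint: the last emitted entry is the final LCP value.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `LCP candidate: ${entry.startTime.toFixed(0)}ms`,
      entry.startTime > LCP_TARGET_MS ? '(over target)' : '(within target)',
    );
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Total Blocking Time: sum of the portion of each long task beyond 50ms.
// (Strictly, TBT only counts long tasks between FCP and Time to Interactive;
// summing all long tasks is close enough for a quick check.)
let totalBlockingTime = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    totalBlockingTime += Math.max(0, entry.duration - 50);
  }
  console.log(
    `TBT so far: ${totalBlockingTime.toFixed(0)}ms`,
    totalBlockingTime > TBT_TARGET_MS ? '(over target)' : '(within target)',
  );
}).observe({ type: 'longtask', buffered: true });
```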
As shown above, the page is notably breaking these thresholds. Much like GPT issues, this issue may be closed once notable progress has been made, and another raised to track further progress if desired.

Finally, as a suggestion, we may want to look at imposing limits on this page for extremely large MRs, much like we do for File Blobs (and as is being proposed for File Blame).
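As a rough illustration of combining such limits with the viewport-based behaviour noted above, one common mitigation is to only mount the heavy diff content for a file once it scrolls near the viewport, and to collapse files beyond a hard cap by default. The sketch below is a generic TypeScript example, not GitLab's implementation; the `.diff-file` selector, `MAX_EXPANDED_FILES` cap and `renderDiff` helper are hypothetical names for illustration.

```typescript
// Hypothetical sketch: defer rendering of off-screen diff files and cap how
// many are expanded by default. Names below are made up for illustration.
const MAX_EXPANDED_FILES = 100; // beyond this, files start collapsed

function renderDiff(placeholder: HTMLElement): void {
  // In a real app this would mount the syntax-highlighted diff component.
  placeholder.textContent = `Rendered diff for ${placeholder.dataset.path}`;
}

const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        renderDiff(entry.target as HTMLElement);
        obs.unobserve(entry.target); // render each file at most once
      }
    }
  },
  { rootMargin: '500px 0px' }, // start rendering shortly before a file scrolls into view
);

document.querySelectorAll<HTMLElement>('.diff-file').forEach((file, index) => {
  if (index < MAX_EXPANDED_FILES) {
    observer.observe(file); // lazily render when near the viewport
  } else {
    file.textContent = 'File collapsed - expand to view'; // hard limit for huge MRs
  }
});
```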

Relevant notes

  • In the tests, it seems like some runs have the File Tree open. Could there be a bug in the product making this non-deterministic? 🤔