Improves performance of the runners list with a large number of jobs
What does this MR do and why?
Related to #425767 (closed)
This change aims to improve the performance of runner lists that contain a large number of jobs. Because fetching each job count increases the time it takes to load a runner, we split this field into a separate request.
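As a rough illustration of the split (field and query names here are illustrative, not necessarily GitLab's exact schema), the list query stops asking for the expensive count, which is fetched per runner in a lightweight follow-up query:

```javascript
// Hypothetical sketch: the runners list query no longer includes the
// expensive jobCount field...
const RUNNERS_QUERY = `
  query getRunners {
    runners {
      nodes { id description status }
    }
  }
`;

// ...which is instead requested separately per runner, so the list can
// render before the counts arrive.
const RUNNER_JOB_COUNT_QUERY = `
  query getRunnerJobCount($id: CiRunnerID!) {
    runner(id: $id) { id jobCount }
  }
`;
```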
Additionally, this change batches queries by a "batchKey", so the client can opt in to handling the job count queries in batches and limit the number of outgoing API requests from the browser.
Changelog: changed
Technical Details
In !132433 (merged) and !130911 (merged) we removed the `isSingleRequest` option so that all our queries are fetched in single requests; `isSingleRequest` made batching opt-out.
In this MR, I propose adding a `batchKey` option to make batching opt-in, as this will allow us to optimize loading times for tables and lists that face similar N+1 situations: for example, an issue list with a large number of comments, or an environment list with a large number of deployments.
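A minimal sketch of the opt-in grouping behavior (assumed names, not GitLab's actual implementation): operations that share the same `batchKey` are collected into one outgoing request of at most `batchMax` operations, while operations without a key keep the current one-request-per-query behavior.

```javascript
// Group GraphQL operations into outgoing batches by an opt-in batchKey.
// Operations without a batchKey are sent individually (opt-in semantics).
function groupByBatchKey(operations, batchMax = 10) {
  const batches = new Map();
  for (const op of operations) {
    const key = op.batchKey ?? null;
    if (key === null) {
      // No batchKey: this operation gets its own request.
      batches.set(Symbol(), [op]);
      continue;
    }
    const group = batches.get(key) ?? [];
    group.push(op);
    batches.set(key, group);
  }
  // Split each keyed group into chunks of at most batchMax operations.
  return [...batches.values()].flatMap((group) =>
    Array.from({ length: Math.ceil(group.length / batchMax) }, (_, i) =>
      group.slice(i * batchMax, (i + 1) * batchMax),
    ),
  );
}
```

In an Apollo-based client this kind of grouping is what a batching HTTP link performs; the point of the sketch is only the keying and chunking logic.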
Screenshots or screen recordings
Note: The default `batchMax` value is 10, so our `jobCount` requests will be grouped into more than one batch per page, which makes sense as this data can be split across multiple batches with no consequences.
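To make the note above concrete (the page size of 20 is an assumption for illustration): with the default `batchMax` of 10, a page of 20 runners produces the job count queries in 2 batched requests instead of 20 individual ones.

```javascript
// Assumed page size for illustration; batchMax 10 is the library default
// mentioned above.
const batchMax = 10;
const runnersPerPage = 20;

// Number of outgoing HTTP requests for the jobCount queries on one page.
const outgoingRequests = Math.ceil(runnersPerPage / batchMax);
```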
| request for runners | request for runner jobs count |
|---|---|
How to set up and validate locally
Numbered steps to set up and validate the change are strongly suggested.
MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.