Improve performance of Issues List with Search API under load into the next tier
The Issues List API is significantly slower when the `search` parameter is specified than when it is omitted:
Results summary
* Environment: 10k
* Environment Version: 13.11.0-pre `05a6be0c411`
* Option: 60s_200rps
* Date: 2021-03-24
* Run Time: 1m 5.36s (Start: 18:59:35 UTC, End: 19:00:40 UTC)
* GPT Version: v2.6.1
| NAME | RPS | RPS RESULT | TTFB AVG | TTFB P90 | REQ STATUS | RESULT |
| ----------------------------- | ----- | ------------------- | --------- | --------------------- | -------------- | ------ |
| api_v4_projects_issues_search | 200/s | 33.55/s (>24.00/s)  | 5416.45ms | 10184.66ms (<11000ms) | 100.00% (>99%) | Passed |
| api_v4_projects_issues        | 200/s | 195.41/s (>96.00/s) | 303.10ms  | 362.84ms (<2000ms)    | 100.00% (>99%) | Passed |
The testing was done as standard on our 10k Reference Architecture test environment at 200 RPS. The project under test is a copy of gitlabhq (tarball can be found here), which has around 6,696 issues.
The search term is randomly generated during the test from this array of words.
Looking at the environment metrics, CPU usage on the PostgreSQL node is being maxed out:

Steps to reproduce
- Check out the Performance Tool
- Run the specific test with the `run-k6` command.
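The steps above can be sketched as the following command sequence. This is a hedged sketch, not verified usage: the repository URL, flag names, and file names are assumptions; the environment, option, and test names are taken from the results summary above. Check the Performance Tool's own documentation for the exact invocation.

```shell
# Check out the GitLab Performance Tool (GPT) -- URL assumed.
git clone https://gitlab.com/gitlab-org/quality/performance.git
cd performance

# Run only the slow endpoint's test against the 10k environment at
# 60s_200rps (names from the results summary; flag names assumed).
./bin/run-k6 \
  --environment 10k.json \
  --options 60s_200rps.json \
  --tests api_v4_projects_issues_search.js
```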
What is the current bug behavior?
The results above show that the Issues List API with the search parameter has a TTFB P90 of 10184.66ms.
What is the expected correct behavior?
As per our performance targets, this endpoint's P90 is above the first tier target of 9000ms, which is severity1. The task is to improve the endpoint's performance into the next tier.