Explore caching per VU on MR page
The team is planning to enable the `disabled_mr_discussions_redis_cache` FF in gitlab-org/gitlab#368366 (closed). As seen in gitlab-org/gitlab#368366 (comment 1059596857), performance results degraded significantly. The problem is that GPT doesn't emulate what a browser does with caching. This issue is to explore whether an `etag` header can be added to the `web_project_merge_request` test to emulate that caching.
Thanks @patrickbajao for the details on how it works in #524 (comment 1108446379):
An example scenario:

- User views the first page of discussions (`discussions.json?per_page=20`).
- User views the second page of discussions (`discussions.json?per_page=30`).

When the user views the first page, the response includes an `etag` header. When the user views the first page again, the `etag` from the first request should be passed as `If-None-Match` on the repeat request. This tells the backend it can serve the same page with the same etag. The `etag` from the first page request shouldn't be passed to the second page request, as they're different pages.

Browsers do this automatically, which is why you can see `If-None-Match` in the request headers: the browser takes the `etag` it received for a URL and sends it as `If-None-Match` on the next request to that same URL.

One way to replicate this behavior is to open an MR page with a number of discussions (more than 20) in a browser, then refresh the page (not a hard refresh, as that clears the cache). While monitoring the network via dev tools, you can see that on a fresh page load the first page request returns an `etag` response header. On page refresh, the request for the same URL carries the previous `etag` as `If-None-Match`, and the response status is 304 instead of 200.