For some reason the realtime_changes endpoint was considered NOT_CHANGED and continued returning the old description. This caused the new description to be replaced seconds after loading the page.
After some time, it looks like realtime_changes fixed itself:
I don't see changes in the backend code that could've caused this. I can't rule out a saturation issue with a downstream service, such as Redis, but as far as I can see the /realtime_changes endpoint loads the issuable on each request and returns its current state as stored in the database.
This is strange though because it implies the update hadn't been persisted to the database yet 🤔
@donaldcook are you aware of any changes on the FE?
@johnhope None that I am aware of. We did some work on the description component around tasks, but that was not merged in until the 18th (!81294 (merged)), so don't think that is it. The underlying Polling service hasn't been touched in over a year.
> For some reason the realtime_changes endpoint was considered NOT_CHANGED and continued returning the old description.
This is just how our ETag-based polling works. When the backend returns a 304, it actually returns an empty response body. But with 304s, browsers treat that as "use the previous response".
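For illustration, here is a minimal TypeScript sketch of ETag-based polling that tracks the ETag and last body by hand (not the actual GitLab Poll service): the 304 carries no body, so the client has nothing new to render and keeps showing whatever it cached last.

```typescript
// Hypothetical sketch of ETag-based polling (not GitLab's actual Poll service).
// The client remembers the last ETag and body; a 304 carries no body, so the
// previously cached body is what stays on screen.

interface PollState {
  etag: string | null;
  body: string | null; // last rendered description payload
}

async function pollRealtimeChanges(url: string, state: PollState): Promise<PollState> {
  const headers: Record<string, string> = {};
  if (state.etag) {
    headers['If-None-Match'] = state.etag;
  }

  const response = await fetch(url, { headers });

  if (response.status === 304) {
    // "Not changed": nothing new to render, so keep the cached state.
    return state;
  }

  // 200: render the fresh body and remember the new ETag for the next poll.
  return {
    etag: response.headers.get('ETag'),
    body: await response.text(),
  };
}
```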
I think it was @mikegreiling that reported this happening before.
I think there are many edge cases where the response to a poll would be the old description but the latest ETag. An example would be DB replication lag. It could also happen if a description update goes through but the ETag update step fails.
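To make those edge cases concrete, here is a hedged sketch of an update path where the ETag bump is a separate step after the DB write; the interfaces and cache key format are assumptions for illustration, not GitLab's actual code.

```typescript
// Illustrative sketch only; the interfaces and cache key are assumptions,
// not GitLab's actual update path.

interface Db {
  writeDescription(id: number, description: string): Promise<void>;
}

interface EtagCache {
  touch(key: string): Promise<void>;
}

async function updateDescription(db: Db, cache: EtagCache, id: number, description: string) {
  // 1. Persist the new description (to the primary, in a replicated setup).
  await db.writeDescription(id, description);

  // 2. Bump the ETag so the next poll gets a 200 with the new body.
  //    If this step fails, polls keep matching the old ETag, the endpoint keeps
  //    answering 304, and clients never re-render the new description.
  await cache.touch(`issuable/${id}`);
}

// Even when both steps succeed, a poll served by a lagging read replica can
// return the old description together with the freshly bumped ETag, which the
// client then treats as "latest".
```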
Unfortunately, this does not fix itself on the next poll and actually breaks things even after a refresh, as described in this issue. The workaround for this would be to clear the browser cache or click "Disable cache" in the inspector.
If we can move these polling endpoints to use subscriptions, we won't have these types of problems.
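For context, a subscription-based flow might look roughly like the sketch below (using graphql-ws); the WebSocket endpoint, subscription name, and argument are assumptions for illustration rather than the actual GitLab schema or transport. The server pushes the new description when it changes, so there is no conditional-request/ETag layer that can hand back a stale body.

```typescript
// Illustrative only: the endpoint, subscription name, and argument are
// assumptions for this sketch, not necessarily GitLab's actual schema.
import { createClient } from 'graphql-ws';

interface DescriptionPayload {
  issuableDescriptionUpdated?: { descriptionHtml?: string } | null;
}

const client = createClient({ url: 'wss://gitlab.example.com/-/graphql-ws' });

function renderDescription(html?: string): void {
  const el = document.querySelector('.js-issuable-description');
  if (el && html !== undefined) {
    el.innerHTML = html;
  }
}

client.subscribe(
  {
    query: `
      subscription descriptionUpdated($issuableId: ID!) {
        issuableDescriptionUpdated(issuableId: $issuableId) {
          descriptionHtml
        }
      }
    `,
    variables: { issuableId: 'gid://gitlab/Issue/123' },
  },
  {
    // The server pushes the new description when it changes; there is no
    // 304 / cached-body path that can leave a stale description on screen.
    next: (result) => {
      const data = result.data as DescriptionPayload | null | undefined;
      renderDescription(data?.issuableDescriptionUpdated?.descriptionHtml);
    },
    error: (err) => console.error('subscription error', err),
    complete: () => {},
  },
);
```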
We're likely to continue to have problems during periods of DB pressure but I think the time spent building mitigations to that would be better spent on #300208.