Autocomplete is not performant
Bandwidth and page size
Right now in this project we're downloading a 600KB file (and growing!) on every page with an input field. This is not acceptable. It can't be cached for any significant period of time because issues are constantly being closed/opened, MRs are being opened/merged/closed, and users are joining/leaving projects.
This uses people's bandwidth and probably puts a heavy load on the backend.
Because this file is loaded on every page, we've set arbitrary limits on what is autocomplete-able:
- Merge Requests: No merged/closed MRs.
- Issues: No closed issues.
- Users: Only users in the project; we've had multiple requests for instance-wide autocomplete of usernames.
Because MRs and Issues frequently get opened, merged, and closed, including this data makes the rest of the (relatively static) data uncacheable. This can be fixed by splitting the
autocomplete_sources.json file into parts (e.g. as I attempted to do with the entirely static emoji data in !4329 (closed)), but that's a fairly temporary solution.
We also can't reuse this data across users because of confidential issues.
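To make the splitting idea concrete, here's a minimal sketch (all endpoint paths, helper names, and cache lifetimes below are hypothetical, not GitLab's actual routes): each autocomplete type gets its own URL so its cache can be busted independently, and confidential issues get a per-user, uncacheable endpoint so the public data can be shared freely.

```typescript
// Sketch: one endpoint per autocomplete source, so each can carry its own
// cache policy. All paths, names, and lifetimes here are hypothetical.
type Source = 'emojis' | 'labels' | 'milestones' | 'members' | 'issues' | 'confidential_issues';

// Suggested max-age per source, in seconds: static data caches long,
// fast-changing data caches briefly, confidential data not at all.
const maxAge: Record<Source, number> = {
  emojis: 86400,           // entirely static
  labels: 3600,
  milestones: 3600,
  members: 900,
  issues: 60,              // constant open/close churn
  confidential_issues: 0,  // per-user, never shared-cached
};

function sourceUrl(projectPath: string, source: Source): string {
  return `/${projectPath}/autocomplete_sources/${source}.json`;
}

function cacheControl(source: Source): string {
  const age = maxAge[source];
  return age === 0 ? 'private, no-store' : `public, max-age=${age}`;
}
```

The point of the split is that a closed issue only invalidates the small `issues.json`, not the labels, milestones, and emoji that hardly ever change.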
There are two main ideas I have for solving these problems:

1. Have autocomplete load data dynamically when the user requests it. For example, when the user types `@`, fetch the matching usernames at that moment instead of shipping the whole list up front.
2. Split each autocomplete type into a separate JSON file so they can be cached and the cache can be busted independently (for example, labels and milestones would likely change less often than users, which would change less often than Issues/MRs). We could also serve confidential issues separately from public issues, so the public issues can be cached fully without worrying about user permissions.
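The first idea might look something like this on the client (a rough sketch with hypothetical endpoint shapes and names, not GitLab's actual API): map each trigger character to a source, and only fetch that source, filtered server-side by the current query, once the user actually types the trigger.

```typescript
// Sketch of on-demand autocomplete: nothing is downloaded up front; each
// trigger character fetches only its own, server-filtered results.
// Endpoint shape and all names are hypothetical.
const triggers: Record<string, string> = {
  '@': 'members',
  '#': 'issues',
  '!': 'merge_requests',
  '~': 'labels',
  '%': 'milestones',
};

// Return which source a trigger character should query, or null if none.
function sourceForTrigger(char: string): string | null {
  return triggers[char] ?? null;
}

// Debounce so fast typing causes one request, not one per keystroke.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

async function fetchSuggestions(projectPath: string, trigger: string, query: string) {
  const source = sourceForTrigger(trigger);
  if (!source) return [];
  const url = `/${projectPath}/autocomplete/${source}?search=${encodeURIComponent(query)}`;
  const res = await fetch(url);
  return res.json();
}
```

With this shape the server can also enforce permissions per request, which sidesteps the confidential-issues caching problem entirely.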
Personally I prefer the first approach, although it'd be more difficult to implement. We shouldn't ever be downloading that much data even if it's cached heavily; the second approach is essentially just putting a band-aid on the problem. I also don't think removing autocomplete entirely is on the table, given how useful it is.
We need to fix this: it doesn't scale, and it forces users to download huge files on every page load.