Modify Redis caching strategies to reduce maxmemory events on redis-cache instances
Redis-cache experiences chronic latency events that correlate with bursts of key evictions once the instances have hit maxmemory. [The scalability team has investigated the problem](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601) and has identified next steps. The short-term fix is to modify the caching strategies through a number of methods (lowering TTLs, switching to client-side caching, and others as appropriate). This will extend the period between eviction events, should [lessen the impact](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_1017758435) on the stage groups' error budgets, and will allow the scalability team time [to shard a subset of this data to a new cluster](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1821). This epic will track the caching changes required to reduce the memory usage of the Redis instances and thus improve the error budgets for the stage groups.
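To illustrate the "lower TTL" lever, here is a minimal in-process sketch (hypothetical code, not GitLab's actual caching layer) of a TTL-bound cache. It stands in for Redis `SET key value EX ttl`: entries whose TTL has elapsed are dropped on the next prune, so a shorter default TTL keeps the resident key set smaller and delays the point where maxmemory is reached.

```ruby
# Hypothetical stand-in for a Redis-backed cache with per-key TTLs.
# Lowering default_ttl makes entries expire sooner, so prune! reclaims
# memory earlier and the store grows more slowly between evictions.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(default_ttl:)
    @default_ttl = default_ttl
    @store = {}
  end

  # Return the cached value if still fresh; otherwise recompute via the
  # block and store it with the given TTL (analogous to SET ... EX ttl).
  def fetch(key, ttl: @default_ttl)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > now

    value = yield
    @store[key] = Entry.new(value, now + ttl)
    value
  end

  # Drop expired entries (Redis does this lazily and via active expiry).
  def prune!
    @store.delete_if { |_key, entry| entry.expires_at <= now }
  end

  def size
    @store.size
  end

  private

  def now
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end
```

With a lower `default_ttl`, the same write rate produces a smaller steady-state store, which is the mechanism by which shorter TTLs push out the next maxmemory event.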