Modify Redis caching strategies to reduce maxmemory events on redis-cache instances

The redis-cache instances experience chronic latency events that correlate with bursts of key evictions once the instances hit maxmemory. The Scalability team has investigated the problem and identified next steps.

The short-term fix is to modify the caching strategies through a number of methods (lowering TTLs, switching to client-side caching, and others as appropriate). This will extend the time between eviction events, which should lessen the impact on the stage groups' error budgets and give the Scalability team time to shard a subset of this data to a new cluster.
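To illustrate why lowering TTLs buys headroom: under a roughly steady write rate, the number of live keys at steady state is approximately the write rate multiplied by the TTL, so halving the TTL roughly halves resident memory and pushes back the point at which an instance hits maxmemory. A minimal sketch of that arithmetic (the write rate and TTL values are hypothetical, not measurements from redis-cache):

```python
def steady_state_live_keys(write_rate_per_s: float, ttl_s: float) -> float:
    """Approximate live-key count when keys are written at a steady rate
    and each key expires after ttl_s seconds."""
    return write_rate_per_s * ttl_s

# Hypothetical workload: 5,000 cache writes/s.
before = steady_state_live_keys(5_000, ttl_s=3_600)  # 1 h TTL  -> 18,000,000 keys
after = steady_state_live_keys(5_000, ttl_s=900)     # 15 m TTL ->  4,500,000 keys

print(before, after)
```

This is only a first-order model (it ignores variable value sizes and non-uniform write rates), but it shows the direction of the effect: shorter TTLs bound resident data proportionally, delaying maxmemory-driven eviction bursts.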

This epic will track the caching changes required to reduce the memory usage of the Redis instances and thus improve the error budgets for the stage groups.