Move free user cap caching strategy to `enforce_cap?`
What does this MR do and why?
- Move the request cache from only being on the `over_limit?` method to the more global `enforce_cap?` method. This lets us leverage caching where it counts (`enforce_cap?` is called in more places and by other things) without adding more complexity by introducing yet another caching concept into `over_limit?`. Now `over_limit?`'s most costly items are all covered by caching and it operates as an almost pure collection of query methods.
- Also, with the addition of checking the storage size limit (which has some caching as well) in !107000 (merged), we ensure that check is guarded per request in `enforce_cap?` too.
- Change from `over_limit?` not caching by default to `enforce_cap?` caching by default. This reduces the mental overhead for developers of figuring out whether they want the cached version and getting it wrong. Currently the only caller that skips the cache is the method that fetches the real result when it isn't in the cache yet.
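A minimal sketch of the shape of the change, with hypothetical class and method names (a plain `Hash` stands in for `Gitlab::SafeRequestStore`; the limit and preconditions are illustrative, not the real GitLab code):

```ruby
# Hypothetical sketch: request-scoped caching moved up from over_limit?
# to enforce_cap?. REQUEST_STORE stands in for Gitlab::SafeRequestStore.
REQUEST_STORE = {}

class Enforcement
  FREE_USER_LIMIT = 5 # illustrative limit

  def initialize(namespace)
    @namespace = namespace
  end

  # Cached by default now; callers needing a fresh answer pass cache: false.
  def enforce_cap?(cache: true)
    return uncached_enforce_cap? unless cache

    key = [:enforce_cap, @namespace[:id]]
    return REQUEST_STORE[key] if REQUEST_STORE.key?(key)

    REQUEST_STORE[key] = uncached_enforce_cap?
  end

  # over_limit? is now an almost pure query method: no caching concept here.
  def over_limit?
    @namespace[:users_count] > FREE_USER_LIMIT
  end

  private

  def uncached_enforce_cap?
    enforceable? && over_limit?
  end

  def enforceable?
    true # placeholder for the real enforcement preconditions
  end
end
```

Within one request, repeated `enforce_cap?` calls resolve from the store, while `over_limit?` stays a plain predicate that callers can also use uncached.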
Analysis
When adding all the benchmarking together and comparing caching approaches at the `over_limit?` level (no cache vs full method cache vs the proposed `enforce_cap?` cache):
results
```
--> Benchmarking over limit with request caching and without
Calculating -------------------------------------
       without cache   216.000 i/100ms
without cache and database update
                       234.000 i/100ms
with full method cache
                         2.026k i/100ms
with enforce cap cache
                       705.000 i/100ms
with enforce cap cache and database update
                       884.000 i/100ms
-------------------------------------------------
       without cache    2.168k (± 6.7%) i/s -  10.800k
without cache and database update
                        2.326k (± 7.3%) i/s -  11.700k
with full method cache
                       20.364k (± 5.6%) i/s - 103.326k
with enforce cap cache
                        7.013k (± 6.0%) i/s -  35.250k
with enforce cap cache and database update
                        8.895k (± 6.6%) i/s -  45.084k

Comparison:
              with full method cache:  20363.5 i/s
with enforce cap cache and database update:   8894.9 i/s - 2.29x slower
              with enforce cap cache:   7013.2 i/s - 2.90x slower
   without cache and database update:   2325.8 i/s - 8.76x slower
                       without cache:   2168.4 i/s - 9.39x slower
```
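As a sanity check on the comparison above, each "x slower" figure is just the fastest variant's throughput divided by that variant's throughput; the reported i/s numbers reproduce them:

```ruby
# Recompute the "x slower" ratios from the reported iterations/second.
results = {
  "with full method cache"                     => 20_363.5,
  "with enforce cap cache and database update" => 8_894.9,
  "with enforce cap cache"                     => 7_013.2,
  "without cache and database update"          => 2_325.8,
  "without cache"                              => 2_168.4
}

fastest = results.values.max

results.each do |label, ips|
  puts format("%-45s %.2fx", label, fastest / ips)
end
# "without cache" comes out at 9.39x, matching the benchmark output
```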
Metrics in isolation only mean so much, and I don't think the cost is that great in real-world use of these methods. However, we preserve the largest chunk of the savings when we move to `enforce_cap?` caching, as seen below by the 6x saving when using the request cache:
results
```
--> Benchmarking enforce_cap with request caching and without
Calculating -------------------------------------
       without cache   298.000 i/100ms
          with cache     1.878k i/100ms
-------------------------------------------------
       without cache    3.157k (± 6.2%) i/s - 15.794k
          with cache   19.062k (± 5.3%) i/s - 95.778k

Comparison:
          with cache:  19061.6 i/s
       without cache:   3156.8 i/s - 6.04x slower
```
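The ~6x figure comes from skipping repeated query work within a single request. A toy illustration (pure Ruby, not GitLab code) counting how many times the expensive lookup actually runs:

```ruby
# Toy illustration: with request caching, enforce_cap? only executes its
# expensive queries once per request, no matter how often it is called.
class CapCheck
  attr_reader :query_count

  def initialize
    @query_count = 0
    @request_store = {}
  end

  def enforce_cap?(cache: true)
    return expensive_queries unless cache

    @request_store.fetch(:enforce_cap) do
      @request_store[:enforce_cap] = expensive_queries
    end
  end

  private

  # Stands in for the user-count / storage-size queries.
  def expensive_queries
    @query_count += 1
    true
  end
end

cached = CapCheck.new
10.times { cached.enforce_cap? }
cached.query_count            # 1 with caching

uncached = CapCheck.new
10.times { uncached.enforce_cap?(cache: false) }
uncached.query_count          # 10 without caching
```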
Finally, we'll compare updating the database in `over_limit?` versus not, to show that the cost is negligible, since it is also partially covered by the Rails cache and the lookup cost is not high:
results
```
--> Benchmarking over limit with request caching and without
Calculating -------------------------------------
with database update   668.000 i/100ms
without database update
                       836.000 i/100ms
-------------------------------------------------
with database update    7.031k (± 7.0%) i/s - 35.404k
without database update
                        8.714k (± 5.7%) i/s - 44.308k

Comparison:
without database update:   8714.2 i/s
   with database update:   7030.8 i/s - 1.24x slower
```
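One plausible shape for why the write is "partially covered by Rails cache" (a hypothetical sketch, not the real GitLab code; a `Hash` stands in for `Rails.cache`): the database update is skipped whenever the cached value already matches the newly computed one, so repeated identical results cost only a cache lookup.

```ruby
# Hypothetical sketch: guard the database write behind a cache comparison
# so repeated identical over-limit results don't re-issue the UPDATE.
class OverLimitRecorder
  attr_reader :db_writes

  def initialize
    @cache = {}    # stands in for Rails.cache
    @db_writes = 0
  end

  def record(namespace_id, over_limit)
    # Skip the write if the cached value already matches.
    return if @cache[namespace_id] == over_limit

    @cache[namespace_id] = over_limit
    update_database(namespace_id, over_limit)
  end

  private

  def update_database(_namespace_id, _over_limit)
    @db_writes += 1  # stands in for an UPDATE statement
  end
end

recorder = OverLimitRecorder.new
5.times { recorder.record(42, true) }
recorder.db_writes  # 1: only the first call writes; the value then flips below
recorder.record(42, false)
recorder.db_writes  # 2: a changed value triggers one more write
```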
MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- [x] I have evaluated the MR acceptance checklist for this MR.
Related to #383264