Support fast and dynamic event count statistics

Desired use cases

  • Both error and transaction events can be quickly filtered by a selected time range.
  • A 2-core, 8 GB server can handle 20+ million events per month. There is some flexibility here as long as it can scale reasonably.
  • Only Redis and PostgreSQL may be used.
  • It must be fast at both event ingest and result queries, where "fast" means measurably faster than the prior behavior on a reproducible benchmark.
  • Event ingest needs to respond quickly; this is important for the PHP SDK, which makes synchronous event requests.

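The quick-ingest requirement above suggests deferring heavy processing: the endpoint validates minimally, enqueues the raw payload, and responds before grouping or stats work happens. A minimal sketch, assuming a queue pattern where an in-memory deque stands in for a Redis list (LPUSH on ingest, BRPOP in a worker):

```python
# Sketch of deferred event processing so the ingest endpoint can return
# immediately. Assumption: the deque stands in for a Redis list written
# with LPUSH and drained by a worker with BRPOP; names here are illustrative.
from collections import deque
import json

queue: deque[str] = deque()  # stand-in for a Redis list


def ingest(event: dict) -> dict:
    """Validate minimally, enqueue, and respond right away."""
    queue.appendleft(json.dumps(event))  # Redis equivalent: LPUSH events <payload>
    return {"id": event["event_id"]}     # the synchronous SDK gets its response now


def work_one() -> dict:
    """Worker side: pop the oldest event and fully process it."""
    event = json.loads(queue.pop())      # Redis equivalent: BRPOP events
    # ...heavy processing (grouping, stats updates) would happen here...
    return event


resp = ingest({"event_id": "abc123", "message": "boom"})
print(resp)  # → {'id': 'abc123'}
```

This keeps the synchronous PHP SDK's request latency bounded by a single queue write rather than by full event processing.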
I'm not yet convinced that Redis is the only answer, but if it were, I would consider giving up on making Redis optional.

Implementation ideas

  • Store daily/hourly stats in a PostgreSQL table.
  • Store stats in Redis using INCR, being careful about excessive memory usage.
  • Break up the APIs so stats are fetched separately from basic issue data. This has limited benefit: a "priority" sort still needs to account for recent stats.
  • See whether materialized views are faster than the debounced index-update task.
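The first two ideas share the same shape: counters keyed by (issue, time bucket). A minimal sketch of hourly bucketing, assuming an in-memory dict stands in for the real backends; in production the increment would be a Redis HINCRBY (with EXPIRE to bound memory), and the range query would hit a PostgreSQL stats table kept current with INSERT ... ON CONFLICT upserts. All names below are illustrative:

```python
# Hourly-bucket event counters. Assumption: the dict below stands in for
# Redis hashes (HINCRBY per event, EXPIRE per bucket key) flushed to a
# PostgreSQL table queried with SUM over a bucket range.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

BUCKET = timedelta(hours=1)


def bucket_start(ts: datetime) -> datetime:
    """Truncate a timestamp to the start of its hourly bucket."""
    return ts.replace(minute=0, second=0, microsecond=0)


class HourlyStats:
    """In-memory stand-in for INCR-based hourly counters."""

    def __init__(self) -> None:
        # (issue_id, bucket_start) -> event count
        self._counts: dict[tuple[int, datetime], int] = defaultdict(int)

    def record(self, issue_id: int, ts: datetime) -> None:
        # Redis equivalent: HINCRBY stats:<bucket> <issue_id> 1
        self._counts[(issue_id, bucket_start(ts))] += 1

    def count(self, issue_id: int, start: datetime, end: datetime) -> int:
        # SQL equivalent: SELECT SUM(count) FROM event_stats
        #   WHERE issue_id = %s AND bucket >= %s AND bucket < %s
        return sum(
            c
            for (iid, b), c in self._counts.items()
            if iid == issue_id and start <= b < end
        )


stats = HourlyStats()
now = datetime(2021, 3, 1, 12, 30, tzinfo=timezone.utc)
stats.record(1, now)
stats.record(1, now + timedelta(minutes=10))
stats.record(1, now - timedelta(hours=2))
print(stats.count(1, now - timedelta(hours=1), now + timedelta(hours=1)))  # → 2
```

Hourly buckets keep cardinality bounded (issues × hours rather than one row per event), which is what makes both fast time-range filtering and the 20M-events-per-month target plausible on modest hardware.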