Table performance benchmarks
Following the work implemented in #873 (closed), all the benchmarks previously defined in #756 (closed) should be rechecked and completed with more use cases involving combinations, hierarchical codelists, metadata, and large sets of observations.
- the benchmarks should no longer be volatile tests
- the benchmarks should be either a manual step in the pipeline (like the e2e tests) or a regular step in visions if there is no timeout issue
- if there is a timeout issue, the benchmarks should at least be a script to run locally after a big feature
IDEAS:
- https://github.com/GoogleChrome/web-vitals#send-the-results-to-google-analytics -> create perf events and send them to GTM to track UX/perf
- track rendering time: both initial load AND interactions -> use afterFrame for interactions
- https://github.com/GoogleChrome/lighthouse-ci in timespan mode to investigate
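For the web-vitals -> GTM idea, the shape could be a small adapter that turns a web-vitals `Metric` into a `dataLayer` event. The wiring to `onCLS`/`onINP`/`onLCP` from the `web-vitals` package is browser-only, so it is shown in comments; the event and field names (`perf_metric`, etc.) are invented for the sketch and would need to match the real GTM container.

```javascript
// Adapter from a web-vitals Metric ({ name, value, id, rating, ... })
// to a GTM dataLayer event. Event/field names here are invented for
// the sketch, not an existing convention.
function toGtmEvent(metric) {
  return {
    event: 'perf_metric',         // hypothetical GTM event name
    metric_name: metric.name,     // e.g. 'LCP', 'CLS', 'INP'
    // CLS values are small floats; scale so GTM receives an integer.
    metric_value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
    metric_id: metric.id,         // web-vitals unique id, for deduplication
    metric_rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  };
}

// Browser wiring (sketch only):
//   import { onCLS, onINP, onLCP } from 'web-vitals';
//   const send = (m) => window.dataLayer.push(toGtmEvent(m));
//   onCLS(send); onINP(send); onLCP(send);

console.log(toGtmEvent({ name: 'LCP', value: 1234.5, id: 'v3-xyz', rating: 'good' }));
```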
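For interaction timing, the usual pattern is: take a timestamp in the event handler, trigger the update, then read the clock again in an after-frame callback so the measurement includes paint. The scheduler is injected below so the helper can run outside a browser; in the app it would presumably be the default export of the `afterframe` package, and `sortColumn` in the usage comment is a made-up interaction.

```javascript
// Measure interaction -> next-frame time. `schedule` is injectable so
// the helper is testable in Node; in the browser it would be the
// afterFrame function from the `afterframe` package (fires after paint).
function measureInteraction(interact, schedule, now = () => Date.now()) {
  return new Promise((resolve) => {
    const start = now();
    interact();                              // e.g. apply a filter, sort a column
    schedule(() => resolve(now() - start));  // read the clock after the frame
  });
}

// Browser usage (sketch; `sortColumn` is hypothetical):
//   import afterFrame from 'afterframe';
//   const ms = await measureInteraction(() => sortColumn('price'), afterFrame);

// Node demo, with setImmediate standing in for afterFrame:
measureInteraction(() => {}, setImmediate).then((ms) => console.log(`took ${ms}ms`));
```

The duration could then be pushed to GTM alongside the web-vitals events so load and interaction numbers land in the same tracking.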
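However the timespan report is produced, gating a pipeline step on it boils down to comparing the report's `audits[*].numericValue` entries against budgets. A minimal sketch, with placeholder budget numbers (not recommendations):

```javascript
// Check a Lighthouse report (the JSON `lhr` object) against numeric
// budgets, returning a list of human-readable failures.
function checkBudgets(lhr, budgets) {
  const failures = [];
  for (const [auditId, max] of Object.entries(budgets)) {
    const audit = lhr.audits[auditId];
    if (!audit || typeof audit.numericValue !== 'number') {
      failures.push(`${auditId}: missing from report`);
    } else if (audit.numericValue > max) {
      failures.push(`${auditId}: ${audit.numericValue} > budget ${max}`);
    }
  }
  return failures;
}

// Example against a stubbed report object:
const report = { audits: { 'total-blocking-time': { numericValue: 450 } } };
console.log(checkBudgets(report, { 'total-blocking-time': 300 }));
```

An empty failure list would let the step pass; a non-empty one would fail the manual pipeline step (or just print locally, in the script variant).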
Edited by Nicolas Briemant