ATF extracts for report
sXs as SDMX Booster
IN PROGRESS
Performance improvements of Data Explorer table rendering
tags: performance (UI), architecture (webapp)
- ratio between parsing time and rendering time
- pagination is not related to rendering time
- pagination is an additional way to browse data and may result in smaller tables
- table size is already configurable
- table features (blank lines, hierarchies, etc.) impact parsing time
- rendering time is impacted by the number of rendered elements as well as by UI features like floating section headers
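The parsing/rendering split above can be sketched with the browser's User Timing API. This is a minimal sketch, not the Data Explorer's actual code: `parseData` and `renderRows` are hypothetical stand-ins for the real SDMX parsing and table-rendering steps, but the mark/measure pattern is how the two phases could be timed independently.

```javascript
// Minimal sketch: measure parsing and rendering as separate phases.
// parseData/renderRows are hypothetical stand-ins for the real pipeline.
// `performance` is global in browsers and in Node >= 16.

function parseData(raw) {
  return raw.split(';').map(Number); // stand-in for SDMX parsing
}

function renderRows(values) {
  return values.map((v) => `<td>${v}</td>`).join(''); // stand-in for table rendering
}

function timedTablePipeline(raw) {
  performance.mark('parse-start');
  const values = parseData(raw);
  performance.mark('parse-end');
  performance.measure('parse', 'parse-start', 'parse-end');

  performance.mark('render-start');
  const html = renderRows(values);
  performance.mark('render-end');
  performance.measure('render', 'render-start', 'render-end');

  // Read the two durations back, so the parse/render ratio can be tracked.
  const parse = performance.getEntriesByName('parse').pop().duration;
  const render = performance.getEntriesByName('render').pop().duration;
  return { html, parse, render };
}
```

Measuring the phases separately is what makes the "ratio between parsing and rendering" observable at all; a single end-to-end timer cannot distinguish them.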
description
Live tracking system
tags: performance, logging
description
Several UI elements are critical from a performance perspective.
The most iconic is the table, which should render as fast as possible!
But there are a few concerns:
- do we all have the same performance expectations?
- do we have the same definition of "rendering" (requesting, parsing, rendering, or all of them)?
We already have automated tests in place to track table performance, but there are also concerns:
- the data (i.e. mocks) is not representative enough
- layouts, annotations and settings (flags, displays, combined, etc.) are minimalist
- it requires a lot of maintenance
- it slows down the devops process (tests are run several times to avoid edge-case failures)
We are not alone in facing these concerns, and there is a different approach to tracking performance: live monitoring.
The idea is to gather information from live apps instead of running tests before pushing releases.
The benefits of this approach address our concerns:
- measured performance can be compared if siscc members rely on the same tool
- requesting, parsing and rendering time can be segregated and measured independently
- time-consuming performance tests during devops are no longer required
- the fear of insufficient coverage is gone because real use cases are monitored
Mutualisation of data is key in order to compare performance expectations and to reach a siscc consensus on performance.
Sentry (https://sentry.io/) is a good candidate: it has a lot of useful features and is well adapted to React.
Sentry will also help monitor errors on the client side (we currently have no logs of what happens in the client browser!).
Support will be easier because developers will be able to work on errors with logs, without asking members to reproduce the error or attach logs to support tickets. In addition, Sentry has a session replay feature allowing developers to debug with visual context.
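As a rough illustration, error monitoring, performance tracing and session replay can all be enabled from a single init call. This is a configuration sketch, not a verified .Stat setup: the DSN and sample rates are placeholders, and the exact integration names depend on the Sentry SDK version (the names below are from the `@sentry/react` v8 API).

```javascript
// Configuration sketch (placeholder DSN and sample rates, @sentry/react v8 API).
import * as Sentry from '@sentry/react';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  integrations: [
    Sentry.browserTracingIntegration(), // performance spans (requests, navigation)
    Sentry.replayIntegration(),         // session replay for visual debugging
  ],
  tracesSampleRate: 0.2,         // sample 20% of transactions (to be tuned)
  replaysOnErrorSampleRate: 1.0, // always keep the replay when an error occurs
});
```

Sample rates are the main cost lever: they control how much live traffic is reported, which matters for a paid plan shared across siscc members.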
Last but not least, support tickets are difficult to address because they don't always come with a reproducible use case or meaningful logs. Sentry fixes these issues, but ordering, sorting and merging support tickets is still a tedious and time-consuming task.
Sentry, by reporting real errors, helps us to organize support more efficiently:
- developers don't always need the member to report an error
- reported errors are technical, so they can be automatically grouped by similarity
- siscc can define a generic approach, with exceptions, e.g. fix the most frequent errors first, or fix errors attached to particular members first
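The idea behind automatic grouping can be sketched as follows. Sentry does this internally via event fingerprints; the `normalizeMessage` heuristic below is our own illustrative assumption, not Sentry's algorithm: it strips volatile details (numbers, quoted values) so that errors with the same "shape" collapse into one group.

```javascript
// Illustrative sketch (not Sentry's actual grouping algorithm):
// normalize away volatile parts of a message, then group by the result.

function normalizeMessage(message) {
  return message
    .replace(/"[^"]*"|'[^']*'/g, '<str>') // quoted values: urls, dataflow ids...
    .replace(/\b\d+\b/g, '<num>');        // bare numbers: ids, counts, offsets...
}

function groupBySimilarity(messages) {
  const groups = new Map();
  for (const message of messages) {
    const key = normalizeMessage(message);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(message);
  }
  return groups; // key = normalized "shape", value = raw messages in that group
}
```

For example, "failed to load dataflow 42" and "failed to load dataflow 7" land in the same group, which is what makes "fix the most frequent errors first" a workable triage rule.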
Sentry is not a free tool ($25 per month) but, like GitLab, it can be included in the siscc offering and integrated into the .Stat Suite to get feedback from all organisations.