Load/performance testing and impact of merge request
Problem
One of the major hurdles to embracing Continuous Delivery is ensuring that your company has the confidence to proceed with deployments in an automated fashion. As companies mature on their journey to CD, they automate more and more of their SDLC and testing process. An important part of testing a change or release is ensuring that it has not introduced any performance regressions and that it can handle the expected load. This can take a few forms, such as a short-term stress test or a longer-term soak test on the backend, as well as analysis of client-side performance.
Performance testing is about more than just ensuring your code doesn't crash in production, however. How responsive your service is to customers can have a significant impact on conversion, usage, and the general perception of your brand. The best technology companies are fanatical about performance because it helps drive the bottom line.
Today GitLab offers analysis of a merge, but it is only marginally useful prior to production. This is because pre-production load is highly inconsistent, and therefore hard to compare, without an artificial way to generate it. We currently don't have an integrated solution to solve this, although customers can author their own via the CI/CD YML file.
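For illustration, a hand-authored job of this kind might look something like the sketch below. The choice of k6 as the load generator, its Docker image, the load-test.js script, and the exact flags are all assumptions for this example; any load-generation tool that runs headless in a container would slot into the same shape.

```yaml
# Illustrative sketch only: a hand-authored backend load test in .gitlab-ci.yml.
# The k6 image, the load-test.js script, and the flags are assumptions.
load_test:
  stage: test
  image:
    name: loadimpact/k6:latest
    entrypoint: [""]
  script:
    # Generate consistent, repeatable load against a test environment and
    # export a machine-readable summary for later inspection or comparison.
    - k6 run load-test.js --summary-export=load-results.json
  artifacts:
    paths:
      - load-results.json
```

Saving the summary as an artifact is what would later make it possible to keep the latest results for each branch and compare them across pipelines.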
Opportunities to help
GitLab has two broad ways that we can help our customers more easily performance test their applications.
- We can make it easier to gather meaningful performance data within their CI pipelines, prior to production
- We can capture the performance data, analyze it, and present it in an intelligent way to users
Integrated performance testing
To help address some of the gaps noted above, we have an opportunity to integrate with performance testing tools to make getting started easier. In general, these fall into two broad categories:
- Backend load testing (https://gitlab.com/gitlab-org/gitlab-ee/issues/3016): Generate consistent and repeatable load, and capture the quantitative results. From there we could enforce minimum thresholds at which the CI job fails, and store the latest copy of this data for each branch.
- Browser performance testing (https://gitlab.com/gitlab-org/gitlab-ee/issues/3046): Capture performance metrics as an end user would experience them, for example within a browser loading all JS, CSS, and dynamic web requests. This could include data like overall page size, time to establish a connection, time to first paint, and time to interactive. From a UX perspective, these can be some of the most critical metrics for keeping users happy.
We can take steps to make these easier to adopt, for example by automating the creation of the test scripts and the CI job YML.
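As a rough sketch of what such a generated job could look like for the browser case, the example below assumes sitespeed.io run via Docker-in-Docker as the metrics collector and a TARGET_URL variable pointing at a deployed review environment; the image, flags, and variable name are illustrative rather than a finalized design.

```yaml
# Illustrative sketch only: a browser performance job that could be generated.
# sitespeed.io, the Docker-in-Docker setup, and TARGET_URL are assumptions.
browser_performance:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    # Load the deployed page in a real browser and capture metrics such as
    # overall page size, time to first paint, and time to interactive.
    - mkdir -p sitespeed-results
    - docker run --shm-size=1g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io --outputFolder sitespeed-results "$TARGET_URL"
  artifacts:
    paths:
      - sitespeed-results/
```

The report written to sitespeed-results/ could then be parsed to surface the key metrics directly on the merge request, as described in the analytics section below.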
Analytics
With a strong foundation of repeatable performance tests, we can then begin to analyze the results to make them more actionable and consumable.
- We can compare the performance difference between two branches, and display this difference to a reviewer before a change is merged. By displaying it directly on the MR, the reviewer is instantly aware of the impact the change has on the user experience and on server performance. (https://gitlab.com/gitlab-org/gitlab-ee/issues/3534)
- We can also compare the historical performance of a branch over time. If a company is focused on performance, is that borne out amid all of the other feature improvements and bug fixes? How significantly has performance changed over the past month or year?
Proposed solutions
- https://gitlab.com/gitlab-org/gitlab-ee/issues/3016: Integrated Load Testing
- https://gitlab.com/gitlab-org/gitlab-ee/issues/3046: Integrated Browser Performance Testing