[Meta] Integrated Performance Testing
One of the major hurdles to embracing Continuous Delivery is ensuring that an organization has the confidence to proceed with deployments in an automated fashion. As organizations mature on their journey to CD, they continue to automate more and more of their SDLC and testing process. An important part of testing a change or release is ensuring that there have been no performance regressions and that the release can handle the expected load. This can take a few forms, such as a short-term stress test or a longer-term soak test.
Performance testing is broader, however, than just ensuring your code doesn't crash in production. How responsive your service is to customers can have a significant impact on conversion, usage, and the general perception of your brand. The best technology companies are fanatical about performance because it helps drive the bottom line.
Today GitLab does not have an integrated solution in this area, but customers can author their own via the CI/CD YML file. Prometheus can monitor and detect problems, but it cannot actually generate load. In a pre-production environment there is typically little to no load, so monitoring results on their own are neither repeatable nor representative, and therefore not very valuable or actionable.
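For example, a customer might hand-author a load testing job in their CI/CD YML today. The sketch below assumes the k6 load testing tool and a hypothetical `load-test.js` scenario committed to the repository; the job name and stage are illustrative, not a GitLab convention:

```yaml
# Hypothetical hand-authored load test job.
# Assumes a k6 scenario script named load-test.js exists in the repo.
load_performance:
  stage: test
  image:
    name: grafana/k6
    entrypoint: [""]   # override the image entrypoint so `script` runs as shell commands
  script:
    # Run the scenario and export summary metrics as JSON
    - k6 run load-test.js --summary-export=load-performance.json
  artifacts:
    paths:
      - load-performance.json
```

Keeping the metrics as a pipeline artifact is what later makes comparison and trend analysis possible.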
Opportunities to help
GitLab has two broad ways that we can help our customers more easily performance test their applications.
- We can make it easier to integrate performance tests in their CI pipelines
- We can capture the performance data, analyze it, and present it in a concise way to users
Integrated performance testing
To help address some of the gaps noted above, we have an opportunity to integrate with performance testing tools to make getting started easier. In general these fall into a few broad categories:
- Load testing: Generate consistent and repeatable load pre-production, and capture the resulting metrics.
- Client-side testing: Capture performance metrics as an end user would experience them, for example within a browser loading all JS, CSS, and dynamic web requests.
We can take steps to make it easier to embrace these, for example by automating the creation of the test scripts and CI job YML.
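As an illustration of the client-side category, a browser-level job that GitLab could help generate might look like the following sketch. It assumes the sitespeed.io container image and a hypothetical staging URL; nothing here is an existing GitLab feature:

```yaml
# Hypothetical client-side performance job.
# Assumes a reachable staging environment at the URL below.
browser_performance:
  stage: test
  image:
    name: sitespeedio/sitespeed.io
    entrypoint: [""]   # override the image entrypoint so `script` runs as shell commands
  script:
    # Load the page as a real browser would, recording JS/CSS/request timings
    - sitespeed.io --outputFolder sitespeed-results "https://staging.example.com"
  artifacts:
    paths:
      - sitespeed-results/
```

Automating the generation of jobs like this, plus a starter test script, would lower the barrier to adopting both categories.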
With a strong foundation of repeatable performance tests, we can then begin to analyze the results to make them more actionable and consumable.
- We can compare the performance difference between two branches and display it to a reviewer before a change is merged. Surfacing this directly on the MR makes the reviewer instantly aware of the impact the change has on the user experience and on server performance.
- We can then also compare the historical performance of a branch over time. If a company is focused on performance, has that focus been borne out amid all of the other feature improvements and bug fixes? How significantly has performance changed over the past month or year?
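A minimal sketch of the branch comparison, assuming each pipeline exports a k6-style `load-performance.json` summary artifact and that a baseline copy from the target branch has already been fetched into `baseline/` (the fetch step is elided, and the JSON metric path is an assumption about the summary format):

```yaml
# Hypothetical comparison job: contrast this branch's p95 latency
# against a previously fetched baseline from the target branch.
compare_performance:
  stage: review
  image: alpine
  script:
    - apk add --no-cache jq
    - BASE=$(jq '.metrics.http_req_duration."p(95)"' baseline/load-performance.json)
    - HEAD=$(jq '.metrics.http_req_duration."p(95)"' load-performance.json)
    - echo "p95 request duration changed from ${BASE}ms to ${HEAD}ms"
```

In an integrated solution, GitLab itself would perform this diff and render it on the MR widget rather than leaving it to an echoed log line.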