FY23 Q1 - KR1: Establishing a repeatable usability benchmarking process
Background
Today, we don't have a view into how our existing experiences perform from a usability perspective. While we do measure usability via the System Usability Scale (SUS), that measure is at the system (aka 'product') level. This KR is designed to establish a process that allows stages to obtain a detailed view into how users perceive the usability of a given Job To Be Done (JTBD) within our stages, as they exist today.
The Problem
GitLab has been measuring the System Usability Scale (SUS) score at the product experience level for many quarters. While this is an effective approach to measuring the usability of a product, we have had challenges identifying what to address in order to improve the score. Our biggest problem with SUS is that, even though we have themes related to the SUS scores, we don't have a granular view into how our JTBDs perform within a given stage.
The Solution
A usability benchmarking process will be established that focuses on the following detailed metrics for a given task:
- Completion rate
- Time on task
- Customer Effort Score (CES)
- Error count
- Error severity
- UMUX Lite
- Grade / Overall score
The output from a usability benchmark study will give the stage a clear view into how the key tasks performed and, most importantly, recommendations to address any issues that were identified. It will also include calculated grades / scores. Once the issues have been addressed, retesting is done to understand how the measures changed as a result of the implemented fixes, which also yields a more accurate grade / overall score for a given JTBD.
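As one illustration of how the per-task measures could roll up into an overall score and grade, here is a minimal sketch. The normalization, equal weighting, and letter-grade cutoffs are placeholder assumptions for illustration only; the actual derivation and weighting will be defined and documented transparently as part of this KR.

```python
# Hypothetical roll-up of per-task benchmark measures into an overall score.
# The normalization, equal weighting, and grade cutoffs below are placeholder
# assumptions, not the weighting this KR will ultimately define.

def overall_score(completion_rate, time_on_task_s, target_time_s,
                  ces, error_count, umux_lite):
    """Combine per-task measures into a single 0-100 score (illustrative only).

    completion_rate: 0.0-1.0 share of participants who completed the task
    time_on_task_s:  median observed time on task, in seconds
    target_time_s:   expected/target time for the task, in seconds
    ces:             Customer Effort Score on a 1-7 scale (7 = least effort)
    error_count:     mean errors per participant
    umux_lite:       UMUX Lite score, already on a 0-100 scale
    """
    # Normalize each measure to a 0-100 range (placeholder transformations).
    completion = completion_rate * 100
    time_score = min(target_time_s / time_on_task_s, 1.0) * 100
    ces_score = (ces - 1) / 6 * 100
    error_score = max(0.0, 1.0 - error_count / 5) * 100  # capped at 5 errors
    components = [completion, time_score, ces_score, error_score, umux_lite]
    return sum(components) / len(components)  # equal weights, for illustration


def grade(score):
    """Map a 0-100 score to a letter grade (placeholder cutoffs)."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    return next((letter for cutoff, letter in cutoffs if score >= cutoff), "F")


if __name__ == "__main__":
    score = overall_score(completion_rate=0.8, time_on_task_s=95,
                          target_time_s=60, ces=5, error_count=1,
                          umux_lite=72.5)
    print(f"Overall score: {score:.1f} ({grade(score)})")
```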
Scope
The benchmark process will be highly detailed and not limited to a single stage. The intent is that this will be a repeatable process for any stage to adopt and implement.
A detailed handbook page will be created that justifies the approach and provides instructions. The necessary templates (e.g., Sheets, a final report sample) will also be created in the form of a 'kit' for others to utilize.
Requirements
- The process must be proven to accurately capture the measures listed above.
- The output must be clear to our stakeholders and target audience (Product, Design, UXR).
- Any score calculations must be transparent in how they were derived and weighted.
- Measures need to be well justified and correlated back to usability and/or SUS (see the UMUX Lite sketch after this list).
- A test environment solution must be in place for others to utilize when conducting a usability benchmark. It should mirror the experience and version of the latest SaaS release.
- Q1 will be used to pilot the process, with iterations in place by the conclusion of Q1.
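For example, UMUX Lite is one of the measures above with a published relationship to SUS, which supports the correlation requirement. Below is a minimal sketch of how its raw score could be calculated; the two 7-point items and the 0-100 rescaling follow the commonly published UMUX Lite formulation, while the variable names and sample responses are illustrative. A published regression adjustment also exists to put UMUX Lite on the SUS scale; only the raw rescaling is shown here.

```python
# Illustrative UMUX Lite calculation, assuming the commonly published two-item,
# 7-point form: "This system's capabilities meet my requirements" and
# "This system is easy to use" (1 = strongly disagree, 7 = strongly agree).

def umux_lite_raw(capabilities: int, ease_of_use: int) -> float:
    """Rescale the two item ratings to a 0-100 score."""
    for item in (capabilities, ease_of_use):
        if not 1 <= item <= 7:
            raise ValueError("UMUX Lite items must be rated 1-7")
    return (capabilities + ease_of_use - 2) / 12 * 100


# Mean across participants for one task (hypothetical responses).
responses = [(6, 5), (7, 6), (5, 4)]
scores = [umux_lite_raw(c, e) for c, e in responses]
print(f"Mean UMUX Lite: {sum(scores) / len(scores):.1f}")
```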
Steps
- 2022-01-24: @leducmills Conduct 2 pilot usability sessions
- 2022-01-31: @leducmills Land on a test environment solution for the pilot study
- 2022-01-31: @leducmills Deliver Google Doc draft on testing environment process with .com group
- 2022-02-01: @leducmills Begin testing sessions
- 2022-02-16: @leducmills Complete testing sessions
- 2022-02-17: @leducmills Deliver handbook page on the benchmarking process, detailing out the measures
- 2022-04-07: @leducmills Deliver handbook page outlining the cloud test environment solution with instructions for people to sign up and use it
- 2022-04-07: @leducmills Develop templates / materials for a benchmarking 'kit'
- 2022-04-12: @leducmills Conclude pilot study analysis and deliver report to UX and Create stage stakeholders
- 2022-04-15: @leducmills Submit iterations for @asmolinski2 and @laurenevans to review
- 2022-04-25: @leducmills Update handbook with iterated changes
- 2022-04-30: @leducmills Communicate out the usability benchmarking process to UX and Product, including release of benchmarking kit materials