FY23 Direction item for UXR: Establishing a repeatable usability benchmarking process
#### Background
Today, we don't have a view into how our *present-day* experiences are performing from a usability perspective. While we do measure usability via the System Usability Scale (SUS), that measure is at the system (aka 'product') level. This epic is designed to establish a repeatable process that allows stages to obtain a detailed view into how users perceive the usability of a given Job To Be Done (JTBD) within our stages, as they exist today. Additionally, there's a need to provide teams with specific, usability-related issues to address.
#### The Problem
GitLab has been measuring the System Usability Scale (SUS) score at the product experience level for many quarters now. While this is an effective approach to measuring the usability of a product, we have had some challenges when trying to identify what to address to ultimately improve the score. Our biggest problem with SUS is that, even though we have themes related to the SUS scores, we don't have a granular view into how our JTBDs are performing within given stages.
#### The Solution
A usability benchmark process will be established that focuses on the following metrics for a given task:
* Completion rate
* Time on task
* Customer Effort Score (CES)
* Error count
* Error severity
* UMUX Lite
* Grade / Overall score
The output from a usability benchmark study will give the stage a clear view into how the key tasks performed and, most importantly, recommendations to address any issues that were identified. Calculated grades / scores will also be included. Once the issues have been addressed, retesting is done to understand how the measures changed as a result of the implemented fixes. This also results in a more accurate grade / overall score for a given JTBD.
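As an illustrative sketch only (not part of the proposed process), the per-task metrics above could be summarized from raw session data roughly like this. The session records and field names are hypothetical; the UMUX-Lite conversion uses the standard 0–100 scale transform of its two 7-point items:

```python
from statistics import mean

def umux_lite(usefulness: int, ease: int) -> float:
    """Convert the two 7-point UMUX-Lite items to a 0-100 score."""
    return (usefulness + ease - 2) / 12 * 100

# Hypothetical per-participant results for one task.
sessions = [
    {"completed": True,  "seconds": 95,  "errors": 0, "usefulness": 6, "ease": 6},
    {"completed": True,  "seconds": 140, "errors": 2, "usefulness": 5, "ease": 4},
    {"completed": False, "seconds": 210, "errors": 3, "usefulness": 4, "ease": 3},
]

# Aggregate the detailed metrics across participants.
completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
time_on_task = mean(s["seconds"] for s in sessions)
error_count = mean(s["errors"] for s in sessions)
umux = mean(umux_lite(s["usefulness"], s["ease"]) for s in sessions)

print(f"completion rate: {completion_rate:.0%}")   # 67%
print(f"mean time on task: {time_on_task:.0f}s")   # 148s
print(f"mean errors: {error_count:.1f}")           # 1.7
print(f"UMUX-Lite: {umux:.1f}")                    # 61.1
```

How these aggregates roll up into a letter grade / overall score (and how CES and severity are weighted) is left to the benchmarking process itself.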
This is a substantial effort and will be part of the 2022 UXR Team Vision.
#### The Approach
This effort will be broken up by quarter. The end goal is a repeatable process for teams to conduct a usability benchmark. It's expected that the process will require iterations to fit efficiently within our working culture at GitLab.
- Q1: [Establish a repeatable usability benchmarking process](https://gitlab.com/gitlab-org/ux-research/-/issues/1755)
- Q2: Iterate; explore re-testing process
- Q3: TBD
- Q4: TBD