Category Maturity Scorecard - Plan:Optimize FY24-Q3 - Value Stream Management
- Dovetail project: https://gitlab.dovetailapp.com/projects/2kFHYohqdgV8wfTLqRjeZO/readme
- CMS scoring sheet: https://docs.google.com/spreadsheets/d/1FODFCIKjbDKjKFa6_YAhuoMKZTUQkuWm2n_7cAfa7qA/edit#gid=1157931099
- Previous score and scorecard: n/a
Walkthrough:
📺 https://youtu.be/3Q7XszS0TPs - Recommendations: Experience Recommendations - Plan::Optimize FY2... (&11238)
Category Maturity Scorecard Checklist
✅ Getting started (Week 29):
- Learn more about Category Maturity Scorecards. Review the Category Maturity Scorecard handbook page and follow the process as described. Reach out to the UX Researcher for your stage if you have questions.
- Create a new Dovetail project using the Category Maturity Scorecard Dovetail project template and add the link to the section above
- Create the test script (link)
- Create a copy of the CMS scoring sheet (link)
✅ Plan your research (Week 30):
- Choose your JTBDs
  - Work with your PM on choosing which JTBDs to focus on
- Define and recruit users
- Write scenarios and add them to your test script doc.
- Prepare test environment
- Document success/failure flows of your JTBD scenario(s)
✅ Post research (Week 31-32):
- Document the research data and insights in a Dovetail project using the Category Maturity Scorecard Dovetail project template.
- Document the results of each JTBD scenario using the Dovetail template.
- If the participant has not granted permission to share the recording publicly, ensure the sharing settings are set to GitLab-only.
- If needed, create a recommendation issue for these sessions.
- Once the research has concluded, update the issue description Outcome section with the maturity level. The outcome can be a downgrade, no change, or an upgrade in maturity. For example: The CM Scorecard research has concluded and we have increased the maturity for Dependency Scanning to Complete.
Outcome
- Overall score: C, 3.29
- Maturity: Viable
- SUS: 62.7-72.5
The maturity definition for Viable states:
Viable: Significant use at GitLab the company. CM Scorecard at least 3.14 for the job to be done (JTBD) when tested with internal users. No assessment of related jobs to be done. Suitable to replace the need for existing tools for new namespaces, projects, and environments.
We currently do not see "Significant use at GitLab the company", but there is an upward trend in usage based on Sisense.
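The SUS range in the Outcome above follows the standard System Usability Scale scoring: each participant answers ten 1-5 Likert items, odd (positively worded) items contribute `response - 1`, even (negatively worded) items contribute `5 - response`, and the sum is scaled by 2.5 to a 0-100 score. A minimal sketch of that calculation (the example responses are illustrative, not this study's data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from the ten
    Likert responses (1-5) of a single participant."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative participant (not study data):
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Per-participant scores are then typically averaged to produce a range like the 62.7-72.5 reported above.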
The following 5 themes emerged from this study (ordered by priority):
- Getting started: (https://gitlab.com/gitlab-org/gitlab/-/issues/422795)
  - Not all participants were sure they were the right persona for the study
  - Users didn't understand the differences between the default vs. custom template options
  - When creating a new value stream, users didn't quickly see the results of their actions, making them question whether they set it up correctly
- Terminology & mental models:
  - Path bar abbreviations for time are unclear (gitlab#422313 (closed))
  - Deployment frequency chart label: “Predicted values…” confusing (gitlab#423488 (closed))
  - Definitions of Stages and stage events may not align with users’ mental models (ISSUE)
  - Need better definitions around stage events, for example: Is “Backlog” considered a milestone? (ISSUE)
  - 📺 See highlight reel →
- Too much information causes confusion: (https://gitlab.com/gitlab-org/gitlab/-/issues/423492)
  - Lifecycle metrics do not seem relevant to the JTBD
  - Total time chart vs. times in the path bar: is the user looking for cumulative vs. granular data?
  - Understanding whether the user's intent is "optimize" vs. "discover"
  - 📺 See highlight reel →
- Trusting the data: (https://gitlab.com/gitlab-org/gitlab/-/issues/423616)
  - Is "backlog" considered a milestone?
  - How can I see the effects of re-planning in the middle of a milestone?
  - How are blocked issues factored in?
  - How are very old issues possibly distorting the metrics?
  - 📺 See highlight reel →
- Organizational processes and adoption: (https://gitlab.com/gitlab-org/gitlab/-/issues/423617)
  - Currently GitLab team members use other third-party analytics tools as the SSOT to track progress
  - The majority of the participants had never used VSA before because they didn't need to
  - How might we make it easier for GitLab team members to dogfood/adopt VSM?
  - 📺 See highlight reel →
Study summary
- 2 personas were tested
  - The CMS handbook instructions suggest speaking with 5 participants per persona
  - Since the maturity for Value Stream Management and DORA is Minimal, the scorecard was conducted with internal users
  - In total, 10 GitLab team members (5 PMs and 5 EMs) were spoken to
- 3 scenarios were tested
  - Scenarios 1 & 2 were completed by all 10 participants
  - Scenario 3 was specific to the DORA metrics and was only completed by the 5 EMs
- 0 / 10 of the participants spoken to use VSA or DORA on a regular basis
- Since there was only 1 scenario for the DORA metrics, we made the decision for this CMS to focus on Value Stream Analytics and exclude the DORA metrics results
  - Therefore, the overall score is a combination of results from both PMs and EMs
Other notes
- Additional validation from customer call - 2024.06.05 (internal)
Relevant links
See older description
- Category: Value Stream Management
- Previous maturity: Minimal (not validated)
- Revised maturity:
- Research data and insights:
- Recommendations: TBD
What’s this issue all about?
This is the research issue for the Category:Value Stream Management CM Scorecard.
What are the overarching goals for the research?
Move the category to Viable if scores aggregate to >3.14
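The goal above reduces to a simple threshold check against the Viable bar from the maturity definition. A minimal sketch (the threshold comes from the definition quoted earlier; the function name is illustrative):

```python
VIABLE_THRESHOLD = 3.14  # CM Scorecard bar for Viable, per the maturity definition

def meets_viable(overall_score):
    """Return True when the aggregated CMS score clears the Viable bar."""
    return overall_score > VIABLE_THRESHOLD

print(meets_viable(3.29))  # True: the observed overall score of 3.29 clears the bar
```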
Who is the target user of the feature?
- Dakota (Application Development Director)
- Erin (Application Development Executive)
- Delaney (Development Team Lead)
- Parker (Product Manager)
Edited by Libor Vanc