Package: Solution Validation based on Experience Baseline JTBD

What’s this issue all about?

We would like to understand the current experience of the Package product from the user's perspective. As a foundation, I'd like to evaluate a selection of JTBD used for the experience baseline, highlighting recent improvements for validation.

Important Dates

  • Dec 8 - Put in recruiting request issue to Emily
  • Dec 18-20 - Discussion guide draft (as JTBD will be good to go by 12/18)
  • Jan 2-3 - Discussion guide finalized
  • Jan 3 - Recruiting starts via emails to past participants
  • Jan 3-8 - Scheduling participants
  • Jan 6-10 - Sessions (if you schedule them on fewer days, you can start the analysis faster ;) )
  • Jan 12-17 - Data analysis and report

What questions are you trying to answer?

What is the current experience of using the Package UI with GitLab?

  • Can users successfully perform the selected JTBD tasks?
  • Where are users getting stuck or blocked?
  • Are there user expectations we're not meeting?

As an aside, I would also like to evaluate how the JTBD stand up to validation and how they can be refined through research.

What assumptions do you have?

  • Users will be able to accomplish most of the tasks, as they are reasonably straightforward.
  • The experience baselines contain specific assumptions I might want to test.

What decisions will you make based on the research findings?

  • What changes should be made to the JTBD to make the Experience Baseline more accurate
  • What changes or enhancements should be prioritized next for Package

What's the latest milestone at which the research will still be useful to you?

This will likely remain useful through 12.8 and beyond; this research is the Package team's first solution validation activity.

What is the study called in UsabilityHub?

TBD

Edited by Iain Camacho