The department OKR gitlab-com/www-gitlab-com#6201 (closed) intends to validate category maturity ratings for categories that moved to the next rating during Q4 FY20. No category in the Dev section moved to the next rating in Q4 FY20 (so none are included in the main department OKR); the idea here is instead to validate 2 categories that are currently Complete/Lovable (per our team chat).
Current categories for Dev that fit that description:
@mike_long I made the description a bit more complete. In any case, we should try to select the 2 categories based on some aspect (like reach or impact).
@mvanremmerden thoughts? I think this is what you were also suggesting in Slack.
No category in the Dev section is part of that department OKR, so the idea here is to split the Dev designers to validate 2 categories: one more mature (Complete/Lovable) and one less mature (Minimal).
In the department OKR we intentionally avoided Minimal, which makes sense to me, as in my opinion there is very little insight to be gained there. Maybe instead we could have a look at one Lovable group, and one that is supposed to gain maturity in the next quarter (e.g. Epics, Roadmaps, Web IDE, or Design Management).
Another dimension might be “business impact”. What if we asked for product leadership’s help in selecting two categories?
@pedroms @mvanremmerden By the end of this month we'll have a clear methodology for how to interview/observe participants and make sure we get the data we need for validation. @sarahj is currently working on this as part of the UX Research OKR.
I was talking with @npost about it and he has some ideas about how to facilitate a prioritization conversation in Mural so we can visualize the decision-making process and efficiently reach a shared understanding.
@npost @pedroms Could the two of you get that issue and Mural board rolling? I recommend we initially ask Eric Brinkman which two categories—from the Complete/Lovable categories listed in the issue description—we should focus on, and then loop in the relevant Dev section PMs after we've selected a couple of categories to research.
I made the first draft of the 2x2 we discussed. The idea is that a category that's high risk and low confidence would be a candidate for validation with end users via the Category Maturity Scorecard.
Makes sense to me. Only things we could consider adding/changing are...
- Some indicator of existing scorecard work being done before for each category in consideration
- The risk dimension could potentially be swapped out for strategic impact (or we could add strategic impact as an additional dimension in terms of size of sticky note S/M/L)
- Maybe some additional space for comments in the context of where a sticky note is placed on the 2x2
> Some indicator of existing scorecard work being done before for each category in consideration

I wonder if that's worth noting? It's my understanding that the existing scorecards were done by a GitLab team member, not with a user's participation.
> The risk dimension could potentially be swapped out for strategic impact (or we could add strategic impact as an additional dimension in terms of size of sticky note S/M/L)

I like that idea! I think strategic impact replacing risk keeps things simple.
> Maybe some additional space for comments in the context of where a sticky note is placed on the 2x2

Instead of commenting with Mural's comments feature? I like that idea, and there's definitely space on the 2x2.
@mikelong @ebrinkman: I sat down with @npost this morning to discuss what this research plan might look like, and I'm not sure that VSM is the best candidate for this particular research OKR. We struggled a bit to come up with a research plan that made sense.
Nick already led on an extensive research spike in 12.5 that led to a number of Analytics insights. We've used this extensively to inform our roadmap.
We haven't gotten far enough on our Analytics strategy to think that there'd be a significant change in user sentiment since 12.5. We've shipped Code Review Analytics and are in the process of shipping group-level activity and customizable VSA, but we haven't gotten these to market yet.
I also would prefer that @npost stay focused on our roadmap; we're not far removed from just-in-time design mode, and we've also got a new PM getting set to join next week.
When would you like to revisit the maturity rating and validate it with end users?
We'll have made substantive progress in ~2 milestones, so I'd like to conduct this at the tail end of May. If we're considering a similar OKR for Q2, I absolutely think this would be a great candidate.
A. JTBD scope mismatch
At the moment with VSA, in order to gather appropriate category maturity feedback, we would need to either:
- Reduce the scope of our JTBD down to something unrealistic such as "Understand how long my value stream takes"
- Use a fully-scoped JTBD and get useless feedback
B. Harder to source director-level participants
We also need to keep in mind that sourcing Engineering Directors as research participants is much harder than recruiting the developers who are our typical participants. Therefore, we need to be more strategic with our research participants.
C. Not the right time to leverage the methodology
As mentioned by @jeffcrow: "This process is meant to be completed only when a team believes they have gone up in maturity." I think we are all confident that VSA is still at minimal maturity and hasn't changed.
Recommendations
I. Go lightweight or wait a couple of months
If we were desperate for some level of category validation on VSA right now, I think we would benefit from something lighter weight such as a survey (in which we could combine problem validation + solution validation). If we wanted to test out the full-fat category maturity process, I agree with @jeremy that we'd be in a better position in a couple of months' time.
II. Problem validation on VSA
I'd also add that our research spike was more persona-focused (on Engineering Directors) than value-stream-focused. We would certainly benefit from more problem validation around VSA and a deep dive on one of its stages, like Plan or Create, especially because Engineering Directors are not necessarily the primary persona for this category, as demonstrated by their low interest in this topic. This would help to generate and refine the list of JTBD that we would want to validate in our category maturity work further down the line.
@npost: Thank you for articulating this so clearly. It sounds like a category maturity validation activity will be more appropriate when we want to test our belief that we're moving from minimal to viable, after a couple of milestones.
@mikelong When it comes to user validation, one challenge is that this whole product category is itself quite young, with vendors generally ahead of the market and relatively slow customer adoption across all vendors. In that environment, the results of customer input (which I absolutely think is valuable) will be especially sensitive to a very uneven distribution of customer awareness, expertise, and point of view, as well as to how we articulate our POV on the market area and position the offering. It will be quite easy to intentionally or unintentionally skew the results. This argues for us to weigh our own vision and positioning heavily in the decision making, to be realistic with ourselves, and to be transparent with others about what that customer data really means, so as not to overstate its voice when summarizing market sentiment. We should also pay extra close attention to the methodology we use here, the specific customers giving input, etc.
> When it comes to user validation, one challenge is that this whole product category is itself quite young with vendors generally ahead of the market and relatively slow customer adoption across all vendors.
I agree. The fact that an audience checks in on value stream metrics implies they have an agenda of continuous improvement. It requires a level of maturity (or at least a working knowledge) to respond to the data they're receiving.
I believe we have to meet people where they are by presenting a capability to them that feels familiar and exciting... aspirational instead of "powerful". As the team progresses on the roadmap, testing solution ideas with appropriate audiences will help keep things on track.
@uhlexsis @kokeefe I'm wondering if you might run into the same situation as Nick and Jeremy, or if plan:roadmaps might be in a different place and ready to validate category maturity. Thoughts?
@mikelong Roadmaps are currently at minimal, and we haven't had the bandwidth to improve them much recently. Here is what I currently have tracked for us to reach Viable (using previous maturity frameworks). JTBD for roadmaps are currently being mapped (as I have bandwidth), so I think we could have a solid state for those soon.
Might it be more beneficial to leverage this against a category that has matured recently?
@kokeefe I still think it would be useful to validate the existing roadmap maturity as well as what you're tracking for Viable. Perhaps customers are using roadmaps more than we know, and they're solving problems that we aren't aware of?
Mike Long changed the description
Mike Long changed title from FY21-Q1 UX Quality Product OKR (Dev Section) to FY21-Q1 UX Quality Product OKR (Dev Section) — Prioritize two categories to validate with Category Maturity Scorecard methodology
Mike Long changed title from FY21-Q1 UX Quality Product OKR (Dev Section) — Prioritize two categories to validate with Category Maturity Scorecard methodology to FY21-Q1 UX Quality Product OKR (Dev Section) — Prioritize two categories to validate with Category Maturity Scorecard methodology => 50%
Mike Long changed title from FY21-Q1 UX Quality Product OKR (Dev Section) — Prioritize two categories to validate with Category Maturity Scorecard methodology => 50% to FY21-Q1 UX Quality Product OKR (Dev Section) — Prioritize two categories to validate with Category Maturity Scorecard methodology => 0%