Product Managers need to prioritize which issues should be worked on first. One way to look at that is to examine the value delivered by a feature vs the cost to deliver it, aka bang-for-buck.
It's hard to accurately determine value and cost. One proxy for value can be how much interest an issue has garnered, as measured by thumbs-up (👍) emoji reactions on the issue. Cost can be determined by the weight assigned to an issue. While these aren't perfectly accurate, they could be sufficient for some kinds of planning. Further iterations can improve on the measures, but we can start with this simple MVC. Since those two mechanisms already exist, we don't need an interface to add cost and value determinations.
Proposal
Add a new view to the Issue menu named Value vs Cost.
The page renders issues graphed with cost (weight) on the X axis and value (thumbs-up count) on the Y axis.
It may be necessary to filter the issues list so that the display is a manageable subset. It would certainly be necessary for the GitLab project, but other projects may not have that many issues, so we should start without filtering.
It's common for a view like this to divide each axis into 2 or 3 sections, for high/medium/low values of cost and value. For simplicity, we could simply list issues in each quadrant rather than placing them exactly according to their measures.
Similar tools allow dragging and dropping issues between the various quadrants, but that is explicitly out of scope for this MVC.
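For illustration, here's a minimal sketch of the kind of plot described above (in the spirit of the Jupyter prototype discussed later in this issue). The sample data is made up; in practice the weight and upvote counts would come from the issues themselves, e.g. via the GitLab Issues API.

```python
# Minimal sketch: plot issues as cost (weight) on X vs value (upvotes) on Y.
# The `issues` list is placeholder data for illustration only.
import matplotlib.pyplot as plt

issues = [
    {"iid": 101, "weight": 2, "upvotes": 40},
    {"iid": 102, "weight": 8, "upvotes": 5},
    {"iid": 103, "weight": 5, "upvotes": 60},
]

weights = [issue["weight"] for issue in issues]
upvotes = [issue["upvotes"] for issue in issues]

plt.scatter(weights, upvotes)
plt.xlabel("Cost (weight)")
plt.ylabel("Value (upvotes)")
plt.title("Value vs Cost")
plt.show()
```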
Permissions and Security
Anyone with permission to view issues should be able to see this view.
Documentation
Availability & Testing
What does success look like, and how can we measure that?
Within 2 months, usage of this view should exceed 4% of issue board usage.
Usage of GitLab by Product Managers should increase.
What is the type of buyer?
Since this generally spans multiple teams (engineering and product), the buyer is likely at least Director level.
@markpundsack this is essentially the Eisenhower matrix/method. My thought here is that this could be made more powerful by making it more flexible, so that variables other than just value vs cost could be represented. Think of impact vs effort, or immediacy vs feasibility, etc.
In design thinking exercises, these matrices can even be overlapped like a heatmap to spot good opportunity items across multiple exercises.
An idea here would be to use labels with floating-point numbers that change depending on how issues are dragged across these surfaces.
For example, matrix::immediacy::0.76 and matrix::feasibility::0.34.
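To make that concrete, here's a hypothetical sketch of how such labels could be read back into axis values. The matrix::<axis>::<float> format follows the examples above; the helper name is made up for illustration.

```python
# Hypothetical helper: parse matrix::<axis>::<float> scoped labels into axis values,
# e.g. ["matrix::immediacy::0.76", "matrix::feasibility::0.34"]
#   -> {"immediacy": 0.76, "feasibility": 0.34}
def parse_matrix_labels(labels):
    axes = {}
    for label in labels:
        parts = label.split("::")
        if len(parts) == 3 and parts[0] == "matrix":
            try:
                axes[parts[1]] = float(parts[2])
            except ValueError:
                pass  # ignore labels whose last segment isn't a number
    return axes

print(parse_matrix_labels(["matrix::immediacy::0.76", "matrix::feasibility::0.34"]))
```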
This seems like a pretty good solution for some of the prioritization problems I face with teams, particularly with multiple PMs on our team weighing in.
Would love to see more Product Management tools integrated into our team's Git workflow. Most of our PMs are highly technical, and as such already spend a lot of time in our Git repositories. Moving some of the communication/planning in here could reduce some of the back and forth between our Notion documentation and our repository.
@brentmartin Thanks for the feedback! That hits it exactly. PMs are already using GitLab, but we can bring them even closer and make communication between teams easier by bringing it all into one platform.
One possible problem is the percentage of weighted issues.
As of last week, of the top 100 issues 21 were weighted. Of the top 1000, 85 were weighted. And of the top 10000, 678 were weighted.
Can we find proxy measures for cost to get to a somewhat "automatic" weighting? Number of comments, length of discussion, number of MRs and related issues etc?
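As a rough illustration of that idea (not a proposed formula), a proxy weight could be a weighted sum of activity signals; the signal names and coefficients below are arbitrary assumptions.

```python
# Sketch of an "automatic" weight proxy built from activity signals.
# Coefficients are arbitrary placeholders, purely for illustration.
def proxy_weight(notes_count, related_mrs, related_issues):
    score = 0.5 * notes_count + 2.0 * related_mrs + 1.0 * related_issues
    # Clamp into the 0-10 weight range discussed elsewhere in this issue.
    return min(10, round(score / 10))

print(proxy_weight(notes_count=42, related_mrs=3, related_issues=5))  # -> 3
```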
As for upvotes, they are a useful measure of "value" for us, but most customers don't have a public issue tracker where their own customers can upvote issues. And while a PM can just adjust the weight, they cannot just add an arbitrary number of upvotes.
Can we find a way for PMs to input the value manually? Can we find proxy measures for value as well?
Weighting is definitely a challenge if your group isn't already using weights, or, for example, only adds weights when items enter a sprint and leaves the backlog unweighted. But that doesn't mean all is lost.
1. Some teams use weights extensively, so it could work for them immediately.
2. For planning, you could add weights to a subset of issues specifically for the planning process, e.g. you're only going to apply the exercise to a handful of "big bets", not every issue in the tracker. That implies the MVC might need a label-based issue filter to be useful.
3. If the view turns out to be useful, people may change their behavior to make it even more useful, e.g. maybe people will start weighing backlog items too...
Automatic proxies are an interesting idea, but I would punt on that until post-MVC. Actually, maybe punt on it forever. I think a better path would be to allow people to adjust the "cost" in real time in the interface, e.g. drag-and-drop to the quadrant you believe it falls in. That's how I've done it in the past. But that is for sure post-MVC.
Totally agree that upvotes aren't a perfect measure, especially for us where customer votes aren't tallied in public. But, again, this is an MVC. Adjustments could come later.
@prakash1003 Awesome, thanks! It hasn't been broken into smaller tasks yet.
I'm thinking the first view is really simple, the 3x3 grid. We might need to be able to filter the issues, but not for the very first iteration.
We could use hard-coded limits for each quadrant value. Like 10 upvotes makes it medium and 50 upvotes makes it high.
Similarly, weight should be hard-coded at first. Like 0-3 is low, 4-6 is medium, and 7-10 is high. I know that won't work for everyone, but the first test is to validate the direction, not the specifics.
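A tiny sketch of that bucketing, assuming the hard-coded thresholds stay exactly as stated above:

```python
# Hard-coded buckets from the discussion: 10+ upvotes is medium value, 50+ is high;
# weight 0-3 is low cost, 4-6 is medium, 7+ is high.
def value_bucket(upvotes):
    if upvotes >= 50:
        return "high"
    if upvotes >= 10:
        return "medium"
    return "low"

def cost_bucket(weight):
    if weight <= 3:
        return "low"
    if weight <= 6:
        return "medium"
    return "high"

# Each issue lands in one of the nine (cost, value) cells of the 3x3 grid.
print(cost_bucket(2), value_bucket(60))  # -> low high
```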
We could start by graphing just a dot for each issue, like in the Jupyter prototype, or by listing out each issue in text. Whatever's easiest. In the past, I've always worked with text in the boxes, but the Jupyter prototype above could be just as effective for a first iteration. You won't be able to see what each issue is, but it's an easy click to open them in other tabs. Since the info is read-only, that still works. When we move to read-write, it'll be more important to see individual issue content. Then again, maybe that even works as hover content. The real reason for having all the text in the boxes is for showing the results to others, so they can see all the issues at a glance.
I don't think it's super important to have exact placement within the squares. That's why I think a 3x3 can work fine. Whether your item is a 1.1 or a 1.2 isn't nearly as important as whether it's low/medium/high.
Flexibility will likely be important, but later. People will want to be able to rename the axes. Bang-for-buck, cost-vs-value, risk-vs-reward, etc. But let's pick one for now. My suggestion is value vs cost.
Likewise, people will want different determinations of cost and value. Let's punt on that for now.
Eventually, we'll want to easily allow people to change the cost and value. That gets harder if you're looking at floating-point data underneath, but is really easy if it's simple labels for high/medium/low. The eventual view may not even use thumbs-up. Or maybe it's two completely different flows: use the thumbs-up for inspiration, but then have a different interactive view to actually do your planning. Or a single view that seeds the data from upvotes and weights, and then lets the PM and team override those values, with the net result being stored as scoped labels, e.g. cost::high, value::low.
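A small sketch of that seed-then-override idea, assuming scoped labels like cost::high and value::low and the low/medium/high buckets discussed above; the helper is hypothetical.

```python
# Start from a bucket seeded by upvotes/weight, but let a scoped label override it.
def effective_bucket(axis, labels, seeded):
    prefix = axis + "::"
    for label in labels:
        if label.startswith(prefix):
            return label[len(prefix):]  # manual override wins
    return seeded  # fall back to the seeded (upvotes/weight) bucket

print(effective_bucket("cost", ["value::high"], seeded="medium"))               # -> medium
print(effective_bucket("cost", ["cost::low", "value::high"], seeded="medium"))  # -> low
```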
FWIW, in a previous life when we did this, all issues were just in a list and had to be manually drag-and-dropped into the quadrants. Any automatic data from thumbs-up and weights would be pure bonus.
1. A scatter plot, like in the Jupyter prototype, with dots for issues. This might be the easiest thing to implement since you don't need to translate the data at all, just graph it. The downside is that if you try to put the issue titles in the graph, it'll probably be too cluttered, so just like the prototype, leave the labels off.
2. A 2x2 or 3x3 grid where issues are bucketed and displayed as lists in each quadrant.
I'm fine either way. The goal is to get an MVC out as quickly as possible and start playing with it. Put it behind a feature flag and actually get it into production.
Thinking a bit more about it, the decision on which way to go depends on fitness for use. What is the use case of this particular view for the PMs? You could ask your team for their needs and decide where to go based on that.
A scatter plot like the Jupyter prototype provides an "at a glance" overview to detect outliers in value vs cost. I might not even need a label on each issue, because what I want to do is "skim off" the top of the value markers anyway and just see how a number of issues compare. For this overview, the plot works very well. Issue filtering would definitely be needed, so the plot probably only works in conjunction with an issue list that allows searching and filtering so it doesn't get too busy.
If PMs want an easy way to prioritize, the most important issues are the big ones and the low-hanging fruit. At its most basic, this could just be another sort attribute in the issue list. Call it priority score or something similar, or break it up into two attributes.
In the same vein, this could just enhance the issue list with a calculated value. If the individual matrix quadrants are categories we assign to issues, this is easy to represent in a filterable, sortable table.
Priority score here is just upvotes + max weight / weight * upvotes.
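One possible reading of that formula, with the operator precedence made explicit as an assumption (upvotes + (max weight / weight) * upvotes), so low-weight issues with many upvotes get boosted the most:

```python
# One reading of the priority score above; the parenthesization and the default
# max_weight of 10 are assumptions for illustration.
def priority_score(upvotes, weight, max_weight=10):
    if weight == 0:
        return float(upvotes)  # avoid division by zero for unweighted issues
    return upvotes + (max_weight / weight) * upvotes

print(priority_score(upvotes=40, weight=2))  # -> 240.0
print(priority_score(upvotes=40, weight=8))  # -> 90.0
```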
Scoped labels could also work. Using a bot to assign the value automatically would be possible today; however, we cannot yet add custom attributes to issues, and a score would necessarily be a numeric value that cannot be represented well using labels.
If it is supposed to be a conversational and/or interactive tool, it could be a specific version of a board ("PM board"). The columns are defined by cost (either via manual labels or automatically from weights). There are swimlanes for value. Inside each column+lane, issues are automatically sorted by the value/upvote attribute. This instantly enables the manual drag-and-drop functionality in a familiar environment, with support for automatic classification/sorting based on existing attributes.
I think it's pretty important that the result is visual. A list just doesn't have the same impact.
The boring solution is to see what's already used by PMs and simply implement that view. I've used the interactive 3x3 before, so I'm biased towards that, but I understand other PM orgs have variations.
In my usage, the cost and value components weren't taken as facts, they were debated and adjusted by the team, so having it interactive was key.
Because we've started with upvotes and weights, it took us in a different direction. Both are valuable, but in different ways. As crazy as that scatter plot looks, it has value. It might be more niche value though, since a list sorted by thumbs-up may be just as informative. The plot visualization helps people comprehend the list better.
Hey @markpundsack, I may not be able to get started on this work right now. I am going to closely monitor this issue, though, and contribute as soon as I free up my schedule. Sorry about this, just wanted to give you a quick update!
@markpundsack this is a great idea. I'm going to explore this a bit further and do an experiment with GitLab + Sisense but if the outcome of the experiment is useful, we'll look at fast tracking this (and probably other prioritization models like WSJF) into GitLab proper.
@gweaver: I'd like Analytics to consider prioritizing this issue for an MVC. This is on-theme with the group's focus on dogfooding and could be a big help in establishing a standard view of "importance" (we currently have at least 5 different versions of the same "priority" labels in gitlab-org) and helping PMs prioritize.
It would be extremely cool to eventually embed this in the handbook filtered for different groups, too.
@npost: What do you think about us possibly prioritizing an MVC here for %13.4?
@npost It may work in the Enterprise DevOps Reports category but I think it falls more between Planning Analytics and Value Stream Management. This also works towards solutions for several of Plan's core JTBD:
- When working through a list of ideas, I need to be able to validate that an opportunity is valuable and worth funding a solution, so I can deliver value to my customers and drive success for the org.
- When working through a list of validated opportunities, I need to be able to understand effort and map ROI for each, so I can prioritize the right sequence of work.
- When creating or revising an initiative's operational plan, I need to allocate resources to it so it is clear what the projected costs will be and what capacity is available to complete work on an ongoing basis.
- When conducting Product Increment or Sprint planning, my team needs to convert prioritized initiatives and their business objectives into estimate-able work items to be delivered, so that we can sequence work according to available capacity and relative ROI.
These are just the "front side" of the planning cycle. We could eventually extend this to treat the initial prioritization as a hypothesis and then layer the actual results into the issue/epic, which would solve for another 1/3 of our core JTBD.
Random thoughts (note these don't all have to be in the MVC):
- I should be able to adjust both the value and cost dimensions manually. Dragging individual issues would be nice, but I'm OK with adjusting manual values in the first iteration.
- I'll want to filter the view for specific labels so I'm able to see a subset of issues.
- The matrix needs to consider how we handle lots of issues.
- We can help with the "lots of issues" problem by not displaying issues without a weight or value.
- Do we do this for epics too? Maybe a future Ultimate iteration allows me to do this for epics, so I can prioritize different initiatives. Click into an epic from that view to drill into the value/cost for child issues.
- I think weight works well for cost, but I don't think upvotes always work well for value. It's a great default starting point, though. I'd just want to make sure we can manually adjust the value.
- I'd want to be able to filter the issue list for the value/cost dimensions somehow. I can see myself wanting to find high-value issues by searching for issues where value > 0.75 or something similar. We can already filter by weight, but I don't think we can do greater/less than.
- I like the value/feasibility approach (#202041 (comment 282667538)) because up-and-to-the-right should always be most desirable, unless it's more confusing to do it differently.
> I should be able to adjust both the value and cost dimensions manually. Dragging individual issues would be nice, but I'm OK with adjusting manual values in the first iteration.
There are a few concepts here:
- Individual editing
- Bulk editing
- Bulk analysis
> I think weight works well for cost, but I don't think upvotes always work well for value. It's a great default starting point, though. I'd just want to make sure we can manually adjust the value.
Agreed, although I think upvotes (especially if sourced from stakeholders/customers) are a useful signal. Maybe this metric could be reframed as something like "popularity" rather than "value" to be more specific in earlier iterations. Alternatively, you could call it something like "Estimated value".
Any issues that have then been valued and prioritised could eventually be auto-allocated from a backlog into @gweaver @hollyreynolds's iterations.
@npost @jeremy I think it would be awesome if you wanted to prioritize this. I would encourage us to think about dogfooding, but also about establishing a longer-term vision around prioritization that would provide maximum value to the wider community, based on common frameworks teams are already using, while doing our best to iterate on our existing features. So I'm going to go on a slight tangent to provide all the context in my head, so we can work towards that ideal vision and our first MVC as efficiently as possible.
Frameworks & Current Demand
Out of the 9 most popular prioritization frameworks, we've had requests to add support for:
I think all of these prioritization frameworks struggle with ambiguity around determining what the value is for any given piece of work. Solving for that is going to be a long-term challenge, but it is one of the largest opportunities I picked up on when speaking with several analysts from Gartner about the biggest challenges enterprises are currently facing and how they are thinking about value stream management.
Given that, I believe the best framework to start with is Cost of Delay because:
- It would allow us to support WSJF, which is the primary prioritization mechanism used in SAFe. SAFe is also the de facto scaled agile framework, with adoption rates speeding up, not slowing down.
- It allows you to quantify a backlog in terms of money, helps PMs make better decisions based on the value that matters most to the company, and changes the team's mindset around features from cost/efficiency to speed/value.
One of the better learning resources I've found for understanding CoD is here. The thing that works well about their approach is separating value from urgency. By additionally layering on duration (weight), you can deliver more total value in a given time when capacity is scarce or constrained.
Implementation ideas
- Most organizations will prioritize at the epic level, not the issue level. I think we should support both, and if we can only pick one, I'd pick epics -- especially as we're about to kick off working on Epic Boards, and we will have epic swimlanes on Boards in 13.3, where we could expose some of the sorting/ordering based on priority.
- We already have priority labels. I think we should either build on top of them or migrate them into a first-class prioritization experience within GitLab. It's going to get very confusing if we layer another prioritization framework on top of the existing label priority and priority sort options.
- There is a broad spectrum of Cost of Delay approaches. Ideally, we would work towards Quantified Cost of Delay.
- I would encourage us not to use award emojis as a basis for anything, because they are most relevant to FOSS communities, whereas enterprises do a lot of top-down planning based on business value -- which can't really be captured on an issue currently. It would be locally optimizing for GitLab if we did rely on them.
A few ideas below...
Qualitative: Extend "Priority Labels" to support generic priority matrixing
Instead of having a single list of "priority labels", we allow users to configure prioritized labels on a matrix. It could be flexible enough to support 1x1, 1x4, 2x3, etc. We would want to put some constraints on it, but at the end of the day, we can plot issues on the matrix. When clicking on a quadrant, we could show the list of issues within that quadrant.
We could then layer in weight as the duration so the formula would be something like:
CD3 = (xAxis [1,2,3] x yAxis [1,2,3]) / weight
Quantitative: Add base "prioritization" model with "types" where Cost of Delay Divided By Duration is the first type
We could then layer in weight as the duration so the formula would be something like:
CD3 = (value x urgency) / weight
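As a hedged sketch of that calculation (illustrative numbers only, not a proposed implementation):

```python
# CD3 = (value x urgency) / weight, with weight standing in for duration.
def cd3(value, urgency, weight):
    if weight == 0:
        raise ValueError("weight (duration) must be non-zero")
    return (value * urgency) / weight

# Higher CD3 means more value delivered per unit of duration, so sort descending.
epics = [
    {"title": "Epic A", "value": 100_000, "urgency": 3, "weight": 5},
    {"title": "Epic B", "value": 40_000, "urgency": 8, "weight": 2},
]
for epic in sorted(epics, key=lambda e: cd3(e["value"], e["urgency"], e["weight"]), reverse=True):
    print(epic["title"], cd3(epic["value"], epic["urgency"], epic["weight"]))
```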
This option would allow us to get straight to using money, make it possible to integrate additional prioritization framework types, and allow the end user to select which of the 9 most popular frameworks they want to leverage (or, even better, mix and match).
An additional benefit of adding the profile types is the ability to further categorize priority depending upon what the most important outcomes for the business are.
Dogfooding
For GitLab to dogfood CD3 effectively, we need to calculate CD3 in aggregate. If we want to use truly quantifiable values, then we would need to take into account each customer opportunity we have and its monetary value, and split that value amongst their most important/urgent requests. Customer A's value and urgency are likely to be different from Customer B's. I'm not suggesting we solve for this within the product at first, as we can play around with different aggregation approaches in Sisense/Snowflake and update the value field programmatically via APIs -- but it is something to consider.
@gweaver: Thanks a lot. I'll rely on @npost here to drive this; Nick, thanks in advance for working with Gabe here in the working group.
I like the idea of using CoD. I wonder if we can iterate toward this and start with a single dimension like Value and add in Urgency later. Most of what you mentioned seems possible and valuable with just value and weight (cost).
I would also rather lean away from upvotes/emoji than toward it. This is helpful at GitLab, but it's a popularity measure vs. a value/urgency indicator, and enterprise-scale GitLab instances probably don't use it much if at all.
You mentioning Benefit Type reminded me to create #234000.
Leveraging this at the epic level allows us to support planning paths common to orgs leveraging Plan, and allows us to dogfood both epics and a framework for better reporting and communication.