MR size and performance research synthesis
What's this issue all about? (Background and context)
In internal and external surveys, “dealing with large MRs” was ranked the number one problem users have today with our code review features. It comes up frequently in issues, social media, and customer conversations, and “performance problems” is a common theme.
The large MRs epic provides more complete context and historical background.
What are the overarching goals for the research?
Before we dive into this problem, we need to build a complete picture of what we already know. By synthesizing the existing research and user feedback, we can identify high-confidence and low-confidence areas. Ultimately, this synthesis will tell us what to do next.
What hypotheses and/or assumptions do you have?
We believe that when users talk about “dealing with large MRs,” they are describing two separate problems: a slow user interface and the difficulty of managing a large review.
What research questions are you trying to answer?
- Which parts of this problem do we have high confidence in, and which parts have low confidence and need additional research?
- For the high-confidence areas, are there actionable themes we can start addressing iteratively?
What persona, persona segment, or customer type experiences the problem most acutely?
What, if any, relevant prior research already exists?
What timescales do you have in mind for the research?
1 week.
Who will be leading the research?
Relevant links (opportunity canvas, discussion guide, notes, etc.)
- Epic: 🐳 Support for large merge requests
- Insights from Code Review customers
- Internal code review process problems
- External code review process problems
- File-by-File (MR page & Diffs feedback) - User Interviews - October 2019
- MR performance research - August 2020
- Product Discovery for progressive merge request discussions
- UXR insights with “Large MR”
- Known MR issues - 2020