In-product survey on MR performance
A longitudinal in-product survey that gives us a human-reported measure of how people subjectively experience MR performance, inspired by Google's HaTS framework. Without it, we might improve performance objectively while people still complain about it. Tracking user sentiment from the beginning gives us a more holistic picture.
Ideal solution
Our ideal solution is to present a survey of 3 questions when people visit the MR page: not just a banner linking out to an externally launched survey, but a banner-like view that renders the full survey in place, one question at a time (similar to Google's HaTS). Presenting the survey in context increases the likelihood of users responding and the accuracy of the data, because they are not taken out of their task.
Sampling
The survey is shown to a defined subset of SaaS users who are actively engaged with merge requests. There are six subsets, and since users can belong to more than one, a mechanism needs to be in place to ensure that an individual user receives only one survey prompt. If a user dismisses the survey, they shouldn't be prompted again. For each day the survey runs, 3% of users from each subset should be presented with the survey after they have been on the MR page for 10 seconds. (Exact percentages and timing TBD.)
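The one-prompt-per-user and dismissal rules above can be sketched as a daily sampling step. This is an illustrative sketch only: the function and parameter names are hypothetical, and the 3% rate and exclusion sets stand in for whatever tracking the implementation actually uses.

```python
import random

# Placeholder rate from the paragraph above; exact percentage is TBD.
SAMPLE_RATE = 0.03


def select_daily_sample(subset_user_ids: list[int],
                        already_prompted: set[int],
                        dismissed: set[int],
                        rng: random.Random) -> set[int]:
    """Pick today's sample from one subset.

    Users who have already been prompted (from any subset) or who
    dismissed the survey are excluded, enforcing the rule that an
    individual user receives only one survey prompt.
    """
    eligible = [u for u in subset_user_ids
                if u not in already_prompted and u not in dismissed]
    k = round(len(eligible) * SAMPLE_RATE)
    return set(rng.sample(eligible, k))
```

The 10-second dwell requirement is a separate, client-side check: a sampled user only actually sees the survey once they have spent 10 seconds on the MR page.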
Subsets (also referred to as buckets in the comments below)
- MR high activity level: Users with at least 5 MR days a month
- MR low activity level: Users with 4 or fewer MR days a month
- Paid subscription users: Users that have a paid subscription (not separated by a specific plan type)
- Free users: Users that don’t have a paid subscription
- Mature users: Users that have been active for more than 180 days
- New users: Users that have been active for less than 180 days
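The six subsets form three mutually exclusive pairs (activity level, subscription, tenure), so every user lands in exactly three buckets — which is why the dedupe mechanism in the Sampling section is needed. A minimal sketch of the assignment, using hypothetical field and bucket names (the thresholds come from the definitions above; note the definitions leave exactly 180 days unspecified, treated as "new" here):

```python
from dataclasses import dataclass


@dataclass
class User:
    mr_days_last_month: int      # days with MR activity in the last month
    has_paid_subscription: bool
    days_active: int             # days since the account became active


def assign_buckets(user: User) -> list[str]:
    """Return one bucket from each of the three pairs of subsets."""
    return [
        "mr_high_activity" if user.mr_days_last_month >= 5 else "mr_low_activity",
        "paid" if user.has_paid_subscription else "free",
        "mature" if user.days_active > 180 else "new",
    ]
```

For example, a paid user with 6 MR days last month and 200 active days falls into the high-activity, paid, and mature buckets, and must still receive only one survey prompt.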
Survey questions
We aim to understand users' subjective experience with MR performance and their overall experience with merge requests. We will also ask for their job title to align data collection with previous research efforts (and it's a data point we can't derive ourselves). Current question wording and order for this ideal solution:
- Overall, how satisfied or dissatisfied are you with merge requests, today?
- How satisfied or dissatisfied are you with the speed/performance of this merge request?
- What’s your current job title?
Nice to have vs must have
- Must have: The sampling mechanism outlined above lets us stay in control of who sees the survey and make adjustments based on response rates. It ensures we collect data from relevant user groups.
- Nice to have: Showing the full survey in context. If that's not feasible, we are confident that launching the survey externally from a banner would still collect relevant data. It might take longer, though, and there is a risk that users drop out at the second question (which is the most important one).
Other solutions
There is a broadcast messages feature (notification type) available to source survey participants on SaaS. This option doesn’t require any engineering effort and has an established process outlined in the handbook.
However, it is less flexible: the message floats above the page in the bottom right corner of the screen (see example), which can decrease visibility since it blends with the MR sidebar. It doesn't allow for more granular sampling beyond selecting a "target path", and we are unable to pipe in additional data, such as the subset characteristics.