@jeffcrow This should be part of offboarding to @alasch, which I believe you already have covered. It actually needs to be done next week, so good timing...
@jheimbuck_gl - We'd talked about renaming the feature Visual Reviews for the purposes of this survey. Can we do that in features.yaml this week, to something that won't confuse survey participants into thinking it is Review Apps?
Thanks for the reminder @kencjohnston. What do you think about "Review App Comments" to clarify that this is NOT Review Apps, but a feature they can use (as could other environments)?
"Comment directly on Review Apps" or "Review App Comments" both seem like viable options if Visual Reviews isn't the approach right now, @jheimbuck_gl @kencjohnston. Although I do like the sound and mental imagery that come with "Visual Reviews," it would need a little more context or a way to limit confusion with Review Apps, agreed.
@alasch Thanks! It doesn't seem like there are any changes from any other product leads, so if the latest feature lists match our SSOT page when you look it over, please let me know and I'll finish my tasks to close this one out.
@justinfarris @joshlambert @david @kencjohnston I went through the SSOT handbook page and compared it against the feature list in the survey. I noticed a few duplicates, where features are listed two or three times within a stage or across stages. There were also a few features missing from the survey list, which I've now added. Can you please review the duplicates outlined here and let me know if this is intentional, in which case the features should be named differently to reflect that they are different? If not, please update the handbook page and remove the duplicates from there. Thank you!
@alasch Thanks for rallying us around this! FYI, both @david and @joshua (whom I'd DM'd on Slack) said the SSOT page looked good... Not sure if any of the duplicates you've found are in their areas...
@alasch Yup, @kencjohnston is correct; we purposely do not categorize the survey list in any way, both to avoid biasing the responders and because some features span categories. We simply want to show the list that differentiates each tier from the others, so we can understand what features motivate new purchases (and eventually upgrades, etc.). If you want the lists to be more digestible, given that they're long, perhaps we can put them in alphabetical order?
@fseifoddini Why do you think categorizing the items would introduce bias? The order of the categories and list items (ideally) would be randomized, provided Qualtrics has that capability.
Note that alphabetical order would also introduce bias, as people tend to focus on the top and bottom of longer lists. If alphabetical, the list would always remain in the same order and never be randomized.
@asmolinski2 The features should be in no special groupings so as to not bias what responders focus on; we want them to think of and/or see the feature that motivated them in the list with as little guidance as possible. So I totally agree that randomized is best: a straight list, with no other organizing principle.
We currently have a good response rate on this survey (over 7%), so I'm not sure what problem we're trying to solve by changing how we present the list right now, but I'm always open to improvement! I thought your concern was that as users think of a feature name, we want them to easily find it on the list, so I suggested alpha order, but I agree with you that randomized is better. So I say we do randomized, in the current format, and leave it there for now.
If UXR would like to propose other ways of organizing the list, I recommend we raise an issue proposing other options and how they could benefit us. Then we could test those out, compare response rates to the current survey, and take it from there!
@fseifoddini I'm not that concerned about the response rate; it's the validity of the answers I'm worried about. Providing a lengthy list with items in no particular order makes it difficult for people to find the items they want. At some point they may give up and select whatever, rather than reading/scanning through that entire list.
Unfortunately, it's difficult to measure that. It'd probably require an interview follow-up to understand whether the items they selected match what they verbally tell us, and to look for differences, which may or may not already exist in that list. It's a high degree of effort to get there, and I don't have concrete evidence to suggest the data we're getting is invalid. I just know that, from my experience designing surveys, I typically aim for a list of 10-12 items max to make it easier to parse.
@asmolinski2 Yup, I get it. I'm good with y'all leading an effort to improve how we present the list and raising a proposal issue as appropriate. For now, @alasch and I will keep the current format, which presents the list in no particular order and without any categorization.
I am concerned about the response rate, though, so I'd want to test out new formats to assess that before we fully change it up.
@justinfarris Weird. I don't see it as a duplicate either anymore on that page. Never mind and thanks for checking. The sheet is updated and doesn't show it as a duplicate anymore.