
User voting on new features / controlled feature release

This is in relation to: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/7547#note_22849887

I think it would be useful to have a controlled way in which we can roll out a new feature / change. The most obvious way to do this would be to use A/B testing; however, from what I understand, we're quite a long way from getting this working in Prometheus / Piwik. Therefore, I've been trying to think of a simpler solution which we can implement quickly.

I'm proposing the following:

1) A mechanism that allows us to gradually increase the % of users that can see a feature.

Like with A/B testing, where you can specify the percentage of traffic that you would like to serve a variant to.
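
To make this concrete, here's a very rough Ruby sketch of how the percentage could work. Nothing here maps to existing GitLab code; `FeatureRollout`, `feature_name` and the stored percentage are made-up names. Hashing the user id means a given user always lands in the same bucket, so raising the percentage only ever adds users.

```ruby
require 'zlib'

# Rough sketch: bucket each user into 0..99 by hashing the feature name and
# their id together, and show the feature while the bucket falls below the
# current rollout percentage.
module FeatureRollout
  def self.enabled_for?(feature_name, user_id, percentage)
    bucket = Zlib.crc32("#{feature_name}:#{user_id}") % 100
    bucket < percentage
  end
end

# e.g. roll a hypothetical "new_navigation" feature out to 10% of users
FeatureRollout.enabled_for?('new_navigation', 1234, 10) # => true or false
```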

2) Something visually that notifies the user that they are viewing a new feature and solicits feedback.

Feedback could be as simple as a user clicking 👍 to like a feature and 👎 to say they don't like the feature.

A count of +1 would be added to a database every time a user votes.

👍 gives us the confidence to roll the feature out to more users.

👎 would indicate that usability testing needs to be conducted on the feature to find out why users don't like it.

Giving feedback should be quick and easy, no more than 1 click!

No other data on the user would be collected - we're simply adding up anonymous votes.
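
As a sketch of how little data this actually needs: something like the following would do (plain Ruby with an in-memory hash standing in for what would be a tiny database table; all names are invented).

```ruby
# Minimal sketch of the anonymous tally: one record per feature holding only
# the running totals, nothing about who voted.
class FeatureVoteTally
  Tally = Struct.new(:upvotes, :downvotes)

  def initialize
    @tallies = Hash.new { |hash, key| hash[key] = Tally.new(0, 0) }
  end

  # Called once when a user clicks 👍 or 👎. In practice this would be a
  # single "UPDATE ... SET upvotes = upvotes + 1" on a small table rather
  # than an in-memory hash.
  def record_vote(feature_name, vote)
    tally = @tallies[feature_name]
    vote == :up ? tally.upvotes += 1 : tally.downvotes += 1
  end

  def tally_for(feature_name)
    @tallies[feature_name]
  end
end

tallies = FeatureVoteTally.new
tallies.record_vote('new_navigation', :up)
tallies.record_vote('new_navigation', :down)
tallies.tally_for('new_navigation') # => #<struct upvotes=1, downvotes=1>
```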

3) A user never sees more than one voting mechanism.

Similar to A/B testing, where a user should never see more than one A/B test at a time.

This would lower the risk of the user getting confused about what feature they are voting for.

I also think it would be really annoying to get multiple notifications of new features. Once is enough.

4) Once a user has voted or dismissed the voting mechanism, they never see it again.

I know we already offer users the opportunity to vote on issues, but seeing a design versus actually interacting with something is quite different. Also, we may reach more users who don't actively vote/comment on issues.
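
For points 3 and 4 together, the bookkeeping could be as small as remembering which prompts a user has already been shown. A rough sketch (all names invented, with a Set standing in for what would be a per-user column or small join table):

```ruby
require 'set'

# Rough sketch for points 3 and 4: offer at most one prompt at a time, and
# never re-offer one the user has already voted on or dismissed.
class VotePromptPolicy
  def initialize
    @seen = Hash.new { |hash, user| hash[user] = Set.new }
  end

  # Returns the single feature the user should be prompted about (the first
  # active one they haven't dealt with yet), or nil.
  def prompt_for(user_id, active_features)
    active_features.find { |feature| !@seen[user_id].include?(feature) }
  end

  # Called when the user votes or dismisses the prompt.
  def mark_seen(user_id, feature_name)
    @seen[user_id] << feature_name
  end
end

policy = VotePromptPolicy.new
policy.prompt_for(1234, ['new_navigation']) # => "new_navigation"
policy.mark_seen(1234, 'new_navigation')
policy.prompt_for(1234, ['new_navigation']) # => nil
```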

The overall goal would be to get points 1-4 working together.

A boring solution, which would also test whether users interact with the voting mechanism at all, is to implement just points 2 and 4 for now.

We could test a new feature we feel fairly confident about (eliminating point 1).

We could test just one feature at a time (eliminating point 3, and avoiding the extra database work that would be needed to identify which feature a user has been served and what they are voting on).

@awhildy @tauriedavis - This is a very rough idea for collecting some quantitative data, what do you think? Can you think of any potential problems?