Update & clarify the Growth experimentation process
Why is this change being made?
It’s been quite a while since the section was created and began the exploratory work of defining an experimentation process & trying out different experimentation tools. We’re now reaching a crunch point where we want a well-defined & standardized process for how we think about, define, and run experiments at GitLab (across GitLab.com, the Customers app, & even self-managed).
Beyond that, GitLab (the company) recently shifted its focus from creating new features to improving existing features & the performance of GitLab (the product). As such, and as @nicolasdular recently mentioned in a Growth Eng. Weekly, right now seems like a good time “for us as a Growth team to support the rest of the company if they want to run experiments.”
We now have a lot of experience running experiments and using a few different tools to do so. We likely have some individual group processes, or even individual PM processes (perhaps just habitual and not written down), that we can coalesce, condense, & standardize into an experimentation process that works not only for Growth, but for all product development sections.
This idea was also brought up during a recent Growth Weekly meeting.
A Note About Scope
As much as possible, let’s keep the scope of this MR to the process of creating, defining, & running experiments, and refrain from framing any discussions in terms of tooling (primarily the tools used to actually run experiments, such as Flipper, Unleash, LaunchDarkly, Scientist, etc.). Including known & required tools as part of our process (such as GitLab MRs, issues, epics, & labels, or Slack notifications) is unavoidable, as those tools are very much part of how we do everything at GitLab.
Define the experimentation process
- how we use labels & what labels we use
- how we progress issues through a workflow (see the experimentation tracking board)
- what are the different stages?
- who is responsible for each stage and for progressing the experiment to the next stage?
- how we create/define the experiments themselves (what makes a good experiment definition/outline?)
- and who defines them? Initially this should be the PM, with engineering/UX chiming in
- when does engineering need to add the necessary database tables/columns for an experiment, vs. when can the data team begin hooking those up to Snowplow? (i.e. can they try to ingest tables/columns that do not yet exist?)
- Keep in mind that some experiments will be specifically for self-managed customers and will use usage ping metrics rather than Snowplow events
- We need to navigate hypotheses for features (e.g. “we expect 1% of users to be using this feature within 1 year” – if that doesn’t happen, let’s remove the feature)
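To make the last point concrete, a feature hypothesis like the one above boils down to a simple adoption-rate check. Here is a minimal, purely illustrative sketch; the function name, metric names, and the 1% threshold are assumptions for the example, not part of any existing GitLab process or tool:

```python
def hypothesis_met(feature_users: int, total_users: int, threshold: float = 0.01) -> bool:
    """Hypothetical check: does the feature's adoption rate meet the hypothesis threshold?

    `threshold=0.01` mirrors the "1% of users within 1 year" example above.
    """
    if total_users == 0:
        return False
    return feature_users / total_users >= threshold


# 1,200 of 200,000 users is 0.6% — below the 1% target, so the hypothesis
# fails and we'd discuss removing (or iterating on) the feature.
print(hypothesis_met(1_200, 200_000))   # False
# 3,000 of 200,000 users is 1.5% — the hypothesis holds.
print(hypothesis_met(3_000, 200_000))   # True
```

In practice the inputs would come from usage ping metrics or Snowplow events, depending on where the experiment runs; the point is only that a hypothesis should be stated so that it can be mechanically evaluated against a date and a number.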
Should we create an epic and have part of it deal with the process and part of it deal with the tooling?
- Yes. I believe we should.
- Provided a concise title for the MR
- Added a description to this MR explaining the reasons for the proposed change, per [say-why-not-just-what][transparency]
Assign this change to the correct DRI
- If the DRI for the page/s being updated isn’t immediately clear, then assign it to your manager.
- If your manager does not have merge rights, please ask someone in [#mr-buddies][mr-buddies-slack] to merge it AFTER it has been approved by your manager.
- If the changes relate to any part of the project other than updates to content and/or data files, please make sure to ping @gl-static-site-editor in a comment for a review and merge. For example, changes to
Addresses #9050