Define handbook guidance for running product beta programs
Summary
TL;DR: We’ve run multiple beta programs (starting in the Package stage), but we don’t have a clear, reusable handbook process. Let’s define a lightweight, stage-agnostic beta program framework (issues, feedback loops, metrics, and customer/field outreach), document it in the Product handbook, and link to it from the Package stage handbook.
Background
Over the last several milestones, the Package stage has run multiple beta programs (e.g., registries and metadata-related work), often with ad-hoc structures for tracking, feedback, and customer/field engagement.
The Product Development Flow handbook page already documents beta features and experiment/beta exit criteria (documentation expectations, UI labeling, and high-level exit conditions), but it doesn’t spell out an end-to-end operational process for running a beta program (e.g., how to structure issues, feedback collection, and coordination with Customer Success and field teams).
We should make that process explicit, reusable, and easy to find for any stage.
Problem
Today, beta programs across stages tend to be:
- Inconsistently structured: Some have a central tracking issue, some don’t; enrollment and feedback collection may live in comments or ad-hoc docs.
- Unclear on DRIs and roles: Who owns outreach, who owns feedback synthesis, and who owns deciding whether we exit beta can be ambiguous.
- Weakly instrumented: Success metrics, exit criteria, and telemetry are often implicit or scattered across issues, dashboards, and docs.
- Ad-hoc on customer and field outreach: Engagement with Customer Success, Sales, and Customer Programs varies by PM and program.
- Hard to learn from: There’s no standard place to look for how to run a beta, or to learn from past programs.
This makes it harder to run betas consistently, slows down onboarding of new PMs, and makes it harder for field teams and customers to understand what to expect.
Proposal
Create a handbook page and supporting templates that define a standard, minimal process for running product beta programs (initially seeded by Package stage practices, but intended to be stage-agnostic).
1. Handbook: “How to run a product beta program”
Add a new subsection under the Product Development Flow page, near the existing Beta features and Experiment and beta exit criteria sections:
- Working title: “Running product beta programs”
- Scope:
- Applies to all stages/groups running betas on GitLab.com, Dedicated, and/or Self-managed.
- Assumes feature-level maturity guidance (Experimental/Beta/GA) already exists; focuses on program operations.
Key elements to document:
- When to run a beta vs. go straight to GA
- Align with existing beta/exit criteria guidance.
- Examples of situations where a structured beta adds value (e.g., high-risk changes, large-scale migrations, net-new workflows).
- Standard artifacts every beta must have
- A Beta Program tracking issue (template).
- A Customer enrollment issue (template) where we track customers/accounts and CSM/SA owners.
- A Feedback issue (template) where we centralize qualitative and quantitative learnings.
- Roles and responsibilities
- PM / DRI: Owns program definition, success metrics, and exit decision.
- EM / Engineering: Owns technical readiness, rollout plans, and risk/guardrail definitions.
- UX / Research (where involved): Designs feedback collection and research plans.
- Customer Success / Sales / Customer Programs: Supports recruitment, onboarding, and field-side feedback.
- Support / Security / Reliability counterparts: Review supportability, security posture, and operational readiness for betas.
- High-level lifecycle
- Plan: Define scope, success metrics, exit criteria, and target customers.
- Recruit & onboard: Partner with field and customer programs, invite users, and confirm requirements (e.g., feature flags, versions).
- Run & iterate: Gather feedback, monitor metrics, fix critical issues, and adjust scope.
- Decide & communicate: Exit beta (to GA, iterate further, or cancel), and communicate outcomes to customers and internal teams.
2. Standard beta artifacts and templates
Create GitLab issue templates (likely under `.gitlab/issue_templates/` in gitlab-com/Product and/or relevant stage projects) and reference them from the handbook page.
2.1 Beta Program tracking issue (template)
Purpose: A single source of truth for the beta program.
Suggested sections:
- Overview
- Goal / problem statement.
- Hypothesis: what we expect the beta to validate.
- In-scope and out-of-scope functionality.
- Target environment(s): GitLab.com, Dedicated, Self-managed.
- Success metrics & exit criteria
- Quantitative metrics (adoption/usage, performance, error rates).
- Qualitative metrics (minimum # of customer interviews or feedback submissions).
- Time-box for the beta and explicit exit options (proceed to GA, extend beta, or pivot/cancel).
- Target customers and segments
- Link to the enrollment issue.
- Personas / use cases we’re focusing on.
- Operational readiness
- Links to:
- Feature flag configuration and rollout plan.
- Documentation (including any “Beta” labeling).
- Support runbooks / known issues.
- Security / compliance notes where relevant.
- Customer & field engagement
- Channels used (e.g., Customer Programs, First Look, specific CSM/SA lists, Slack channels).
- Cadence for updates and check-ins.
- Risks & constraints
- Known risks and mitigations.
- Technical and business constraints.
- Timeline
- Key dates (recruitment start, earliest go-live, projected beta end, GA decision).
- Tasks checklist
- Opening, running, and closing tasks (similar to other Product templates) with DRIs and checkboxes.
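The sections above could be seeded as a markdown description template (e.g., a file like `.gitlab/issue_templates/Beta_Program_Tracking.md` — the filename and exact headings below are illustrative, not canonical):

```markdown
## Overview

- **Goal / problem statement:**
- **Hypothesis:** <!-- what we expect the beta to validate -->
- **In scope / out of scope:**
- **Target environment(s):** GitLab.com / Dedicated / Self-managed

## Success metrics & exit criteria

| Metric | Type | Target | Dashboard / source |
|--------|--------------|--------|--------------------|
|        | Quantitative |        |                    |
|        | Qualitative  |        |                    |

- **Time-box:** <!-- start / end dates -->
- **Exit options:** proceed to GA / extend beta / pivot or cancel

## Target customers and segments

- Enrollment issue: <!-- link -->
- Personas / use cases:

## Operational readiness

- Feature flag configuration and rollout plan: <!-- link -->
- Documentation (including "Beta" labeling): <!-- link -->
- Support runbooks / known issues: <!-- link -->
- Security / compliance notes: <!-- link, if relevant -->

## Timeline

| Milestone          | Date |
|--------------------|------|
| Recruitment start  |      |
| Earliest go-live   |      |
| Projected beta end |      |
| GA decision        |      |

## Tasks

- [ ] Opening tasks (DRI: `@...`)
- [ ] Running tasks (DRI: `@...`)
- [ ] Closing tasks (DRI: `@...`)
```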
2.2 Customer enrollment issue (template)
Purpose: Central place to track which customers are enrolled and who their internal owners are.
Suggested structure:
- Enrollment criteria
- Eligibility requirements (tier/plan, environment, deployment size, etc.).
- Customer list
- Table or checklist including:
- Customer / namespace / instance.
- Environment (GitLab.com / Dedicated / Self-managed + version).
- CSM / SA / AE owner.
- Status (invited / interested / active / inactive / exited).
- Notes (e.g., constraints, special asks).
- Outreach & enablement
- Links to outreach email templates or decks.
- Links to any Customer Programs issues involved in recruitment.
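A minimal sketch of the enrollment template, assuming a table-based customer list (headings and status values are illustrative):

```markdown
## Enrollment criteria

<!-- Tier/plan, environment, deployment size, and other eligibility requirements. -->

## Customer list

| Customer / namespace / instance | Environment (+ version) | CSM / SA / AE owner | Status | Notes |
|---------------------------------|-------------------------|---------------------|--------|-------|
|                                 | GitLab.com / Dedicated / Self-managed | `@...` | invited / interested / active / inactive / exited | |

## Outreach & enablement

- Outreach email templates / decks: <!-- link -->
- Customer Programs recruitment issues: <!-- link -->
```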
2.3 Feedback issue (template)
Purpose: Central place to aggregate and structure feedback.
Suggested structure:
- Summary and key insights
- Running summary of themes and key learnings.
- Feedback log
- Structured sections (or a table) for:
- Customer identifier.
- Source (CSM call, Support ticket, survey, GitLab issue, etc.).
- Category (bug, UX, docs, performance, configuration, feature gap).
- Severity / impact.
- Link to any follow-up issue(s).
- Actions
- Checklists linking to MRs/issues created from the feedback.
- Explicit callout of which items are must-fix before GA vs. can be iterated on after.
- Retrospective
- What worked well in the program.
- What we would change next time.
- Suggested updates to the handbook or templates.
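The feedback template could follow the same pattern — a sketch, with illustrative headings and category values:

```markdown
## Summary and key insights

<!-- Running summary of themes and key learnings; update as feedback arrives. -->

## Feedback log

| Customer | Source | Category | Severity / impact | Follow-up issue(s) |
|----------|--------|----------|-------------------|--------------------|
|          | CSM call / Support ticket / survey / GitLab issue | bug / UX / docs / performance / configuration / feature gap | | |

## Actions

- [ ] Must-fix before GA: <!-- linked issues/MRs -->
- [ ] Can iterate after GA: <!-- linked issues/MRs -->

## Retrospective

- What worked well:
- What we would change next time:
- Suggested handbook/template updates:
```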
3. Metrics and instrumentation expectations
Clarify that:
- Every beta program must define at least one adoption/usage metric and one quality/reliability metric at the start.
- Metrics should be instrumented (or at minimum, estimable) before recruitment begins.
- Beta exit decisions should explicitly reference:
- Whether metrics met, exceeded, or missed expectations.
- Whether we met qualitative feedback volume/coverage goals.
We can link these expectations back to the existing Experiment and beta exit criteria section so that product teams have a single conceptual model for readiness.
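As a concrete (hypothetical) illustration, a tracking issue's metrics section might record these expectations and the eventual exit decision like so — all targets and numbers below are made up:

```markdown
## Success metrics & exit criteria

- **Adoption metric:** weekly active projects using the feature ≥ N by end of beta.
- **Quality metric:** error rate on key requests below an agreed threshold over the final two weeks.
- **Qualitative goal:** minimum number of customer interviews and feedback submissions.

### Exit decision (filled in at close)

- Adoption: met / exceeded / missed (actual: ...)
- Quality: met / exceeded / missed (actual: ...)
- Qualitative coverage: met / missed
- Decision: proceed to GA / extend beta / pivot or cancel, with rationale.
```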
4. Customer, field, and support engagement
Document expectations for:
- Customer Programs / First Look:
- When and how to engage them to recruit or coordinate participants.
- Customer Success / Sales / Solutions Architects:
- How to invite them into the enrollment issue and communicate expectations (what they should say to customers, what they should bring back).
- Support / Security / Reliability:
- When to inform them of a new beta.
- How to handle support for Experimental vs. Beta vs. GA features.
- Any special handling for vulnerabilities in Experimental/Beta features.
5. Stage-specific extensions
While the core process should live in a cross-stage Product handbook location, each stage (starting with Package) can add a short, stage-specific section that:
- Links to the canonical “Running product beta programs” handbook page.
- Documents any stage-specific nuances (e.g., Package might call out self-managed vs. GitLab.com registries, migration workloads, or storage/performance considerations).
- Provides links to example beta program issues from that stage.
Proposed handbook placement
Primary location (canonical process):
- Product handbook → Product Development → Product Development Flow
- Add a subsection under or near “Beta features” / “Experiment and beta exit criteria” titled “Running product beta programs”.
- This is where we describe the process, roles, lifecycle, and templates.
Stage-specific link (Package):
- Product handbook → GitLab the Product → Categories → Package (Package stage handbook)
- Add a short “Beta programs” section that:
- States that Package follows the standard beta program process.
- Links to the canonical process page and relevant issue templates.
- Lists example Package beta programs as references.
Tasks
- Align on DRI(s) for authoring the new handbook content (likely Product leadership + a few PMs who have run betas recently).
- Draft the “Running product beta programs” section on the Product Development Flow page.
- Create or update issue templates:
- Beta Program tracking issue template.
- Customer enrollment issue template.
- Feedback issue template.
- Add a “Beta programs” section to the Package stage handbook that links to the canonical process and references example betas.
- Socialize the new process in #product, #package, and relevant CS/field channels.
- After the next beta using this framework, run a quick retro and update the handbook/templates based on learnings.