Create feature taxonomy and enable granular usage tracking

Context

This issue is a follow-up from the Core DevOps Instrumentation Audit initiative (gitlab-org/gitlab#547656), which revealed important gaps in our feature taxonomy and instrumentation framework.

Problem Statement

The Core DevOps instrumentation audit identified two interconnected problems that prevent us from understanding and actively monitoring feature adoption and value:

Lack of Engineering-Focused Feature Taxonomy

While features.yml exists, it fails to meet engineering instrumentation needs:

  • Not up-to-date with current features
  • Does not align with engineering expectations for instrumentation (related thread)
  • Possibly optimized for marketing terminology rather than technical implementation (related comment)

Example from audit: For Create:Source Code, MR approval rules are listed 4 times with different names, while core features like repository viewing (blame, history, locking) are missing entirely.

Missing Feature-Level Instrumentation Granularity

Our current instrumentation framework lacks the ability to track individual feature usage:

  • Metrics link only to groups and broad feature categories (which may contain dozens of features), not specific features
  • Identifying feature usage requires manual interpretation of metric names or reviewing the code to understand what's being instrumented

Example from audit: The Package Registry category contains 11+ distinct features, but metrics can only be attributed to "Package Registry" as a whole, making it impossible to programmatically and unambiguously determine, for example, which package types drive usage.

Impact

Data Quality & Trust

  • Teams implement instrumentation without sufficient granularity or feature attribution
  • Stakeholders cannot make data-driven decisions with ambiguous feature definitions

Operational Inefficiencies

  • Engineers must manually interpret metric names or review code to determine feature instrumentation
  • Feature categories exist without documented inventories of constituent features
  • The recent audit required teams to spend weeks in a mostly manual process to identify features, map metrics, and assess instrumentation gaps at a feature level
  • Without automation (which requires a consistent feature taxonomy and feature-level metrics attribution), we can only produce static, one-time snapshots like https://instrumentation-audit-054c5e.gitlab.io/ rather than real-time dashboards with feature-level insights

Strategic Risks

  • Cannot appropriately measure ROI on individual feature investments
  • Features marketed to customers may not map to how we measure them internally
  • Cannot easily make informed decisions about feature maturity progression (Experimental → Beta → GA)
  • Cannot proactively identify underperforming features or adoption trends without manual intervention

Auditing and Accessibility

  • Manual audits will remain the only option, requiring significant team effort for each assessment
  • Feature usage data will remain inaccessible to non-technical stakeholders who need it for product/management decisions

Proposed Solution

Beyond addressing the problems highlighted above, the proposal below would allow us to transform the manual audit process into an automated system that provides real-time feature usage observability, similar to the static dashboard created as part of the Core DevOps instrumentation audit (here).

Phase 1: Establish Feature Taxonomy

  1. Define Engineering Feature Hierarchy

    Each group identifies and lists all of its features in a concise, unambiguous manner.

  2. Create Central Feature Inventory

    Identified features are added to a central location that adheres to the following requirements:

    • Machine-readable format (YAML/JSON)
    • Version controlled in GitLab repository
    • Kept up to date alongside relevant code changes

    A sample structure/hierarchy for this file could look as follows:

    Stage (e.g., Create)
    └── Group (e.g., Code Review)
        └── Feature (e.g., Merge Requests)
            └── Action (e.g., Approve MR) [optional]

Phase 2: Enable Feature-Level Instrumentation

  1. Establish Metrics Naming Convention (optional, but potentially useful)

    • Define a consistent format for metric names that respects the new feature taxonomy. Example:

      [stage].[group].[feature].[action or other unambiguous suffix]

    • Update development documentation and enforce naming convention for new metrics
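A convention like the one above lends itself to automated enforcement. As a rough sketch (the regex and segment rules here are hypothetical, not a settled format), a validator could parse metric names back into taxonomy segments:

```python
import re

# Hypothetical convention: [stage].[group].[feature].[suffix], where each
# segment is lowercase snake_case and the trailing suffix is optional.
METRIC_NAME_RE = re.compile(
    r"^(?P<stage>[a-z][a-z0-9_]*)"
    r"\.(?P<group>[a-z][a-z0-9_]*)"
    r"\.(?P<feature>[a-z][a-z0-9_]*)"
    r"(\.(?P<suffix>[a-z][a-z0-9_]*))?$"
)

def parse_metric_name(name: str):
    """Return the taxonomy segments of a metric name, or None if it
    does not follow the naming convention."""
    match = METRIC_NAME_RE.match(name)
    return match.groupdict() if match else None

# Usage: a conforming name yields its segments; anything else yields None.
segments = parse_metric_name("create.code_review.merge_requests.approve")
# segments["feature"] == "merge_requests"
```

Running such a check in CI would make the convention self-enforcing for new metrics rather than relying on review discipline.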

  2. Enable Feature Attribution

    • Add feature and action attributes to metrics definition schema
    • Update metrics catalog to display and allow filtering metrics by these attributes
    • Add validations to prevent metrics without proper attribution
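The validation step could cross-check each metric definition's new attributes against the central inventory. A minimal sketch, assuming a dict-shaped inventory and illustrative field names (not the actual metrics definition schema):

```python
# Hypothetical inventory keyed by (stage, group); values are known features.
FEATURE_INVENTORY = {
    ("create", "code_review"): {"merge_requests", "code_suggestions"},
}

def validate_metric(defn: dict) -> list:
    """Return a list of attribution errors for one metric definition."""
    errors = []
    stage, group = defn.get("stage"), defn.get("group")
    feature = defn.get("feature")
    known_features = FEATURE_INVENTORY.get((stage, group))
    if known_features is None:
        errors.append(f"unknown stage/group: {stage}/{group}")
    elif feature not in known_features:
        errors.append(f"feature {feature!r} not in inventory for {stage}/{group}")
    return errors
```

A validation like this, run against every metric definition, would prevent new metrics from landing without proper attribution.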

Phase 3: Retroactive Feature Attribution

  1. Map existing metrics to newly defined features
  2. Update relevant dashboards to use new taxonomy
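For existing metrics whose names predate the convention, the mapping would likely be a one-off, reviewable lookup table rather than anything automatic. A sketch under that assumption (the key path and feature tuple below are made up):

```python
# Illustrative one-off mapping from legacy metric key paths to the
# (stage, group, feature) tuples of the new taxonomy.
LEGACY_METRIC_TO_FEATURE = {
    "counts.merge_request_approvals": ("create", "code_review", "merge_requests"),
}

def attribute_legacy_metric(key_path: str):
    """Look up the feature attribution for an existing metric, if mapped."""
    return LEGACY_METRIC_TO_FEATURE.get(key_path)
```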

Success Criteria

  • Feature inventories complete for all groups
  • Feature taxonomy documented and approved
  • Central feature inventory operational and integrated with CI/CD
  • New metrics follow naming convention and include feature attribution
  • Metrics catalog displays and filters by feature/action
  • Existing metrics have feature attribution
  • Real-time dashboard in Tableau showing feature-level usage

Next Steps

  1. Gather feedback on this proposal
  2. Create an implementation plan if/once approved