Analytics Instrumentation Planning and Prioritisation FY25
Problem
We are currently juggling multiple initiatives, which makes it hard to stay focused and efficient. The goal of this issue is to outline the key high-level projects under the Analytics Instrumentation team's purview and add prioritisation scores to clarify the sequence in which we intend to tackle these projects.
Projects and Prioritisation
RICE framework scoring details here
| Project | Reach | Impact | Confidence | Effort | Score | Notes |
|---|---|---|---|---|---|---|
| Cover product usage data tracking beyond the GitLab instance | 10 | 3 | 0.8 | 3 | 8 | Marking reach and impact as high given the current focus on AI and potential growth in Cloud Connector features. The requirement is quite clear, but I'm not sure we know what the specific solution should be, so marking confidence as Medium. Effort will mostly revolve around communication with the Data team. |
| | 10 | 3 | 0.8 | 4 | 6 | |
| Track feature instrumentation coverage and build solutions to improve it | 10 | 3 | 0.5 | 3 | 5 | Given the low instrumentation rates we are currently seeing, I'm marking reach and impact on the higher end. Confidence is low since we are not fully clear on what is stopping teams from instrumenting, nor on what the specific solution should be. Effort is based on the assumption that we first need to link metrics/events to features and then build automation around that. |
| | 10 | 2.5 | 0.5 | 4 | 3.125 | Reach score of 10 because it impacts all our internal users. Impact score between Medium and Massive (×2.5) because of the potential impact of better data-based decisions within GitLab. Confidence is low (50%) because the problems and solutions will become more apparent once we get more teams to attempt instrumenting features. Effort is based on the assumption that it will mostly be many smaller individual changes. |
| | 1.5 | 2 | 1 | 1 | 3 | Marking reach as small since I think we cover most significant use cases covered by the previous method. Impact is low since teams can always go back to using the old method and it's not a blocker. Confidence is high since we know what the specific use cases are. Effort is comparatively low if we assume we just add the functionality in the same way as before. |
| | 3 | 3 | 0.5 | 2 | 2.25 | Marking reach and impact higher since these are critical metrics that we eventually want to surface to external customers. Marking confidence low since we are not sure of the solution. There are only 24 event-based customer health score metrics left that are not internal events. Database-based metrics can't be easily migrated. |
| Process in place to ensure broken/duplicate metrics are regularly cleaned from the system | 3 | 3 | 0.5 | 6 | 0.75 | Data quality impacts all users and is critical to building trust in the data. Effort is based on first creating more advanced detection mechanisms and then automated quarantining or similar. |
| Completely remove outdated redisHLL instrumentation from codebase | 3 | 2 | 0.8 | 12 | 0.4 | Marking reach as significant (~25% to ~50%) since I'm not sure what % of our users tend to pick an older instrumentation method vs internal events by default. Impact is high because it can be confusing, and because of the impact on Cells (High = 2×). ~1200 redis_hll metrics are left. Ideally, this would be an effort we'd share with the groups owning the metrics, which would reduce effort for us. Potentially necessary if we do not want to build redis_hll workarounds for Cells. |
| | 3 | 2 | 0.5 | 10 | 0.3 | Reach is significant since interviews with the CRM team show a desire among SM enterprise customers to access more granular usage data to understand feature-level adoption. Medium impact since visibility into more granular feature adoption has the potential to drive revenue. Confidence is low since we don't know what % of our customers show interest in this data. High effort since this is a complete change to the event collection architecture on self-managed. |
| | 3 | 2 | 0.5 | 12 | 0.25 | Reach is significant since we see some demand from customers for this data. Impact is high since customers use this to track and encourage usage, potentially increasing revenue. Effort assumes full access to collected usage data, spanning collected events and database-based metrics. |
| Completely remove outdated Snowplow options from codebase | 1.5 | 1 | 0.5 | 8 | 0.09375 | Marking reach and impact lower since it seems less used than redis_hll. Dependent on covering all use cases with InternalEvents. Snowplow tracking previously did not require event definitions, so we do not have an easy view of how widespread it is, but it seems less used than redis_hll. |
*Note: Product Analytics initiatives are not on this list because of our ongoing commitment to support PA.*
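For reference, the scores in the table follow the standard RICE formula: score = (Reach × Impact × Confidence) / Effort. The sketch below shows the calculation on a hypothetical subset of the projects above (the helper name and the truncated project labels are my own, not part of the plan):

```python
# Minimal sketch of the RICE calculation used in the table above:
# score = (reach * impact * confidence) / effort.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Return the RICE score: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# (project, reach, impact, confidence, effort) -- values copied from the table.
projects = [
    ("Cover product usage data tracking beyond the GitLab instance", 10, 3, 0.8, 3),
    ("Track feature instrumentation coverage", 10, 3, 0.5, 3),
    ("Completely remove outdated redisHLL instrumentation", 3, 2, 0.8, 12),
]

# Highest score first, matching the ordering of the table.
for name, *factors in sorted(projects, key=lambda p: rice_score(*p[1:]), reverse=True):
    print(f"{rice_score(*factors):>7.3f}  {name}")
```

Sorting descending by score reproduces the project order used in the table.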
Desired Outcome
- ✅ Prioritised list of projects based on the RICE framework, with issue/epic placeholders for sub-projects
- ✅ List our 1-year and mid-term goals and document them in our handbook
Edited by Tanuja Jayarama Raju