VersionDot: Enhanced Service Ping Visualization and Duo Adoption Insights

Summary

As a Customer Success Manager working with self-managed GitLab customers, I need enhanced capabilities in VersionDot to better understand customer adoption patterns, particularly for Ultimate tier features and GitLab Duo. Currently, VersionDot lacks critical functionality for organizing metrics, searching data, and visualizing event-based insights that are essential for measuring customer success and ROI.

Problem Statement

VersionDot currently has several limitations that prevent Customer Success Managers, Architects, and Engineers (CSM/A/Es) from effectively tracking and communicating customer adoption:

  1. Unorganized Metrics: Hundreds of Service Ping metrics are displayed without categorization, making it difficult to find relevant data quickly
  2. No Search Functionality: Without full-text search, locating specific metrics or instances requires manual scrolling and filtering
  3. Missing Event Data: Service Ping event data is collected but not displayed in VersionDot, hiding valuable usage patterns
  4. Limited Duo Insights: No visibility into GitLab Duo/DAP feature adoption, retention, or effectiveness metrics
  5. Incomplete Runner Data: Instance-wide runner metrics may not be fully represented

Proposed Solutions

1. Metric Categorization

Goal: Organize Service Ping metrics into logical categories for easier navigation

Implementation Ideas:

  • Group metrics by DevOps stage (Plan, Create, Verify, Secure, etc.)
  • Create category filters: "Security & Compliance", "CI/CD", "Duo Features", "Infrastructure", "Collaboration"
  • Add a hierarchical navigation sidebar
  • Enable collapsible sections for each category
  • Provide a "Favorites" or "Pinned Metrics" feature for frequently accessed data

User Value: CSMs can quickly navigate to relevant metrics during customer calls without searching through hundreds of unorganized data points
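
To keep the schema configurable and extensible (see Technical Considerations), categorization could be driven by a simple mapping from metric key-path prefixes to categories. The Python sketch below is illustrative only; the category names and Service Ping key paths are assumptions, not the actual VersionDot data model:

```python
# Minimal sketch of a configurable categorization schema for Service Ping
# metrics. Category names and key-path prefixes are illustrative
# assumptions, not the actual VersionDot data model.

CATEGORY_SCHEMA = {
    "Security & Compliance": ["counts.sast_jobs", "usage_activity_by_stage.secure."],
    "CI/CD": ["counts.ci_pipelines", "usage_activity_by_stage.verify."],
    "Duo Features": ["counts.code_suggestions", "usage_activity_by_stage.ai_powered."],
    "Infrastructure": ["counts.clusters", "topology."],
    "Collaboration": ["counts.issues", "usage_activity_by_stage.plan."],
}

def categorize(key_path):
    """Return the first category whose prefix matches the metric key path."""
    for category, prefixes in CATEGORY_SCHEMA.items():
        if any(key_path.startswith(p) for p in prefixes):
            return category
    return "Uncategorized"

print(categorize("usage_activity_by_stage.secure.sast_scans"))  # Security & Compliance
```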


2. Full-Text Search

Goal: Enable rapid discovery of specific metrics, instances, or data points

Implementation Ideas:

  • Global search bar with autocomplete
  • Search across:
    • Metric names and descriptions
    • Instance hostnames
    • License information
    • Metric values (e.g., find all instances with SAST jobs > 100)
  • Advanced filters: date ranges, license tiers, version numbers
  • Search result highlighting
  • Recent searches history

User Value: Reduce time spent locating specific data from minutes to seconds
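
As a rough illustration of the intended behavior, the sketch below combines case-insensitive full-text matching with a value filter (the "SAST jobs > 100" example above). The record shape and field names are assumptions for illustration only:

```python
# Minimal sketch of full-text search over metric records with an optional
# value filter. The record shape and field names are illustrative.

metrics = [
    {"name": "counts.sast_jobs", "description": "Number of SAST jobs run", "value": 250},
    {"name": "counts.dast_jobs", "description": "Number of DAST jobs run", "value": 12},
]

def search(query, min_value=None):
    """Case-insensitive match on name/description, optionally filtered by value."""
    q = query.lower()
    hits = [m for m in metrics
            if q in m["name"].lower() or q in m["description"].lower()]
    if min_value is not None:
        hits = [m for m in hits if m["value"] > min_value]
    return hits

print(search("sast", min_value=100))  # -> only the record with 250 SAST jobs
```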


3. Service Ping Event Data Visualization

Goal: Surface event-based metrics that are currently collected but not displayed

Context: Service Ping is designed for operational/static data, but event summaries provide critical adoption insights

Implementation Ideas:

  • Create an "Events Summary" dashboard showing:
    • Event counts by category (weekly/monthly aggregations)
    • Trend lines for key events over time
    • Top 10 most-used features by event count
  • Add event data to existing metric views where relevant
  • Provide downloadable event reports (CSV/JSON)
  • Include event metadata: timestamps, user counts, frequency patterns

User Value: Understand how customers use GitLab, not just what they have configured
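
A minimal sketch of the weekly aggregation behind such an "Events Summary" view, assuming events are stored with a name and a date (the event names and record shape here are illustrative, not the actual Service Ping event format):

```python
# Minimal sketch of weekly event aggregation for an "Events Summary"
# dashboard. Event names and record shape are illustrative assumptions.

from collections import Counter
from datetime import date

events = [
    {"name": "code_suggestion_shown", "occurred_on": date(2024, 6, 3)},
    {"name": "code_suggestion_shown", "occurred_on": date(2024, 6, 4)},
    {"name": "duo_chat_question", "occurred_on": date(2024, 6, 5)},
]

def weekly_counts(events):
    """Count events per (ISO year, ISO week, event name) bucket."""
    buckets = Counter()
    for e in events:
        year, week, _ = e["occurred_on"].isocalendar()
        buckets[(year, week, e["name"])] += 1
    return buckets

for (year, week, name), count in sorted(weekly_counts(events).items()):
    print(f"{year}-W{week:02d} {name}: {count}")
```

The same buckets feed the trend lines and "top 10" views: `Counter.most_common(10)` over the per-name totals yields the most-used features by event count.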


4. Ultimate Adoption & Duo Feature Tracking

Goal: Provide comprehensive visibility into Ultimate tier and GitLab Duo adoption

Ultimate Adoption Dashboard

Metrics to Include:

  • DevSecOps scorecard components:
    • Security scanner usage (SAST, DAST, Dependency Scanning, Container Scanning)
    • Compliance framework adoption
    • Advanced CI/CD features (parent-child pipelines, DAG pipelines)
  • Advanced planning features (Epics, Roadmaps, Iterations)
  • Portfolio management metrics
  • Geo replication status (if applicable)
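
For illustration, a scorecard roll-up can be as simple as the share of scanner types an instance has enabled; the flags and equal weighting below are assumptions, since the real scorecard definition may differ:

```python
# Minimal sketch of a DevSecOps scorecard roll-up: the share of security
# scanner types enabled on an instance. Flags and equal weighting are
# illustrative assumptions about the scorecard definition.

scanners = {
    "sast": True,
    "dast": False,
    "dependency_scanning": True,
    "container_scanning": True,
}

coverage = sum(scanners.values()) / len(scanners)
print(f"security scanner coverage: {coverage:.0%}")  # 75%
```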

Duo/DAP Feature Dashboard

Critical Metrics:

Code Suggestions:

  • Total suggestions offered vs. accepted (acceptance rate)
  • Suggestions per user (adoption breadth)
  • Retention: % of users who continue using the feature after their first week/month
  • Language breakdown of suggestions
  • Time saved estimates
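
To make the acceptance-rate and retention definitions concrete, here is a minimal sketch; the event shape (user, action, day) is an assumption, and the real Code Suggestions telemetry fields may differ:

```python
# Minimal sketch of acceptance rate and week-one retention from raw
# suggestion events. The (user, action, day) event shape is an assumption;
# real Code Suggestions telemetry may differ.

from datetime import date, timedelta

suggestion_events = [
    {"user": "alice", "action": "shown",    "day": date(2024, 6, 3)},
    {"user": "alice", "action": "accepted", "day": date(2024, 6, 3)},
    {"user": "bob",   "action": "shown",    "day": date(2024, 6, 3)},
    {"user": "alice", "action": "shown",    "day": date(2024, 6, 12)},
    {"user": "alice", "action": "accepted", "day": date(2024, 6, 12)},
]

shown = sum(1 for e in suggestion_events if e["action"] == "shown")
accepted = sum(1 for e in suggestion_events if e["action"] == "accepted")
print(f"acceptance rate: {accepted / shown:.0%}")  # 67%

# Retention: users still accepting suggestions more than a week after first use.
first_use = {}
for e in sorted(suggestion_events, key=lambda e: e["day"]):
    first_use.setdefault(e["user"], e["day"])
retained = {e["user"] for e in suggestion_events
            if e["action"] == "accepted"
            and e["day"] > first_use[e["user"]] + timedelta(days=7)}
print(f"week-one retention: {len(retained) / len(first_use):.0%}")  # 50%
```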

Duo Chat:

  • Active users (DAU/WAU/MAU)
  • Questions asked per user
  • Response satisfaction indicators (if available)

Duo Workflow:

  • Workflows created vs. completed
  • Success rate of automated workflows

Code Review:

  • MR reviews suggested by Duo
  • Acceptance rate of Duo-suggested reviews
  • Time-to-review improvements

Other Duo Features:

  • Test generation usage
  • Code explanation requests
  • Vulnerability explanation usage

AI Gateway Integration (Future Enhancement):

  • Enrich VersionDot with real-time usage data from AI Gateway
  • Cross-reference Service Ping data with AI Gateway metrics for a complete picture
  • Track token usage, model performance, latency metrics

User Value: Self-managed customers need quantifiable ROI metrics to justify their GitLab investment and demonstrate AI adoption to executives
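
To sketch what the enrichment could look like, the example below merges a Service Ping record with AI Gateway usage keyed by instance UUID; both record shapes and field names are hypothetical, and no real AI Gateway API is implied:

```python
# Minimal sketch of cross-referencing Service Ping data with AI Gateway
# usage, joined on instance UUID. Record shapes and field names are
# hypothetical; no real AI Gateway API is implied.

service_ping = {"uuid": "abc-123", "version": "17.1.0", "duo_seats": 50}
ai_gateway = {"uuid": "abc-123", "tokens_used": 1_200_000, "p95_latency_ms": 340}

def enrich(ping, gateway):
    """Merge the two views when they describe the same instance."""
    if ping["uuid"] != gateway["uuid"]:
        raise ValueError("records describe different instances")
    return {**ping, **{k: v for k, v in gateway.items() if k != "uuid"}}

print(enrich(service_ping, ai_gateway))
```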


5. Instance-Wide Runner Metrics

Goal: Provide visibility into runner infrastructure and utilization

Metrics to Include (if within Service Ping scope):

  • Total runners (instance, group, project level)
  • Runner types (Docker, Kubernetes, Shell, etc.)
  • Runner utilization rates
  • Job queue times and execution times
  • Runner version distribution

Note: Verify whether this data is available in Service Ping; if not, document it as out of scope
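
Should the data prove available, the utilization calculation itself is straightforward; the sketch below assumes hypothetical per-runner busy/online fields pending that verification:

```python
# Minimal sketch of a runner-utilization calculation. All fields here are
# hypothetical, pending the Service Ping scope check described above.

runners = [
    {"type": "docker",     "busy_minutes": 900, "online_minutes": 1440},
    {"type": "kubernetes", "busy_minutes": 300, "online_minutes": 1440},
]

for r in runners:
    utilization = r["busy_minutes"] / r["online_minutes"]
    print(f"{r['type']}: {utilization:.0%} utilized")
```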


Use Cases

CSM Use Case: Ultimate Adoption Review

"During quarterly business reviews, I need to show my customer their Ultimate feature adoption trends. With categorized metrics and the Ultimate dashboard, I can quickly pull up their DevSecOps score, Duo adoption rate, and advanced CI/CD usage to demonstrate ROI and identify expansion opportunities."

CSM Use Case: Duo Effectiveness Measurement

"A customer asks: 'Is our team actually using Code Suggestions, and is it helping?' With Duo metrics showing 78% acceptance rate, 45 active users, and retention data, I can confidently demonstrate value and recommend expanding Duo seats."

Customer Use Case: Executive Reporting

"Our VP of Engineering needs to justify our GitLab Ultimate investment. Using VersionDot's event summaries and Duo dashboards, we can show concrete usage metrics: 10,000 code suggestions accepted this month, 30% reduction in code review time, and 85% of teams using security scanners."


Success Metrics

  • Metric Discovery Time: Reduce average time to find specific metrics from 5+ minutes to <30 seconds
  • CSM Adoption: 80%+ of CSMs use VersionDot regularly for customer calls (vs. current manual Service Ping parsing)
  • Customer Engagement: 50%+ of Ultimate customers receive quarterly adoption reports using VersionDot data
  • Duo Visibility: 100% of Duo customers have access to adoption/retention metrics

Technical Considerations

  • Ensure all enhancements maintain data privacy and security standards
  • Consider performance impact of event data aggregation on large instances
  • AI Gateway integration may require new API endpoints and authentication
  • Categorization schema should be configurable/extensible for future GitLab features

Priority & Scope

High Priority:

  1. Metric categorization by topic and by license tier (Free/Premium/Ultimate)
  2. Full-text search (high-impact productivity gain)
  3. Duo adoption dashboard (critical for Duo GTM strategy)

Medium Priority:

  4. Event data visualization (valuable but requires backend work)
  5. AI Gateway integration (future enhancement, requires cross-team coordination)

Lower Priority / Validation Needed:

  6. Runner metrics (verify Service Ping scope)