UX: AI Impact analytics - Per-user metrics
Problem
TL;DR: "Are my developers using Duo? How are they using it?"
Customers have asked for insight into:
- Which Duo features are users using?
- How frequently are they using them?
- Which assigned seats are not being utilized by the individuals to whom they are assigned?
Prior problem statement
Since AI features are enabled at the user level, SW leaders want to understand which users are leveraging AI features and whether their performance has changed over time as a result of AI usage. They also want to track the progress of AI adoption to evaluate its potential.
SW Leaders need to understand how users are interacting with AI features. "Are certain individuals struggling to use the features?"
Customers also want the ability to identify light vs heavy users of Duo to efficiently assign licenses.
Based on this customer feedback: 1, 2, 3, 4
Summary of questions to answer:
- Of the assigned seats, which users are leveraging AI features? (Why: to assign licenses efficiently)
- Who are the light vs. heavy users of Duo Pro? (Why: to assign licenses efficiently)
- Are certain individuals struggling to use the features? (Why: to be able to reach out and learn more)
- Has the performance of certain individuals changed over time as a result of AI usage? (Why: to measure ROI)
Open question: Should we focus on "Team Usage" or "Individual User" metrics?
A: We should avoid tracking individual user metrics. Here's why:
- Individual metrics can be misleading:
  - Acceptance rates or code output at the individual level do not accurately reflect productivity or impact.
  - Developers may accept AI-generated code but then heavily modify it, or reject it while still using its ideas, making raw acceptance rates unreliable.
  - Measuring developers by output (e.g., lines of code, commits) does not account for code quality, maintainability, or collaboration.
- Measuring individuals can harm team dynamics: evaluating individual performance fosters competition over collaboration, which can weaken team performance.
- The goal of AI Impact is business impact, not isolated productivity:
  - Software development is about delivering customer value, not just maximizing individual efficiency.
  - AI makes some individual activities faster, but true efficiency comes from how teams integrate AI into workflows.
Persona
Dakota (Application Development Director)
Challenge:
Balancing exposing enough detailed data to help customers understand the impact of AI features on their organization, while not exposing individual-level metrics that may be misinterpreted or used by management for performance evaluation and/or as "developer productivity" metrics.
Proposal
- Which Duo features are being used and how often? 🖼️ Option B
- Which assigned seats are not being utilized?
Explorations
Which Duo features are being used and how often?
Option A
Use a line chart to visualize the usage of each feature (sketch below).
🖼️ #455860[Option_A.png]
- Pros: Can visualize feature adoption over time
- Cons: Scalability and noise will become an issue as more features are added
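A minimal eCharts sketch of this option. The feature names, monthly counts, and the `option-a` container ID are placeholder assumptions, not real metrics or dashboard markup:

```ts
// Sketch only: per-feature usage over time as line series.
import * as echarts from 'echarts';

const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May'];
// Placeholder counts; the real data source is still open.
const usageByFeature: Record<string, number[]> = {
  'Code Suggestions': [120, 150, 170, 210, 260],
  'Duo Chat': [40, 55, 80, 95, 130],
};

const chart = echarts.init(document.getElementById('option-a')!);
chart.setOption({
  tooltip: { trigger: 'axis' },
  legend: { data: Object.keys(usageByFeature) },
  xAxis: { type: 'category', data: months },
  yAxis: { type: 'value', name: 'Monthly users' },
  // One line per feature: readable now, noisy as more features land (the stated con).
  series: Object.entries(usageByFeature).map(([name, data]) => ({
    name,
    type: 'line' as const,
    data,
  })),
});
```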
Option B
Use a column chart to visualize the usage of each feature (sketch below).
🖼️ #455860[Option_B.png]
- Pros: Information is easier to parse; can also support visualizing both Unique and Returning users in the same panel
- Cons: Not possible to see change over time (which may not be important; requires validation)
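A comparable eCharts sketch for this option, with the Unique/Returning split as two grouped bar series. Feature names, counts, and the `option-b` container ID are placeholder assumptions:

```ts
// Sketch only: grouped columns, one pair of bars per feature.
import * as echarts from 'echarts';

const features = ['Code Suggestions', 'Duo Chat', 'Root Cause Analysis'];

const chart = echarts.init(document.getElementById('option-b')!);
chart.setOption({
  tooltip: { trigger: 'axis' },
  legend: { data: ['Unique users', 'Returning users'] },
  xAxis: { type: 'category', data: features },
  yAxis: { type: 'value', name: 'Users' },
  series: [
    // Placeholder counts per feature, in the same order as `features`.
    { name: 'Unique users', type: 'bar', data: [260, 130, 45] },
    { name: 'Returning users', type: 'bar', data: [180, 90, 20] },
  ],
});
```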
Option C
Feature-specific engagement: Code Suggestions Engagement and Effectiveness Overview. Add a new panel to the AI Impact analytics dashboard showing a bubble chart visualization (config sketch after the explanation below).
🖼️ #455860[Option_C.png]
- Pros: Visualizes multiple dimensions in a single visualization
- Cons: Too much information to parse
Visualization explanation:
- X-Axis: Number of Contributions
- Y-Axis: Acceptance rate %
- Bubble Size: Number of Suggestions Promoted
- Bubble Color: Number of chat prompts
- Data source: https://docs.google.com/spreadsheets/d/1yN49kwVV02es1YyhBzPYFqlEouZb0blQQejXWdzyL68/edit?gid=0#gid=0
- eCharts example: https://echarts.apache.org/examples/en/editor.html?c=scatter-logarithmic-regression
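To make the four mapped dimensions concrete, a hedged eCharts sketch in the spirit of the linked example. The per-user tuples, value ranges, colors, and the `option-c` container ID are illustrative assumptions; acceptance rate is assumed to mean accepted suggestions as a percentage of suggestions shown:

```ts
// Sketch only: one bubble per user, mapping the four described dimensions.
import * as echarts from 'echarts';

// [contributions, acceptance rate %, suggestions promoted, chat prompts]
const users: number[][] = [
  [12, 35, 40, 5],
  [48, 22, 120, 30],
  [75, 18, 200, 12],
];

const chart = echarts.init(document.getElementById('option-c')!);
chart.setOption({
  xAxis: { type: 'value', name: 'Contributions' },
  yAxis: { type: 'value', name: 'Acceptance rate %' },
  // Bubble color: chat prompts (dimension index 3 of each datum).
  visualMap: {
    dimension: 3,
    min: 0,
    max: 50,
    inRange: { color: ['#c6dbef', '#2171b5'] },
  },
  series: [
    {
      type: 'scatter',
      data: users,
      // Bubble size: suggestions promoted (index 2), square-rooted so
      // bubble area rather than radius tracks the value.
      symbolSize: (value: number[]) => Math.sqrt(value[2]) * 2,
    },
  ],
});
```

The linked eCharts example additionally overlays a logarithmic regression line; that could be layered on later, but this sketch keeps only the four mapped dimensions.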
Option D
Use a stacked column chart to visualize the usage of each feature (sketch below).
🖼️ #455860[Option_D.png]
- Pros: This is the same visualization we use internally; can also support visualizing both Unique and Returning users in the same panel
- Cons: Stacked column charts can be harder to interpret at a glance
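In eCharts, Option D differs from Option B's config only in that both bar series share a `stack` key; a self-contained sketch with the same placeholder data and a hypothetical `option-d` container ID:

```ts
// Sketch only: stacked columns, one column per feature.
import * as echarts from 'echarts';

const features = ['Code Suggestions', 'Duo Chat', 'Root Cause Analysis'];

const chart = echarts.init(document.getElementById('option-d')!);
chart.setOption({
  tooltip: { trigger: 'axis' },
  legend: { data: ['Unique users', 'Returning users'] },
  xAxis: { type: 'category', data: features },
  yAxis: { type: 'value', name: 'Users' },
  series: [
    // The shared `stack` value is what stacks the two series into one column,
    // trading per-series readability for a visible total (the stated con).
    { name: 'Unique users', type: 'bar', stack: 'users', data: [260, 130, 45] },
    { name: 'Returning users', type: 'bar', stack: 'users', data: [180, 90, 20] },
  ],
});
```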
Assigned seats - per-user usage (drill-down explorations)
Assigned seats - Option A
Assigned seats - Option B
Assigned seats - Option C
See archived designs
Option 2:
Adding CS engagement to Contribution analytics:

| Preview |
|---|
| 🖼️ |
| For the latest design mockups, see issue: UX: AI Impact analytics - Contribution analytic... (#451927 - closed) |
Iteration path:
- Adding "Duo Pro seat is assigned" (yes/no) and "Code suggestions accepted %" (%17.0)
- Adding pagination to Contribution analytics as suggested here.
- Adding the sparklines / trendlines (minimal config sketch below).
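A hedged sketch of a chrome-free trendline that could sit in a table cell of Contribution analytics. The `cs-sparkline` container ID, cell dimensions, and monthly values are placeholder assumptions:

```ts
// Sketch only: a sparkline-style line with axes and chrome hidden.
import * as echarts from 'echarts';

const spark = echarts.init(document.getElementById('cs-sparkline')!, null, {
  width: 120, // sized to fit a table cell
  height: 24,
});
spark.setOption({
  grid: { left: 0, right: 0, top: 2, bottom: 2 },
  xAxis: { type: 'category', show: false, data: ['Jan', 'Feb', 'Mar', 'Apr'] },
  yAxis: { type: 'value', show: false },
  series: [{
    type: 'line',
    data: [18, 22, 19, 27], // e.g. monthly Code Suggestions accepted %
    symbol: 'none',
    lineStyle: { width: 1 },
  }],
});
```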
For the latest design mockup, see Option D in UX: AI Impact analytics - Contribution analytic... (#451927 - closed)
Design source
Proof point for workflow/solution validation:
Related topic
Frontend: Code suggestions acceptance by language (#454809 - closed) • Rudy Crespo • 18.5
Open questions
- How important is it to show usage over time for specific months vs. showing the overall trend?
- For assigned-seat, user-specific usage, what information makes sense to show on the dashboard vs. in the fulfillment / Duo settings area?
