
"AI Impact" analytics - FAQ


Where can I find a demo of AI Impact Analytics?

You can view a click-through demo in the AI Impact Analytics product tour.

On which deployment types can AI Impact Analytics be configured?

AI Impact Analytics depends on a ClickHouse database, which results in different configuration options for each deployment type:

  1. GitLab.com: AI Impact Analytics is enabled by default.
  2. Self-Managed with ClickHouse: Organizations with an accessible ClickHouse instance can enable AI Impact Analytics by configuring ClickHouse for contribution analytics within their GitLab Self-Managed instance.
  3. Self-Managed without ClickHouse: If ClickHouse is not available, the AiUsageData API can be used to export raw event data for analysis in an external BI tool.
  4. Dedicated: Short-term, the AiUsageData API is available for data exports. Long-term, AI Impact Analytics will be supported; details here.

What metrics are available in AI Impact Analytics?

The AI Impact Analytics Metrics Matrix provides a detailed list of AI Impact metrics, including the supported dimensions and planned roadmap items.

What APIs are available for retrieving AI Impact data?

You can retrieve AI impact metrics using the following GraphQL APIs:

  1. AiMetrics (requires ClickHouse)
  2. AiUserMetrics (requires ClickHouse)
  3. AiUsageData (does not require ClickHouse)

Which AI Impact API should I use?

Here is the AI Impact API feature breakdown:

| Feature | AiUsageData | AiUserMetrics | AiMetrics |
|---------|-------------|---------------|-----------|
| Data storage | PostgreSQL | ClickHouse | ClickHouse |
| Purpose | Provides the raw event data; designed for customers without ClickHouse to gain insights into AI usage. | Query pre-aggregated per-user metrics for Code Suggestions and Duo Chat via the GraphQL API. | Query pre-aggregated group-level metrics for Code Suggestions and Duo Chat via the GraphQL API. |
| Use case | Import events into a BI tool, or write your own scripts to aggregate the data, acceptance rates, and per-user metrics for code suggestion events. | Quickly get a list of Duo users and their frequency of usage for Code Suggestions and Chat. | Powers the AI Impact Analytics dashboard; aggregated and computed metrics are available programmatically via the GraphQL API. |
| Per-user metrics | Yes | Yes | No |
| Retention period | 3 months | Based on ClickHouse configuration | Based on ClickHouse configuration |
| Event types | Code Suggestions: suggestion size, language, user, and type (shown, accepted, rejected). | Metrics are limited to: codeSuggestionsShown, codeSuggestionsAccepted, codeSuggestionAcceptanceRate, duoChatUsers, codeSuggestionUsers. | Metrics are limited to: codeSuggestionsShown, codeSuggestionsAccepted, codeSuggestionAcceptanceRate, duoChatUsers, codeSuggestionUsers. |
| Seat requirement | GitLab Duo Enterprise | GitLab Duo Enterprise | GitLab Duo Pro |
| Minimum version | Introduced in 17.5 behind the flag code_suggestions_usage_events_in_pg; Self-Managed 17.5 and 17.6 customers must enable this flag to activate AiUsageData. In 17.7 a second flag, move_ai_tracking_to_instrumentation_layer, was added; Self-Managed 17.7 and 17.8 customers must also enable it, and in 17.9 it will be enabled by default. For Dedicated customers these flags will be enabled in 17.8 with this upcoming fix. | 17.6 | GitLab Duo Enterprise: 17.4; GitLab Duo Pro: 17.6 |

API References & Sample Queries

You can explore the GraphQL API resources with the interactive GraphQL explorer.
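If you want to run these queries programmatically instead of in the explorer, here is a minimal sketch using Python and the requests library. The token value and group path are placeholders; assume a personal access token with the read_api scope.

import requests

# GitLab's GraphQL endpoint (replace the host for Self-Managed instances).
GRAPHQL_URL = "https://gitlab.com/api/graphql"
TOKEN = "glpat-..."  # placeholder: personal access token with read_api scope

query = """
query($fullPath: ID!) {
  group(fullPath: $fullPath) {
    aiMetrics(startDate: "2024-12-01", endDate: "2024-12-31") {
      codeSuggestionsShownCount
      codeSuggestionsAcceptedCount
    }
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"fullPath": "gitlab-org"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())

The same pattern works for all three APIs; only the query text changes.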

AiUsageData Reference & Sample Query
  1. GraphQL Reference: https://docs.gitlab.com/ee/api/graphql/reference/index.html#aiusagedata
  2. Users running this API must have a Duo Enterprise seat assigned.
  3. Query Example for AiUsageData:
query {
  group(fullPath: "gitlab-org") {
    aiUsageData {
      codeSuggestionEvents {
        nodes {
          event
          timestamp
          language
          suggestionSize
          user {
            username
          }
        }
      }
    }
  }
}
  4. Example Output for AiUsageData:

{
  "data": {
    "group": {
      "aiUsageData": {
        "codeSuggestionEvents": {
          "nodes": [
            {
              "event": "CODE_SUGGESTION_SHOWN_IN_IDE",
              "timestamp": "2024-12-22T18:17:25Z",
              "language": null,
              "suggestionSize": null,
              "user": {
                "username": "jasbourne"
              }
            },
            {
              "event": "CODE_SUGGESTION_REJECTED_IN_IDE",
              "timestamp": "2024-12-22T18:13:45Z",
              "language": null,
              "suggestionSize": null,
              "user": {
                "username": "jasbourne"
              }
            },
            {
              "event": "CODE_SUGGESTION_ACCEPTED_IN_IDE",
              "timestamp": "2024-12-22T18:13:44Z",
              "language": null,
              "suggestionSize": null,
              "user": {
                "username": "jasbourne"
              }
            }
          ]
        }
      }
    }
  }
}
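Note that codeSuggestionEvents is a GraphQL connection, so large result sets can be paged with the standard pageInfo and endCursor fields. A minimal sketch, reusing the requests setup above and assuming the connection accepts the usual after argument:

# Collect all code suggestion events via cursor pagination.
query = """
query($fullPath: ID!, $after: String) {
  group(fullPath: $fullPath) {
    aiUsageData {
      codeSuggestionEvents(after: $after) {
        nodes { event timestamp user { username } }
        pageInfo { hasNextPage endCursor }
      }
    }
  }
}
"""

events, cursor = [], None
while True:
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": query, "variables": {"fullPath": "gitlab-org", "after": cursor}},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    page = resp.json()["data"]["group"]["aiUsageData"]["codeSuggestionEvents"]
    events.extend(page["nodes"])
    if not page["pageInfo"]["hasNextPage"]:
        break
    cursor = page["pageInfo"]["endCursor"]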
AiUserMetrics Reference & Sample Query
  1. GraphQL Reference: https://docs.gitlab.com/ee/api/graphql/reference/index.html#aiusermetrics
  2. Users running this API must have a Duo Enterprise seat assigned.
  3. aiUserMetrics sample query:
query {
  group(fullPath:"gitlab-org") {
    aiUserMetrics {
      nodes {
        codeSuggestionsAcceptedCount
        duoChatInteractionsCount
        user {
          username
        }
      }
    }
  }
}
  4. aiUserMetrics Output:
{
  "data": {
    "group": {
      "aiUserMetrics": {
        "nodes": [
          {
            "codeSuggestionsAcceptedCount": 10,
            "duoChatInteractionsCount": 22,
            "user": {
              "username": "USER_1"
            }
          },
          {
            "codeSuggestionsAcceptedCount": 12,
            "duoChatInteractionsCount": 30,
            "user": {
              "username": "USER_2"
            }
          }
        ]
      }    
    }
  }
}
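Given a response in this shape, a short sketch to rank users by Duo Chat activity (reusing the requests setup above, and assuming resp holds the aiUserMetrics response):

# Sort users by Duo Chat interactions, most active first.
nodes = resp.json()["data"]["group"]["aiUserMetrics"]["nodes"]
for node in sorted(nodes, key=lambda n: n["duoChatInteractionsCount"], reverse=True):
    print(node["user"]["username"], node["duoChatInteractionsCount"])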
AiMetrics Reference & Sample Query
  1. GraphQL Reference: https://docs.gitlab.com/ee/api/graphql/reference/index.html#aimetrics
  2. aiMetrics sample query:
query {
  group(fullPath: "gitlab-org") {
    id
    aiMetrics(startDate: "2024-12-01", endDate: "2024-12-31") {
      codeContributorsCount
      codeSuggestionsContributorsCount
      codeSuggestionsShownCount
      codeSuggestionsAcceptedCount
      duoChatContributorsCount
      duoAssignedUsersCount
      duoUsedCount
    }
  }
}
  3. aiMetrics Output:
{
  "data": {
    "group": {
      "id": "gid://gitlab/Group/9970",
      "aiMetrics": {
        "codeContributorsCount": 4000,
        "codeSuggestionsContributorsCount": 5000,
        "codeSuggestionsShownCount": 6123,
        "codeSuggestionsAcceptedCount": 719,
        "duoChatContributorsCount": 544,
        "duoAssignedUsersCount": 2000,
        "duoUsedCount": 746
      }
    }
  }
}
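For example, from the sample output above you can derive the code suggestion acceptance rate as codeSuggestionsAcceptedCount / codeSuggestionsShownCount = 719 / 6123 ≈ 11.7%.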
How can we export the AI metrics to a CSV file?

AI metrics data can be exported to a CSV file by automating data retrieval through the AI Metrics APIs. This can be done using a script that fetches and formats the data. You can review a sample script and workflow here.
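As an illustration (not the sample script linked above), here is a minimal sketch that fetches aiMetrics for a list of groups and writes one CSV row per group. The token, date range, group list, and file name are placeholders.

import csv
import requests

GRAPHQL_URL = "https://gitlab.com/api/graphql"
TOKEN = "glpat-..."  # placeholder: personal access token with read_api scope
GROUPS = ["gitlab-org"]  # placeholder: group paths to export

QUERY = """
query($fullPath: ID!) {
  group(fullPath: $fullPath) {
    aiMetrics(startDate: "2024-12-01", endDate: "2024-12-31") {
      codeContributorsCount
      codeSuggestionsShownCount
      codeSuggestionsAcceptedCount
      duoChatContributorsCount
    }
  }
}
"""

with open("ai_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["group", "codeContributors", "suggestionsShown", "suggestionsAccepted", "duoChatContributors"])
    for path in GROUPS:
        resp = requests.post(
            GRAPHQL_URL,
            json={"query": QUERY, "variables": {"fullPath": path}},
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        metrics = resp.json()["data"]["group"]["aiMetrics"]
        writer.writerow([path] + [metrics[k] for k in (
            "codeContributorsCount",
            "codeSuggestionsShownCount",
            "codeSuggestionsAcceptedCount",
            "duoChatContributorsCount",
        )])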

What is the data flow for AI Impact Analytics?

You can find the data flow diagram on the groupoptimize team page.

How frequently is the data on the AI Impact dashboards updated on GitLab.com?

Code Suggestions usage data should update every 5 minutes. Currently, we use logic that tracks only 'committed code within the last month', meaning we count Code Suggestions usage only if the user has pushed code to the project within the current month. This logic is designed to mitigate scaling risks.

Are code suggestion events from the Web IDE currently being tracked?

Currently, we only collect data from code editor extensions. Tracking events from the Web IDE is a work in progress. You can find more details here.

How do I create scheduled reports?

You can create a custom AI Impact Scheduled Report to compare teams using AI versus those that don’t. Use the AI Impact view of projects and groups to analyze trends and performance.

What's next and why?

https://about.gitlab.com/direction/plan/value_stream_management/#whats-next-and-why

Additional Resources

Internal FAQ
