  1. Mar 29, 2024
  2. Dec 04, 2023
  3. Nov 30, 2023
  4. Nov 15, 2023
  5. Nov 14, 2023
  6. Nov 08, 2023
  7. Nov 07, 2023
  8. Oct 31, 2023
  9. Oct 25, 2023
  10. Oct 12, 2023
  11. Oct 09, 2023
  12. Oct 05, 2023
  13. Sep 28, 2023
  14. Sep 27, 2023
    • Add development mode for AI actions · 58c88893
      Nicolas Dular authored, Gosia Ksionek committed
      Sidekiq can cause issues with code reloading in development. To overcome
      this, it's now possible to set `LLM_DEVELOPMENT_SYNC_EXECUTION=1` in
      development, which executes AI actions synchronously.
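
      A minimal sketch of how such a switch could be wired, assuming a
      hypothetical scheduling helper around `Llm::CompletionWorker` (the worker
      name appears elsewhere in this log; the exact call path is an assumption):

      ```ruby
      # Hypothetical sketch: dispatch AI actions synchronously when the
      # development flag is set, otherwise enqueue them via Sidekiq as usual.
      def schedule_completion(user_id, resource_id, options = {})
        if ENV['LLM_DEVELOPMENT_SYNC_EXECUTION'] == '1'
          # Synchronous path: sidesteps Sidekiq's code-reloading issues in
          # development by running the job inline.
          Llm::CompletionWorker.new.perform(user_id, resource_id, options)
        else
          # Normal path: asynchronous execution through Sidekiq.
          Llm::CompletionWorker.perform_async(user_id, resource_id, options)
        end
      end
      ```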
  15. Sep 05, 2023
  16. Sep 04, 2023
    • Track AI feature token usage · e1632fba
      Nicolas Dular authored, Alexandru Croitor committed
      Adds tracking to AI features with an approximate measurement of our
      token usage for Anthropic and Vertex.

      This enables us to group token usage per feature or per user.
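
      The message does not say how the approximation works; a common heuristic
      is roughly four characters per token. A hypothetical sketch under that
      assumption (the class name and ratio are illustrative, not the actual
      implementation):

      ```ruby
      # Hypothetical sketch: approximate token accounting per feature and user.
      class TokenUsageTracker
        CHARS_PER_TOKEN = 4 # rough heuristic, not the providers' tokenizers

        def self.approximate_tokens(text)
          (text.length / CHARS_PER_TOKEN.to_f).ceil
        end

        # Record usage so it can later be grouped per feature or per user.
        def self.track(feature:, user_id:, prompt:, response:)
          {
            feature: feature,
            user_id: user_id,
            prompt_tokens: approximate_tokens(prompt),
            completion_tokens: approximate_tokens(response)
          }
        end
      end

      TokenUsageTracker.track(feature: 'summarize_comments', user_id: 42,
                              prompt: 'Summarize the following notes...',
                              response: 'The discussion concludes that...')
      ```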
  17. Aug 30, 2023
    • Update AI client SLI · 95c65597
      Jan Provaznik authored, Gosia Ksionek committed
      * With this change, the success ratio of AI requests is measured
        outside of the exponential backoff
      * The llm_chat_answers SLI is replaced with the more generic
        llm_completion SLI, which tracks the error ratio of all AI actions
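
      A sketch of what "measured outside of exponential backoff" means: each
      logical request contributes exactly one data point, no matter how many
      retries happen underneath. The helper names (`call_model`,
      `increment_sli`, `retry_with_exponential_backoff`) are placeholders:

      ```ruby
      # Hypothetical sketch: the llm_completion SLI records one success or
      # error per AI request, measured outside of the retry loop.
      def complete_with_sli(prompt)
        result = retry_with_exponential_backoff { call_model(prompt) }
        increment_sli(:llm_completion, error: false)
        result
      rescue StandardError
        # Only raised after all retries are exhausted, so individual retry
        # attempts never show up in the SLI.
        increment_sli(:llm_completion, error: true)
        raise
      end

      def retry_with_exponential_backoff(max_attempts: 3)
        attempt = 0
        begin
          yield
        rescue StandardError
          attempt += 1
          raise if attempt >= max_attempts
          sleep(2**attempt)
          retry
        end
      end
      ```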
  18. Aug 22, 2023
    • Add optional client_subscription_id · 37252ebe
      Nicolas Dular authored, Gosia Ksionek committed
      This adds an optional client_subscription_id to the ai_completion_response
      subscription. In addition, it fixes GraphqlTriggers to handle optional
      subscription arguments.

      This prepares us to allow listening only to a specific client_subscription_id
      on the websocket, and to only broadcast messages based on a user_id. This
      is important for the chat as well as other aiActions.

      There are no breaking changes, and how the subscription is used does
      not change.
      
      Changelog: changed
      EE: true
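
      A sketch in graphql-ruby style of what an optional subscription argument
      looks like (class and field names beyond those in the message are
      assumptions):

      ```ruby
      # Hypothetical sketch of the subscription with an optional argument,
      # in the style of the graphql-ruby gem.
      class AiCompletionResponse < GraphQL::Schema::Subscription
        argument :user_id, GraphQL::Types::ID, required: true
        # Optional: a client that supplies this only receives responses it
        # explicitly asked for.
        argument :client_subscription_id, String, required: false

        field :response_body, String, null: true
      end

      # The trigger side must tolerate the argument being absent, e.g.:
      #   GitlabSchema.subscriptions.trigger(
      #     :ai_completion_response,
      #     { user_id: user_id, client_subscription_id: subscription_id }.compact,
      #     response
      #   )
      ```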
  19. Aug 18, 2023
    • Support explain code for blobs · feff925d
      euko authored, Gosia Ksionek committed
      If the current user is viewing a code blob, they should be able to
      ask the chat to explain the code.

      We will inject the blob's code into the zeroshot executor's prompt
      and ask the LLM to directly explain the code when instructed.

      To make that possible, we will use the Referer header to detect
      whether the user is viewing a blob. The referer URL will be added
      as an option to be extracted by CompletionWorker.

      CompletionWorker will then attempt to resolve and authorize the
      blob the referer URL points to. If the blob is found and
      authorized, it will be available as a context attribute,
      'extra_resource'.

      The zeroshot executor can then use the attribute to include the
      code blob and an additional prompt.

      The change is guarded by a feature flag.
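
      A hypothetical sketch of the referer-based resolution described above
      (the URL pattern, the permission name, and the helper calls are
      assumptions about shape, not the actual code):

      ```ruby
      require 'uri'

      # Hypothetical sketch: derive an 'extra_resource' from the Referer URL.
      def extra_resource_from_referer(referer_url, current_user)
        path = URI.parse(referer_url).path
        # Assumed blob URL shape: /<namespace>/<project>/-/blob/<ref>/<file>
        match = path.match(%r{\A/(?<project_path>.+)/-/blob/(?<ref>[^/]+)/(?<file>.+)\z})
        return {} unless match

        project = Project.find_by_full_path(match[:project_path])
        # Authorize before exposing anything to the prompt.
        return {} unless project && current_user.can?(:read_code, project)

        blob = project.repository.blob_at(match[:ref], match[:file])
        blob ? { blob: blob } : {}
      end
      ```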
  20. Aug 04, 2023
    • Fix storing messages for summarizing reviews · 3bdc03f4
      Nicolas Dular authored
      We fixed this before by setting `skip_cache: true` by default in
      `ExecuteMethodService`. However, `SummarizeSubmittedReviewService` did
      not go through `ExecuteMethodService`.

      As we only want to store messages from the `chat` action, and to
      prevent this class of bug in the future, the logic is now reversed:
      callers must set `cache_response: true` explicitly.
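
      A sketch of the reversed, opt-in logic (`call_model` and
      `store_chat_message` are illustrative placeholders):

      ```ruby
      # Hypothetical sketch: storing chat messages is now opt-in.
      def execute_completion(prompt, options = {})
        # Callers must pass cache_response: true explicitly; anything that
        # bypasses ExecuteMethodService gets the safe default of false.
        cache_response = options.fetch(:cache_response, false)

        response = call_model(prompt)
        store_chat_message(response) if cache_response
        response
      end

      # The chat action opts in; SummarizeSubmittedReviewService simply doesn't:
      #   execute_completion(prompt, cache_response: true)  # chat
      #   execute_completion(prompt)                        # summarize review
      ```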
  21. Jul 27, 2023
  22. Jul 26, 2023
    • Do not store chat messages by default · 3822cfb1
      Nicolas Dular authored
      We no longer want to store and show AI messages in the chat unless
      explicitly enabled by the feature. It is now only enabled for the `chat`
      AI action. We do this by setting `skip_cache = true` by default.

      This also fixes a bug where `skip_cache` was not passed along properly
      to the GraphqlSubscriptionResponseService.
      
      Changelog: fixed
      EE: true
  23. Jul 13, 2023
  24. Jun 26, 2023
    • Do not broadcast AI responses twice · 425f1589
      Nicolas Dular authored, 🤖 GitLab Bot 🤖 committed
      When the Agent picks the `SummarizeComments` tool, we internally call
      `GenerateSummaryService`, which also stores the response in the cache
      and broadcasts it via the GraphQL subscription.

      With `skip_cache: true` we were already able to avoid storing the
      response in the cache. This change renames the `skip_cache` option to
      `internal_request` and also stops broadcasting the response to the
      client, which previously resulted in duplicated responses in the chat.
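
      A sketch of the resulting behaviour (names other than `internal_request`
      are illustrative): an internal request short-circuits both side effects,
      so the tool's intermediate call can no longer surface in the chat twice.

      ```ruby
      # Hypothetical sketch: internal requests neither cache nor broadcast.
      def deliver_response(response, options = {})
        # Set by a tool (e.g. SummarizeComments) that calls another AI service
        # as an intermediate step; only the outer request reaches the client.
        return response if options[:internal_request]

        store_in_cache(response)
        broadcast_via_graphql_subscription(response)
        response
      end
      ```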
  25. Jun 20, 2023
    • Do not broadcast AI responses twice · d29b9c8b
      Nicolas Dular authored
      When the Agent picks the `SummarizeComments` tool, we internally call
      `GenerateSummaryService`, which also stores the response in the cache
      and broadcasts it via the GraphQL subscription.

      With `skip_cache: true` we were already able to avoid storing the
      response in the cache. This change renames the `skip_cache` option to
      `internal_request` and also stops broadcasting the response to the
      client, which previously resulted in duplicated responses in the chat.
  26. Jun 13, 2023
    • Add skip_cache option · a52e9d59
      Jan Provaznik authored
      It's possible that some AI chain tools use the completion service
      (e.g. the SummarizeComments tool); in this case we want to avoid storing
      the request/response in the cache because it's only an intermediate step.
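
      On the caller side this might look like the following sketch (the tool
      class and the service's constructor signature are assumptions):

      ```ruby
      # Hypothetical sketch: a chain tool invoking the completion service as
      # an intermediate step, asking it not to cache the exchange.
      class SummarizeCommentsTool
        def execute(resource)
          GenerateSummaryService.new(resource, skip_cache: true).execute
        end
      end
      ```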
  27. May 31, 2023
  28. May 23, 2023
  29. May 16, 2023
  30. May 09, 2023
  31. May 04, 2023
  32. Apr 13, 2023
    • Check read permissions in Llm::CompletionWorker · 590362e4
      Patrick Bajao authored
      Before we make a call to the AI API, we need to check whether the
      user who actually executed the action can read the resource and
      whether the resource can actually be sent to AI (utilizing
      `#send_to_ai?`).

      We already have the permission check at the mutation level and the
      `#send_to_ai?` check in `Llm::BaseService`. But it is possible that
      those permissions change after the job is enqueued.

      No changelog since this is still behind a feature flag.
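
      A sketch of the re-check, with assumed resource loading (the worker
      re-validates what the mutation already validated, because state may
      change while the job sits in the queue; the resource type and permission
      name are illustrative):

      ```ruby
      # Hypothetical sketch: re-check permissions at execution time.
      class CompletionWorkerSketch
        def perform(user_id, resource_id)
          user = User.find_by_id(user_id)
          resource = Issue.find_by_id(resource_id) # resource type assumed
          return unless user && resource

          # The mutation checked these at enqueue time, but permissions and
          # the resource's AI eligibility can change while the job is queued.
          return unless user.can?(:read_issue, resource)
          return unless resource.send_to_ai?

          call_ai_api(user, resource)
        end
      end
      ```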
  33. Apr 12, 2023