fix(streaming): use generation indicator - avoid VS Code breaking change
## Description
VS Code team refactored inline completion and made it impossible to seamlessly replace one inline completion with another. Our code generation streaming relied on this behaviour and would be broken with the next major VS Code release (happening later this week).
This MR replaces streaming with a progress indicator. The indicator gives the user feedback that a generation is running and that it will take longer than an ordinary completion.
generate-code-suggestion-indicator
We briefly discussed delaying the notification or changing its text in !2420 (comment 2370117487), but we went with the boring solution (no extra text, no notification delay).
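The new flow can be sketched roughly like this (a minimal Python sketch with hypothetical names, not the extension's actual TypeScript code): streamed chunks no longer replace the inline completion one by one; they only advance the progress indicator, and the completion is rendered once, after the stream finishes.

```python
# Sketch only (hypothetical names): chunks drive a progress callback;
# the inline completion is produced once, from the fully accumulated text.
def generate_with_progress(chunks, report_progress):
    received = []
    for count, chunk in enumerate(chunks, start=1):
        received.append(chunk)
        report_progress(count)  # advance the progress indicator per chunk
    return "".join(received)    # shown as a single inline completion at the end

ticks = []
result = generate_with_progress(["def add(a, b):\n", "    return a + b\n"], ticks.append)
```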
## Related Issues
Resolves [VS Code]: Code Generation not streaming in VS ... (#1825 - closed)
Resolves [VS Code] Code suggestion streaming causes Undo... (#1824 - closed)
## How has this been tested?

### Behaviour
- Trigger streaming (either with a custom prompt like in the screencast above or with the `GitLab: Duo Tutorial` test code). See that streaming starts and that, the moment the chunks start coming in, the progress bar at the bottom of the notification starts increasing.
- Use the `"gitlab-lsp.trace.server": "verbose"` setting and validate that when you cancel the generation request (e.g. by starting to type) you see a log like this:

  ```
  [Trace - 11:11:48] Sending notification 'cancelStreaming'.
  Params: {
      "id": "code-suggestion-stream-1"
  }
  2025-03-03T11:11:48:015 [debug]: Snowplow Telemetry: ce439daf-d7bf-4c3b-b2bb-0f7872349b5f transitioned from suggestion_requested to suggestion_cancelled
  ```
### Telemetry

#### Prerequisites
- Run Snowplow Micro locally:

  ```shell
  podman run --name snowplow-micro --rm -e MICRO_IGLU_REGISTRY_URL="https://gitlab-org.gitlab.io/iglu" -p 127.0.0.1:9091:9090 snowplow/snowplow-micro:latest
  ```

  - Side note: Snowplow Micro basic usage: https://docs.snowplow.io/docs/testing-debugging/snowplow-micro/basic-usage/
- Provide the `trackingUrl` for the local Snowplow collector in the VS Code extension `settings.json` config: `"gitlab.trackingUrl": "http://localhost:9091"`
- Clear the events: `curl -s "http://127.0.0.1:9091/micro/reset"`
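For reference, the two settings used in this test combine into the following `settings.json` fragment (values as given above):

```json
{
  "gitlab-lsp.trace.server": "verbose",
  "gitlab.trackingUrl": "http://localhost:9091"
}
```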
#### Test cases

##### Accept stream
- Start code generation
- Let it finish
- Accept it
- Run:

  ```shell
  curl -s "http://127.0.0.1:9091/micro/good" | jq '[.[].event | {id: .se_label, se_action: .se_action} ] | sort_by(.id)'
  ```
- See:

  ```json
  [
    {
      "id": "f80181c7-ec73-4ad6-8ac9-ca937c96617a",
      "se_action": "suggestion_accepted"
    },
    {
      "id": "f80181c7-ec73-4ad6-8ac9-ca937c96617a",
      "se_action": "suggestion_stream_completed"
    },
    {
      "id": "f80181c7-ec73-4ad6-8ac9-ca937c96617a",
      "se_action": "suggestion_shown"
    },
    {
      "id": "f80181c7-ec73-4ad6-8ac9-ca937c96617a",
      "se_action": "suggestion_stream_started"
    },
    {
      "id": "f80181c7-ec73-4ad6-8ac9-ca937c96617a",
      "se_action": "suggestion_requested"
    }
  ]
  ```
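For readers less familiar with jq, the filter used above is equivalent to this small Python sketch (the sample payload is made up for illustration; it only mimics the shape Snowplow Micro returns from `/micro/good`):

```python
import json

# Equivalent of:
# jq '[.[].event | {id: .se_label, se_action: .se_action}] | sort_by(.id)'
def summarize(raw_events):
    events = json.loads(raw_events)
    summary = [
        {"id": e["event"]["se_label"], "se_action": e["event"]["se_action"]}
        for e in events
    ]
    return sorted(summary, key=lambda entry: entry["id"])

# Made-up sample payload for illustration
sample = json.dumps([
    {"event": {"se_label": "b", "se_action": "suggestion_shown"}},
    {"event": {"se_label": "a", "se_action": "suggestion_requested"}},
])
```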
##### Cancel stream
- Start code generation
- Start typing before the generation finishes
- Run:

  ```shell
  curl -s "http://127.0.0.1:9091/micro/good" | jq '[.[].event | {id: .se_label, se_action: .se_action} ] | sort_by(.id)'
  ```
- If you start typing quickly, you see:

  ```json
  [
    {
      "id": "8da489f7-b918-4187-a1a8-0da2a0fb0b65",
      "se_action": "suggestion_stream_started"
    },
    {
      "id": "8da489f7-b918-4187-a1a8-0da2a0fb0b65",
      "se_action": "suggestion_requested"
    }
  ]
  ```
  The missing cancel event is a pre-existing bug: [LS](streaming telemetry): Unexpected transitio... (gitlab-org/editor-extensions/gitlab-lsp#848)
- If you start typing after the chunks start coming in (watch the progress bar indicator to know when to type), you see:

  ```json
  [
    {
      "id": "e6416c0e-36a6-4e80-be70-de5ce2e1be86",
      "se_action": "suggestion_rejected"
    },
    {
      "id": "e6416c0e-36a6-4e80-be70-de5ce2e1be86",
      "se_action": "suggestion_shown"
    },
    {
      "id": "e6416c0e-36a6-4e80-be70-de5ce2e1be86",
      "se_action": "suggestion_stream_started"
    },
    {
      "id": "e6416c0e-36a6-4e80-be70-de5ce2e1be86",
      "se_action": "suggestion_requested"
    }
  ]
  ```
  Marking a suggestion that has not been shown as rejected is not ideal; it is a consequence of the original streaming behaviour, where the LS assumes that as soon as it gives us the first chunk, we show something to the user. I created an issue to fix this: [VS Code] Telemetry: Ensure VS Code reports `Su... (#1868 - closed)
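The event sequences in these test cases suggest a state machine roughly like the following (a hypothetical Python sketch; the authoritative transition table lives in the language server and may differ):

```python
# Hypothetical transition table inferred from the test cases in this MR;
# the real state machine lives in gitlab-lsp.
TRANSITIONS = {
    "suggestion_requested": {"suggestion_stream_started", "suggestion_cancelled"},
    "suggestion_stream_started": {"suggestion_shown", "suggestion_cancelled"},
    "suggestion_shown": {"suggestion_stream_completed", "suggestion_rejected"},
    "suggestion_stream_completed": {"suggestion_accepted", "suggestion_rejected"},
}

def is_valid_sequence(actions):
    """Check that each consecutive pair of events is an allowed transition."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(actions, actions[1:]))
```

For example, the accept-stream sequence (requested, stream_started, shown, stream_completed, accepted) is valid under this table, while jumping straight from requested to accepted is not.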
##### Reject stream
- Start code generation
- Wait for the generation to finish
- Start typing again
- Run:

  ```shell
  curl -s "http://127.0.0.1:9091/micro/good" | jq '[.[].event | {id: .se_label, se_action: .se_action} ] | sort_by(.id)'
  ```
- See:

  ```json
  [
    {
      "id": "6e2104a2-2b4b-41cd-875c-ad0850a9022e",
      "se_action": "suggestion_rejected"
    },
    {
      "id": "6e2104a2-2b4b-41cd-875c-ad0850a9022e",
      "se_action": "suggestion_stream_completed"
    },
    {
      "id": "6e2104a2-2b4b-41cd-875c-ad0850a9022e",
      "se_action": "suggestion_shown"
    },
    {
      "id": "6e2104a2-2b4b-41cd-875c-ad0850a9022e",
      "se_action": "suggestion_stream_started"
    },
    {
      "id": "6e2104a2-2b4b-41cd-875c-ad0850a9022e",
      "se_action": "suggestion_requested"
    }
  ]
  ```
Note: the `suggestion_rejected` event might not show up if your typing hasn't triggered inline completion (e.g. if IntelliSense was shown).
## Screenshots (if appropriate)
## What CHANGELOG entry will this MR create?

- `fix:` Bug fix - fixes a user-facing issue in production - included in changelog
- `feature:` New feature - a user-facing change which adds functionality - included in changelog
- `BREAKING CHANGE:` (fix or feature that would cause existing functionality to change) - should bump major version, mentioned in the changelog
- None - other non-user-facing changes