fix: code suggestions use v3 endpoint for code generation requests
What does this merge request do and why?
This MR addresses gitlab-org/ai-powered/eli5#67 (closed)
Eli5 currently encounters a validation error when making code generation requests to the /v2/code/generations endpoint in AIGW during model evaluations, because that endpoint requires a model_endpoint parameter.
This MR resolves the problem by updating Eli5 to send code generation requests to the /v3/code/completions endpoint instead, which doesn't require the model_endpoint parameter. The change also aligns Eli5's code suggestion functionality with production usage patterns and allows us to evaluate and compare model performance on our datasets.
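For reference, the client-side change amounts to pointing generation requests at the v3 path. The Python sketch below is illustrative only: the base URL, port, and helper name are assumptions made for the example, not Eli5's actual client code or the exact AIGW request schema.
# Illustrative sketch, not Eli5's real client code.
import requests

AIGW_BASE_URL = "http://localhost:5052"  # assumed local AI Gateway address


def send_code_generation_request(payload: dict) -> dict:
    # /v2/code/generations rejected generation requests that omit model_endpoint;
    # /v3/code/completions serves both completion and generation requests and
    # does not require that parameter.
    response = requests.post(
        f"{AIGW_BASE_URL}/v3/code/completions",
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()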
Related to: gitlab-org/gitlab#508167 (closed)
How to set up and validate locally
1. Confirm code generation requests work with the new endpoint and a model provider (when no provider is given, it defaults to vertex-ai):
poetry run eli5 code-suggestions evaluate \
--dataset="dataset.code_suggestions_generations_squadri.6" \
--source=ai_gateway \
--limit=1 \
--experiment-prefix=shola-claude-3-codegen-exp \
--intent=generation \
--model-provider="anthropic" \
--evaluate-with-llm
2. Confirm code completion requests function as before:
poetry run eli5 code-suggestions evaluate \
--dataset="code-suggestions-input-testcases-v1" \
--source=ai_gateway \
--evaluate-with-llm \
--limit=1 \
--experiment-prefix=experiment-v3-endpoint
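Both runs should complete without the model_endpoint validation error that previously failed code generation requests, and the completions evaluation should behave exactly as before.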
Merge request checklist
- I've run the affected pipeline(s) to validate that nothing is broken.
- Tests added for new functionality. If not, please raise an issue to follow up.
- Documentation added/updated, if needed.