feat: allow CODE_GEMMA for generations
## What does this merge request do and why?
Enable CodeGemma models to be used for code generations.
Ref. gitlab-org/gitlab#461468 (closed).
## How to set up and validate locally
- Make a request to the generations endpoint, specifying the `codegemma` model:
```shell
curl -X 'POST' \
  'http://localhost:5052/v2/code/generations' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "current_file": {
      "file_name": "app.py",
      "language_identifier": "python",
      "content_above_cursor": "def hello_world():",
      "content_below_cursor": ""
    },
    "model_provider": "litellm",
    "model_endpoint": "http://127.0.0.1:11434/v1",
    "model_name": "codegemma",
    "stream": false,
    "choices_count": 1,
    "context": [],
    "prompt_version": 3,
    "prompt": [{"role": "user", "content": "help me"}]
  }'
```
Response:

```json
{"id":"id","model":{"engine":"litellm","name":"openai/codegemma","lang":"python"},"experiments":[],"object":"text_completion","created":1718140586,"choices":[{"text":"\nI am here to assist you. How can I help you today? Please provide me with more context or a specific question so I can provide the best response.<end_of_turn>","index":0,"finish_reason":"length"}]}
```
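For scripted validation, the same request can be issued from Python using only the standard library. This is a sketch mirroring the curl command above; the endpoint URL, port, and payload fields are taken from that example and assume the AI Gateway and a local Ollama `codegemma` model are running:

```python
import json
import urllib.request

# Payload mirrors the curl example above.
payload = {
    "current_file": {
        "file_name": "app.py",
        "language_identifier": "python",
        "content_above_cursor": "def hello_world():",
        "content_below_cursor": "",
    },
    "model_provider": "litellm",
    "model_endpoint": "http://127.0.0.1:11434/v1",
    "model_name": "codegemma",
    "stream": False,
    "choices_count": 1,
    "context": [],
    "prompt_version": 3,
    "prompt": [{"role": "user", "content": "help me"}],
}

def build_request(url="http://localhost:5052/v2/code/generations"):
    """Build the POST request for the generations endpoint."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"accept": "application/json", "Content-Type": "application/json"},
        method="POST",
    )

# To send (requires the gateway and the codegemma model running locally):
# with urllib.request.urlopen(build_request()) as resp:
#     print(json.load(resp))
```

The network call itself is left commented out since it depends on the local setup described above.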
## Merge request checklist
- [ ] Tests added for new functionality. If not, please raise an issue to follow up.
- [ ] Documentation added/updated, if needed.