Update fake models to return different responses
What does this MR do and why?
This MR replaces the hard-coded fake model response with one that includes the original input prompt.
- Before:

  ```plaintext
  fake code suggestion from PaLM Text
  ```

- After:

  ```plaintext
  Fake response for: <input prompt>
  ```
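The change above can be sketched as follows. This is a minimal, hypothetical illustration of the behavior described in this MR; the class and method names are illustrative, not the actual AI Gateway implementation.

```python
# Hypothetical sketch of the fake model change; names are
# illustrative, not the real AI Gateway classes.


class FakePalmTextGenModel:
    """Stand-in model used when fake models are enabled."""

    def generate(self, prompt: str) -> str:
        # Before: a hard-coded string, identical for every request.
        # return "fake code suggestion from PaLM Text"

        # After: echo the caller's prompt so each request gets a
        # distinct, assertable response.
        return f"Fake response for: {prompt}"


model = FakePalmTextGenModel()
print(model.generate("// Generate a function to print hello world"))
# → Fake response for: // Generate a function to print hello world
```

Echoing the prompt makes tests and manual validation able to distinguish which request produced which response.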
The requirements are described in #404 (closed).
Closes #404 (closed)
How to set up and validate locally
- Check out this merge request's branch.
- Ensure fake models are enabled in the `.env` file:

  ```shell
  AIGW_USE_FAKE_MODELS=true
  ```

- Run the AI Gateway:

  ```shell
  poetry run ai_gateway
  ```
- Send a cURL request to the service:

  ```shell
  curl --request POST \
    --url http://ai-gateway.gdk.test:5052/v2/code/generations \
    --header 'Content-Type: application/json' \
    --header 'X-Gitlab-Authentication-Type: oidc' \
    --data '{
      "current_file": {
        "file_name": "hello.go",
        "content_above_cursor": "",
        "content_below_cursor": ""
      },
      "prompt_version": 2,
      "prompt": "// Generate a function to print hello world\nfunc print",
      "model_provider": "anthropic",
      "model_name": "claude-2.1"
    }'
  ```
- Confirm the response echoes the input prompt:

  ```json
  {
    "id": "id",
    "model": {
      "engine": "fake-palm-engine",
      "name": "fake-palm-model",
      "lang": "go"
    },
    "experiments": [],
    "object": "text_completion",
    "created": 1710414858,
    "choices": [
      {
        "text": "Fake response for: // Generate a function to print hello world\nfunc print",
        "index": 0,
        "finish_reason": "length"
      }
    ]
  }
  ```