Code-Llama 13B for Code Completion
Develop IDE code completion prompts and supporting code for Code-Llama 13B. Prompt iteration will start from baseline prompt templates to determine whether completion scores can be improved, or whether changes degrade model performance.
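As a starting point for the baseline templates, a minimal sketch of a fill-in-the-middle prompt builder is shown below. The `<PRE>`/`<SUF>`/`<MID>` sentinels follow the infilling format published with Code Llama; the helper name and exact spacing here are illustrative assumptions, not the final template.

```python
def build_completion_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Code Llama infilling prompt from the code around the cursor.

    Hypothetical baseline template; spacing around the sentinels may need
    tuning during prompt iteration.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: completing the body of a function at the cursor position.
prompt = build_completion_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
```

The model generates the middle span and stops at its end-of-infill token, which the IDE integration would insert between prefix and suffix.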
| model | lang_id | similarity_score |
| --- | --- | --- |
| code-llama-13B | c | 0.7848750106 |
| code-llama-13B | ruby | 0.752283282 |
| code-llama-13B | python | 0.7697300115 |
| code-llama-13B | go | 0.797669222 |
| code-llama-13B | php | 0.8012457373 |
| code-llama-13B | typescript | 0.8000617199 |
| code-llama-13B | javascript | 0.7710773739 |
| code-llama-13B | rust | 0.6685552473 |
| code-llama-13B | c_sharp | 0.7985611015 |
| code-llama-13B | java | 0.777805135 |
| code-llama-13B | cpp | 0.8418544928 |
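Against the 0.8 target in the Definition of Done below, most languages still fall short. A small sketch to flag the languages needing prompt iteration (scores copied from the table above, rounded to four places):

```python
# Baseline similarity scores from the table above (rounded).
scores = {
    "c": 0.7849, "ruby": 0.7523, "python": 0.7697, "go": 0.7977,
    "php": 0.8012, "typescript": 0.8001, "javascript": 0.7711,
    "rust": 0.6686, "c_sharp": 0.7986, "java": 0.7778, "cpp": 0.8419,
}

TARGET = 0.8  # Definition of Done threshold

# Languages whose baseline completion score is below the target.
below_target = sorted(lang for lang, s in scores.items() if s < TARGET)
print(below_target)
```

Rust is the clear outlier at ~0.67 and will likely need the most template work.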
Assumption: Any Custom Models will be hosted by the customer and routed through an instance-specific AI Gateway.
Customers
Early-adopter customers are tracked in https://gitlab.com/groups/gitlab-org/-/epics/13700+
Scope
- Code-Llama 13B prompt: Code Completion
- Code-Llama 13B prompt: Code Generation (pending evaluation results)
Preparation
- Add support for Code-Llama 13B to the self-hosted model blueprint
- Ensure support for Code-Llama 13B in the self-deployed AI Gateway
- Document setup for Code-Llama 13B
- Document setup for Code-Llama 13B model serving
Development Steps
- Add Code Llama 13B as a configuration option in Duo Configurations > Models Menu
- Prompt routing in GitLab Rails
- AI Gateway endpoint routing
- Prompt iteration using the Centralized Evaluation Framework
- Prompt creation for Code Llama 13B Code Generation
- Prompt creation for Code Gemma Code Completion (pending validation results)
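The prompt-routing step above can be sketched as a template registry keyed by model name. All names here are hypothetical (the real routing lives in GitLab Rails and the AI Gateway); the CodeGemma entry uses its published `<|fim_prefix|>`-style sentinels, included only to show why per-model routing is needed.

```python
# Hypothetical per-model prompt templates; each model family uses different
# fill-in-the-middle sentinel tokens, which is what makes routing necessary.
PROMPT_TEMPLATES = {
    "code-llama-13b": "<PRE> {prefix} <SUF>{suffix} <MID>",
    "codegemma": "<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>",
}

def route_prompt(model: str, prefix: str, suffix: str) -> str:
    """Pick the template for the configured model and fill in the context."""
    template = PROMPT_TEMPLATES.get(model.lower())
    if template is None:
        raise ValueError(f"no prompt template registered for model {model!r}")
    return template.format(prefix=prefix, suffix=suffix)
```

An unknown model name fails fast rather than silently falling back to a mismatched template, since wrong sentinels tend to degrade completions quietly.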
Definition of Done
Through prompt iteration, raise Code Completion similarity scores above 0.8 for all languages. Ideally, all languages would score above 0.9 similarity.