Wrap code-bison requests with the prompt template
The code-bison model often returns markdown in its generations. We can leverage this behavior and wrap user requests in a prompt template to simplify output cleaning and get better-structured generations.
Prompt template example
Given a relevant context and request, complete the code exercise. The request may contain natural language prompts or code samples. Rely on the relevant context as much as possible when completing the code exercise. Do not add an explanation of the completed code exercise. Wrap code exercise with markdown.
Here is the relevant context:
# class Book:
#     def __init__(self, title, author, year):
#         self.t = title
#         self.author = author
#         self.yea = year
#
#     @property
#     def title(self):
#         return self.t
Here is the request:
# implement a function to find the min value in the array of books
# Return index of the found book
def find_min(arr):
Here is the correct implementation of the code exercise:
Model output
# implement a function to find the min value in the array of books
# Return index of the found book
def find_min(arr):
    min_value = arr[0]
    min_index = 0
    for i in range(1, len(arr)):
        if arr[i] < min_value:
            min_value = arr[i]
            min_index = i
    return min_index
Steps:
- Add prompt template
- Extract model generation by markdown tags
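The extraction step above can be sketched as a small regex helper; `extract_code` is a hypothetical name, and the fallback behavior when no markdown fence is found is an assumption, not part of the original proposal:

```python
import re

def extract_code(generation: str) -> str:
    """Extract the first markdown-fenced code block from a model generation.

    Falls back to the raw generation when no fence is found (assumption).
    """
    match = re.search(r"```(?:\w+)?\n(.*?)```", generation, re.DOTALL)
    return match.group(1).rstrip() if match else generation.strip()

# Usage: strip the fence the prompt template asked the model to emit
cleaned = extract_code("```python\ndef find_min(arr):\n    return 0\n```")
```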
UPD #0
Another template used in !342 (merged)
Template
Provide the implementation for the prompt task. Each task starts with the "GitLab Duo Generate:" preamble.
Wrap the code with ``` specifying the lang used: {lang}.
Do not add "GitLab Duo Generate" to the implementation
Here is the prompt:
{prefix}
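Filling this template could look like the following sketch; `build_prompt` and the example prefix are hypothetical, and only the `{lang}` and `{prefix}` placeholders come from the template above:

```python
PROMPT_TEMPLATE = """Provide the implementation for the prompt task. Each task starts with the "GitLab Duo Generate:" preamble.
Wrap the code with ``` specifying the lang used: {lang}.
Do not add "GitLab Duo Generate" to the implementation
Here is the prompt:
{prefix}"""

def build_prompt(prefix: str, lang: str) -> str:
    # Substitute the user's code prefix and target language into the template
    return PROMPT_TEMPLATE.format(lang=lang, prefix=prefix)

# Usage: wrap a request carrying the "GitLab Duo Generate:" preamble
prompt = build_prompt(
    "# GitLab Duo Generate: sum two numbers\ndef add(a, b):",
    "python",
)
```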