Follow-up from "Use latest generated data for MR"
The following discussions from !13 (merged) should be addressed:
- @allison.browne started a discussion: Similar to the other suggestion, if we follow conventions similar to the GitLab app's backend it could be:
  `execute(languageId)`
- @allison.browne started a discussion: I think this would be considered a Finder in our backend Rails code:
  `class PromptsFinder {`
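To make the two suggestions above concrete, here is a minimal sketch of what a Finder-style class with a single `execute(languageId)` entry point might look like. This is illustrative only: the constructor argument and the shape of the prompt data are assumptions, not part of the MR.

```javascript
// Hypothetical sketch following the Rails "Finder" convention from the
// discussion: one class per query, with a single `execute` entry point.
class PromptsFinder {
  constructor(prompts) {
    // `prompts` is assumed to be the generated dataset, keyed by language id.
    this.prompts = prompts;
  }

  // Mirrors the `execute` naming convention used by finders in the
  // GitLab backend; returns all prompts for the given language.
  execute(languageId) {
    return this.prompts[languageId] ?? [];
  }
}
```

The design choice here is that callers never reach into the data structure directly; they go through `execute`, so the storage format can change without touching call sites.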
- @allison.browne started a discussion: I wonder if we should send the language model and prompt type in their own fields too?
- @allison.browne started a discussion: Suggestion: should we write the enriched data to a file on the first run? That way we could cache it without needing to do the enrichment work on each request. Or is this lightweight enough to just wait on it each time?
  Follow-up idea: we could eventually evolve this transformation to happen in a script at the end of the `eval-code-completion` work. I think the way you are planning to implement it is that we will only make one request to the endpoint per user/language, and then the front end will be responsible for cycling through each prompt from the list?