Validation Dataset for /refactor

This issue is to capture work for the Custom Models team to contribute to validation dataset creation for /refactor.

While /refactor is executed within Chat, the underlying functionality (and therefore dataset creation) is owned by Code Creation. As such, Custom Models will be collaborating with Code Creation on these datasets.

Background

The current prompt for /refactor includes the following context between the user message and the system prompt:

Proposal

Custom Models will collaborate with Code Creation to help create a validation dataset for /refactor. There are several potential sources from which we can draw data for a /refactor dataset. The strong preference is to use historical GitLab user data:

  1. Draw from historical data and the Chat bash data - spreadsheet. The Chat bash datasets currently include 17 examples of refactor requests to GitLab Duo Chat, found in the Refactor tab.
  2. Fetch commits from gitlab-org/gitlab that are labeled refactor.
  3. Adapt an open-source public dataset like CodeEditorBench.
  4. Generate examples by 'unrefactoring' code.
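Option 2 could be sketched as a query against the GitLab API. Individual commits do not carry labels in GitLab, so one plausible reading is merged merge requests tagged with the ~refactor label; the helper below is a hypothetical sketch that only builds the request URL (the function name is an assumption, while `labels`, `state`, and `per_page` are documented merge-request API parameters):

```python
import urllib.parse

GITLAB_API = "https://gitlab.com/api/v4"

def merge_requests_url(project_path: str, label: str, per_page: int = 100) -> str:
    """Build the GitLab API URL listing merged MRs that carry `label`.

    Hypothetical helper: fetching, pagination, and auth are left to the caller.
    """
    # Project paths must be URL-encoded, e.g. gitlab-org/gitlab -> gitlab-org%2Fgitlab
    encoded = urllib.parse.quote(project_path, safe="")
    params = urllib.parse.urlencode({
        "labels": label,
        "state": "merged",
        "per_page": per_page,
    })
    return f"{GITLAB_API}/projects/{encoded}/merge_requests?{params}"

print(merge_requests_url("gitlab-org/gitlab", "refactor"))
```

The diffs of the returned MRs would then supply before/after code pairs for the dataset.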
Iteration I

The first iteration will adapt the open-source dataset CodeEditorBench, which uses the schema below. To this dataset we will add the 17 examples of refactor requests to GitLab Duo Chat from the Refactor tab of the Chat bash data - spreadsheet.

| Field | Type |
| --- | --- |
| idx | int64 |
| title | string |
| code_language | string |
| incorrect_solutions | string |
| solutions | string |
| type | string |
| difficulty | string |
| public_tests_input | string |
| public_tests_output | string |
| private_tests_input | sequence |
| private_tests_output | sequence |
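To make the adaptation concrete, here is a minimal sketch of mapping one CodeEditorBench row onto a /refactor validation example. The `RefactorExample` class, the `to_refactor_example` adapter, and the prompt wording are all assumptions for illustration; only the dictionary keys come from the schema above:

```python
from dataclasses import dataclass

@dataclass
class RefactorExample:
    """One /refactor validation prompt derived from a CodeEditorBench row (hypothetical shape)."""
    idx: int
    language: str
    input_code: str       # taken from `incorrect_solutions`
    reference_code: str   # taken from `solutions`
    prompt: str

def to_refactor_example(row: dict) -> RefactorExample:
    # Treat the incorrect solution as the code to be refactored, and the
    # accepted solution as a reference answer for evaluation.
    prompt = f"/refactor this {row['code_language']} code:\n{row['incorrect_solutions']}"
    return RefactorExample(
        idx=row["idx"],
        language=row["code_language"],
        input_code=row["incorrect_solutions"],
        reference_code=row["solutions"],
        prompt=prompt,
    )
```

The public/private test fields in the schema could additionally be used to check that a refactored answer still passes the original tests.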

Definition of Done

A first iteration of a validation dataset for /refactor has been completed, containing between 70 and 120 prompts in accordance with Playbook recommendations.
