The tests generated for me used Test::Unit, but the project (gitlab-org/plan) only has tests written with RSpec. It should be fairly easy to detect the framework a project uses, either from the repository itself or from its dependency tree, and inject that into the prompt, which could noticeably increase the usefulness of the feature.
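A minimal sketch of what that detection could look like, assuming the feature can read dependency manifests from the repository (the `detectTestFramework` helper and the lookups are hypothetical, not an existing GitLab API):

```javascript
// Hypothetical sketch: infer the test framework from dependency manifests.
// `files` maps repository paths to file contents.
function detectTestFramework(files) {
  const gemfileLock = files['Gemfile.lock'] || '';
  if (gemfileLock.includes('rspec-core')) return 'RSpec';
  if (gemfileLock.includes('test-unit')) return 'Test::Unit';

  const pkg = JSON.parse(files['package.json'] || '{}');
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  if (deps.jest) return 'Jest';
  if (deps.vitest) return 'Vitest';

  return null; // unknown: let the model fall back to its default choice
}

// The result could then be injected into the prompt, e.g.
// `Write ${framework} tests for the following file: ...`
```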
The tests themselves were OK. They really just tested that no errors were raised during execution of the script, but they would be a good starting point if I had no tests written, saving the boilerplate of writing all of these by hand.
A really nice iteration might be to inject some context from tests on similar files using embeddings. This way, the generated tests are more likely to match the style of the existing ones.
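As a rough sketch of that idea (the `embed` function stands in for whatever embedding service is available; nothing here is an existing API):

```javascript
// Hypothetical sketch: find the existing test files most similar to the
// target source file and feed them to the model as style context.
async function similarTestContext(targetSource, testFiles, embed, k = 3) {
  const target = await embed(targetSource);

  const scored = await Promise.all(
    testFiles.map(async (file) => ({
      file,
      score: cosineSimilarity(target, await embed(file.content)),
    })),
  );

  // Keep the k most similar tests as prompt context.
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ file }) => file.content);
}

function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}
```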
Maybe just me, but I'm used to the top item in the dropdown menu being the 'View file @ commit' option and I use this regularly. So my first use of this feature was entirely by mistake.
I would recommend not having it as the top item in the dropdown: we are likely to see higher engagement than is realistic, and to annoy users by interrupting their flow.
I asked it to generate a test for this Vue file and got this result:
```javascript
import { mount } from '@vue/test-utils'
import MyComponent from './MyComponent.vue'

describe('MyComponent', () => {
  it('renders correctly', () => {
    const wrapper = mount(MyComponent)
    expect(wrapper.html()).toMatchSnapshot()
  })

  it('updates the count when the button is clicked', async () => {
    const wrapper = mount(MyComponent)
    const button = wrapper.find('button')
    await button.trigger('click')
    expect(wrapper.vm.count).toBe(1)
  })

  it('emits an event when the button is clicked', async () => {
    const wrapper = mount(MyComponent)
    const button = wrapper.find('button')
    await button.trigger('click')
    expect(wrapper.emitted().increment).toBeTruthy()
  })
})
```
The generated test file is too generic and not relevant to the input source file. It even imports from ./MyComponent.vue, although we already provide the source file; I assume it should be able to use the file path from the provided input.
The good part is that it figured out the input is a Vue file and produced output relevant to Vue in general, just not specific to the input file.
Can we make "Generate Test" the last menu option instead of the first? My muscle memory has almost clicked on it multiple times because I was looking for the first item.
I know this is probably not considered important, but maybe it depends on your personality type.
I think there should be a hyphen in "AI-generated" (the info message is a bit awkward-sounding too, but this is minor). A post-merge TW review of the copy would be great.
I just tried using this for the first time on a demo project, and it generated testing code completely unrelated to the content of the merge request. The code it generates is relevant to the file as a whole, but not to my changes. Perhaps we should prompt the LLM to focus on the content added in the MR?
I had a similar experience: it generated a few good tests, but only for the first function in the file, while the diff touched a single function in the middle of the file.
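One way to steer the model toward the changed code would be to include the MR diff explicitly in the prompt. A rough sketch (the wording and the `buildPrompt` helper are hypothetical):

```javascript
// Hypothetical sketch: send the full file for context, but instruct the
// model to cover only the lines changed in the merge request.
function buildPrompt(filePath, fileContent, mrDiff) {
  return [
    'Generate tests for the changes below.',
    `Full file (${filePath}) for context:`,
    fileContent,
    'Diff from the merge request; write tests ONLY for added or modified code:',
    mrDiff,
  ].join('\n\n');
}
```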
I noticed the Generate test with AI option is also available on test files themselves. That was unexpected, and the results of letting the AI write a test for a test weren't that great either.
Same for file extensions like .pot, .md, and so on. Only files that contain code should have the Generate test with AI option, I think.
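A minimal sketch of such a gate, with an illustrative (not exhaustive) allowlist:

```javascript
// Hypothetical sketch: only offer "Generate test with AI" for source files,
// and skip files that already look like tests.
const CODE_EXTENSIONS = new Set(['js', 'ts', 'vue', 'rb', 'py', 'go', 'java']);

function showGenerateTestAction(path) {
  const extension = path.split('.').pop().toLowerCase();
  const looksLikeTest = /(\.|_)(spec|test)\./.test(path) || path.includes('/spec/');
  return CODE_EXTENSIONS.has(extension) && !looksLikeTest;
}
```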
Syntax highlighting would be great
With the drawer component being only 400px wide, many code lines wrap, making them hard to read. Maybe giving the individual code blocks a horizontal scroll bar instead would already help.
Agreed, a wider drawer and syntax highlighting would make it easier to read. Seems sensible to include a prop on GlDrawer to set/select width. The "What's new" drawer is overridden to 500px wide, and the "Job assistant" is 560px wide. Would be good to consolidate into a standard 400px and a wide/l/xl variant for consistency.
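A rough sketch of what that variant prop could look like on a wrapper around GlDrawer (the variant names and pixel values are assumptions, not part of the current @gitlab/ui API):

```javascript
// Hypothetical sketch: one width variant prop instead of per-feature overrides.
const DRAWER_WIDTHS = { default: '400px', wide: '500px', xl: '560px' };

export default {
  name: 'AppDrawer',
  props: {
    widthVariant: {
      type: String,
      default: 'default',
      validator: (value) => Object.keys(DRAWER_WIDTHS).includes(value),
    },
  },
  computed: {
    // Forwarded to GlDrawer, e.g. as an inline style or CSS custom property.
    drawerWidth() {
      return DRAWER_WIDTHS[this.widthVariant];
    },
  },
};
```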
The layout and test cases did have some issues: the proposed tests "targeted" each prop/injection individually, but some of them work together, so testing them in isolation is not possible.
@leetickett-gitlab @zillemarco Do you think that's the place you wanted to see test suggestions? If they had been better, what would you have done with them?
I think it would be beneficial to produce both positive and negative test cases. A customer who was very excited about this feature suggested the same.
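For the counter component above, a generated positive/negative pair might look like this (the `disabled` prop is a hypothetical example of behaviour that must not happen):

```javascript
import { mount } from '@vue/test-utils';
import MyComponent from './MyComponent.vue';

// Positive case: the expected behaviour occurs.
it('increments the count when the button is clicked', async () => {
  const wrapper = mount(MyComponent);
  await wrapper.find('button').trigger('click');
  expect(wrapper.vm.count).toBe(1);
});

// Negative case: the behaviour must NOT occur.
it('does not increment the count when the button is disabled', async () => {
  const wrapper = mount(MyComponent, { props: { disabled: true } });
  await wrapper.find('button').trigger('click');
  expect(wrapper.vm.count).toBe(0);
});
```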
Generate suggested tests always suggests Java test cases based on Mockito (a testing framework) without taking into consideration the testing framework the project actually uses.