Spike: Investigate how AI could improve the usability of bot comments on policy violations
Time-box: 5 days
Why are we doing this work
In the scope of this spike, we want to determine whether AI can provide users with better, more useful information about violated policies, such as:
- a summary of why the policy was violated,
- suggestions on how to fix the vulnerabilities,
- suggested reviewers to add, e.g. via a one-click "add reviewers" button.
Or, broken down by situation:
- Situation 1: if the vulnerability was introduced by new code pushed to the MR, link the user directly to the code and provide a suggested fix.
- Situation 2: if the vulnerability was not introduced in this MR, suggest a reviewer and indicate which MR introduced it; offer the possible resolutions: dismiss the vulnerability, fix it now, or fix it later and approve this MR.
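The two situations above can be sketched as a small decision function. This is a minimal illustration only; the class, field, and reviewer names are assumptions for this sketch, not existing GitLab code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Vulnerability:
    # All fields here are hypothetical, chosen to illustrate the two situations.
    file_path: str
    introduced_in_current_mr: bool
    introducing_mr_iid: Optional[int] = None  # MR that introduced it, if known


def build_bot_comment(vuln: Vulnerability, suggested_reviewer: str) -> str:
    """Assemble the enhanced bot comment for the two situations above."""
    if vuln.introduced_in_current_mr:
        # Situation 1: point the author at the offending code with a suggested fix.
        return (
            f"This MR introduces a vulnerability in `{vuln.file_path}`. "
            "See the linked line for a suggested fix."
        )
    # Situation 2: the vulnerability pre-dates this MR.
    origin = f"!{vuln.introducing_mr_iid}" if vuln.introducing_mr_iid else "an earlier MR"
    return (
        f"The vulnerability in `{vuln.file_path}` was introduced by {origin}. "
        f"Consider adding @{suggested_reviewer} as a reviewer. "
        "Options: dismiss the vulnerability, fix it now, or fix it later and approve this MR."
    )
```

The actual comment body and links would come from the AI response; this only shows how the branching could be wired behind the feature flag.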
As the expected result of this spike, we would like to have:
- an MR prepared for review, with changes behind a feature flag, that posts a bot comment with the enhanced information after a policy is violated,
- a prepared prompt that can be reused and further developed.
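As a starting point for the reusable prompt, a minimal template could look like the following. The placeholder names and the wording are assumptions for illustration, not the final prompt:

```python
# Hypothetical template for the policy-violation bot comment prompt.
PROMPT_TEMPLATE = """You are a security assistant reviewing a merge request.
A security policy was violated by the following vulnerability:

Policy: {policy_name}
Vulnerability: {vulnerability_description}
Affected code:
{code_snippet}

1. Summarize in one paragraph why this violates the policy.
2. Suggest a concrete fix for the vulnerability.
"""


def render_prompt(policy_name: str, vulnerability_description: str, code_snippet: str) -> str:
    # Fill the template; all parameter names here are illustrative.
    return PROMPT_TEMPLATE.format(
        policy_name=policy_name,
        vulnerability_description=vulnerability_description,
        code_snippet=code_snippet,
    )
```

Keeping the template separate from the rendering code makes it easy to iterate on the prompt wording during the spike without touching the calling code.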