Feature Request: Allow Duo MR reviews to learn from user feedback in merge requests
Proposal
A Duo MR review sometimes makes erroneous suggestions, e.g. suggesting that a string comparison be made case-insensitive when the code already calls toLowerCase() to do exactly that.
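As a minimal illustration (hypothetical code, not taken from the ticket) of this kind of false positive: the comparison below is already case-insensitive, so a review suggestion to "make it case-insensitive" would be wrong.

```javascript
// Hypothetical example of code Duo might flag incorrectly.
// Both sides are normalized with toLowerCase(), so "Admin", "ADMIN"
// and "admin" all match; no further change is needed.
function isAdminRole(role) {
  return role.toLowerCase() === 'admin';
}

console.log(isAdminRole('ADMIN')); // true
```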
The user can then provide feedback to Duo via a reply in the MR, and Duo will usually admit its mistake and provide a better response.
At present the feedback does not influence subsequent reviews.
It would be helpful if there were a mechanism by which Duo could learn from this feedback and improve the quality of future code review suggestions, both within the same MR and, ideally, in other MRs belonging to the same project.
One possibility might be to make use of the custom review instructions mechanism to automatically compile a set of additional prompts based on user feedback.
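As a rough sketch of that possibility (the file path and schema here are assumptions based on the custom review instructions feature, and the entry itself is a hypothetical example of a feedback-derived rule):

```yaml
# .gitlab/duo/mr-review-instructions.yaml (assumed location)
# The entry below is a hypothetical example of a rule that could be
# compiled automatically from user feedback on an earlier review.
instructions:
  - name: Learned from reviewer feedback (hypothetical MR)
    fileFilters:
      - "*.js"
    instructions: |
      Before suggesting that a string comparison be made case-insensitive,
      check whether the code already normalizes case, e.g. via toLowerCase().
```

Each entry would record the corrected guidance so subsequent reviews in the project avoid repeating the same mistaken suggestion.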
Examples of erroneous suggestions and user feedback can be found in this GitLab Ultimate customer support ticket (ZD internal link).