MVC Design: [AI Proposal] Explain this Vulnerability

User problem

What user problem will this solve?

GitLab surfaces vulnerabilities with relevant information; however, users often aren't sure where to start. It takes time to research and synthesize the information surfaced within the vulnerability record. Moreover, it can be difficult to figure out how to fix a given vulnerability.

Solution hypothesis

Why do you believe this AI solution is a good way to solve this problem?

Users want to quickly understand a vulnerability so that they know what next steps to take, e.g. what code change they need to make.

Assumption

What assumptions are you making about this problem and the solution?

  • The amount of information for a vulnerability can be under/overwhelming.
  • It is difficult to know where to start.
  • Not all fixes are straightforward.

Personas

What personas have this problem, who is the intended user?

  • Sasha (Software Developer) can use this feature to better understand and potentially fix vulnerability findings before she tries to merge to the default branch.
  • Sam (Security Analyst) uses this feature to quickly triage vulnerabilities and learn about specific vulnerabilities quickly.

Proposal

See design section below.

Note: All members of the team, myself included, are actively monitoring and aligning with ongoing overlapping efforts in other stages that use similar AI-related components. The designs posted here are subject to change as these UX/UI conversations evolve.

Questions

  • Is this an interactive chat feature within GitLab now or in the future?
  • How should/could we allow users to give feedback about the results? 👍/👎, free-text input, a link to a feedback issue, or a Qualtrics survey?
    • Note: this is not required for the MVC. Ideally, the 👍/👎 and free-text input are in a format that we can pull into Sisense.
  • What are all the possible actions that users may be able to take from the results? Copy code, implement solutions via MR, create new MR, ping a teammate to discuss the proposal, create an issue that pulls in the info from the OpenAI result...?
    • Let's start with the ability to copy the code block. (Also not required for the MVC).
    • Another iteration will allow users to create a new MR.
  • Are there legal concerns with users implementing suggested changes and holding GitLab accountable if these suggestions don't, in fact, prevent a security attack? Do we need to promote or include info on OpenAI in the UI or in our docs? Do users have to sign or agree to a terms and conditions first?
    • This is a good question for @tmccaslin and legal. I will follow up in the #ai_vulnerability_explanation Slack channel.

MVC Requirements

  • Info alert in SAST vulnerability pages (Option 2)
  • Drawer opens with findings (information does not persist when drawer is closed, but there's no limit to how many times it can be accessed)

Future iterations

  • CTA(s) in drawer - either create an issue and move the info into the issue description automatically, create an MR, or download the result? TBD
  • Copy code snippets
  • Feedback link/buttons at the bottom (👍/👎, or Helpful and Not helpful or Wrong)
  • Interactive chat functionality
  • Include in security findings (Merge Request and security tab in pipeline)
  • On vulnerability report, indicate which findings have AI results (e.g. icon in activity column and in activity filter)
  • If we decide to limit the number of times a user can click the button, show a confirmation modal with a CTA if the user tries to close

Concerns

Unfortunately, we don't yet have the ability to ping anyone in the vulnerability object, nor can we leave a comment without first changing the status. This will make collaboration and discussion about the AI-assisted feedback/proposals a bit challenging; the workaround is for the user to create an issue from the vuln (or use an external tool, e.g. Slack, and send a link to the vuln). [Note: These problems will be addressed in DESIGN: Vulnerability comment enhancements.]

Edited by Becka Lippert