
Explain this Vulnerability: Expose the Prompt

Problem to Solve

Large language models are quickly gaining acceptance, but users want to better understand what prompt is being sent to the LLM API. Users are asking questions like:

  • Is this AI feature secure?
  • Am I sending something that I shouldn't be sending?
  • Do I have a choice about what I send if I use this feature?

For GitLab, as we build out Explain this Vulnerability through the abstraction layer, it is becoming difficult to validate results without seeing the actual prompt that was sent to the AI. We raised this as a topic in the abstraction layer's epic, but we may need a quicker solution to aid development and testing as we release this experimental feature.

Proposal

  • A user can see what prompt is being used.
  • The user has the option to use a prompt with or without their code before they use the Explain this Vulnerability feature (see the sketch after this list).
  • The user understands the impact on the response if they choose to use a prompt that doesn't include their code.
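To make the proposal concrete, here is a minimal sketch of a prompt preview with an option to include or exclude the user's code. All names in it (`Vulnerability`, `build_prompt`, `include_code`) are illustrative assumptions for discussion, not the actual abstraction layer API.

```python
# Hypothetical sketch only: these names are illustrative assumptions,
# not GitLab's actual AI abstraction layer API.
from dataclasses import dataclass


@dataclass
class Vulnerability:
    name: str
    description: str
    code_snippet: str  # the user's potentially sensitive code


def build_prompt(vuln: Vulnerability, include_code: bool) -> str:
    """Build the prompt that would be sent to the LLM, optionally omitting the code."""
    lines = [
        "Explain the following vulnerability and how to remediate it.",
        f"Name: {vuln.name}",
        f"Description: {vuln.description}",
    ]
    if include_code:
        lines.append(f"Affected code:\n{vuln.code_snippet}")
    else:
        lines.append("(Code snippet withheld by the user; the explanation may be less specific.)")
    return "\n".join(lines)


if __name__ == "__main__":
    vuln = Vulnerability(
        name="SQL Injection",
        description="User input is interpolated directly into a SQL query.",
        code_snippet='query = "SELECT * FROM users WHERE id = " + params[:id]',
    )
    # Show the user exactly what would be sent, with and without their code,
    # before the request goes to the LLM API.
    print(build_prompt(vuln, include_code=True))
    print(build_prompt(vuln, include_code=False))
```

Surfacing the output of something like `build_prompt` before the request is sent would let users answer the questions above themselves, and would also give us the visibility we need to validate results during development and testing.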