[Secondary research] Aura - Secrets Detection and Management
# Background
As AI developer tooling expands, and the design team at GitLab builds out future concepts for AI-driven workflows targeting the builder persona, we want to understand how security tooling like secrets detection and management might fit into those workflows. This issue is a place to collect secondary research and concepts to refer to when creating design concepts for Aura Secrets Detection and Management.
# Primary research
<table>
<tr>
<th>Source</th>
<th>Key takeaways</th>
</tr>
<tr>
<td>
[GitLab DAP Hackathon Survey Results](https://docs.google.com/document/d/1UB5m6-vIxGjCoK15V3aNLuUgkt23QfwNAqzKJuwgqTw/edit?tab=t.iq8jut8ka955#heading=h.q0rihuc82k42)
</td>
<td>
* What drives purchase decisions: **Security and governance** - especially important for enterprise/financial clients (_Note: 60% of participants said security, even though only 3% of them were security engineers! Almost 80% were in engineering roles, so this is coming mostly from our Builder persona._)
* Also mentioned: **Competitive pricing** (value relative to cost, time, and effort), **ease of use** (low friction to get started and maintain), **customization freedom** (ability to adapt tools to specific needs)
* Users want the tool to be **proactive** but **non-intrusive**, which points to a need for **contextual intelligence**:
  * **When something is wrong or could be better** → be proactive (flag errors, suggest optimizations, catch bottlenecks, alert on security practices)
* **Engineers want GitLab to be "the senior engineer"**—competent, transparent, proactive when something is wrong, invisible when things are running smoothly.
* **Human oversight is non-negotiable.** All interviewed participants insisted on approval gates for AI actions, especially for security-sensitive operations.
* Aspects that drive repeated use vs abandonment: **Security and destructive behavior are trust-breakers** - participants will abandon tools that pose risks.
* **Trust is built on reputation (66%) and data transparency (63%)**; broken by harmful changes (66%) and data access violations (59%). When things go wrong, 61% expect shared accountability across the builder, approver, runner, and team.
</td>
</tr>
</table>
# Secondary research
<table>
<tr>
<th>Source</th>
<th>Key takeaways</th>
</tr>
<tr>
<td>
[Accelerating the Adoption of Software and AI Agent Identity and Authorization (NIST)](https://www.nccoe.nist.gov/sites/default/files/2026-02/accelerating-the-adoption-of-software-and-ai-agent-identity-and-authorization-concept-paper.pdf)
</td>
<td>
* Proposes that AI agents should be treated as identifiable entities within enterprise identity systems rather than as anonymous automation running under shared credentials
* Open questions:
* Can authorization policies for AI agents adapt in real time as an agent's operational context changes (for example, when an agent gains access to new tools and the sensitivity of the data it can aggregate shifts)?
* How can least privilege be established for an agent whose required actions might not be fully predictable at deployment time?
* How should "on behalf of" delegation be handled, binding an agent's identity to a human identity in human-in-the-loop scenarios? (A minimal sketch of this delegation pattern follows the table below.)
</td>
</tr>
<tr>
<td>
[Establishing Workload Identity for Zero-trust CI/CD: From Secrets to SPIFFE-Based Authentication](https://arxiv.org/pdf/2504.14760)
</td>
<td></td>
</tr>
</table>
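To make the NIST delegation questions above more concrete, here is a minimal sketch of what binding an agent's identity to the human it acts on behalf of could look like. It assumes SPIFFE-style workload IDs for agent identity and borrows the actor ("act") claim pattern from OAuth 2.0 Token Exchange (RFC 8693); the names, scopes, and functions (`AgentIdentity`, `mint_delegated_claims`, `requires_human_approval`) are hypothetical and do not reflect any GitLab or NIST API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical scope names; a real deployment would define these in policy.
SENSITIVE_SCOPES = {"secrets:read", "secrets:rotate"}

@dataclass
class AgentIdentity:
    """An AI agent treated as a first-class, identifiable workload (not a shared credential)."""
    spiffe_id: str                      # e.g. "spiffe://example.org/agent/aura-secrets" (illustrative)
    scopes: set = field(default_factory=set)

def mint_delegated_claims(agent: AgentIdentity, human_sub: str, requested: set) -> dict:
    """Build a claims set for an agent acting on behalf of a human.

    Loosely modeled on the "act" (actor) claim from OAuth 2.0 Token Exchange
    (RFC 8693): the subject is the agent, and the actor chain records the human
    it acts for, so actions can be attributed to both identities in audit logs.
    """
    granted = requested & agent.scopes  # least privilege: never exceed the agent's own grants
    now = datetime.now(timezone.utc)
    return {
        "sub": agent.spiffe_id,                                  # the agent is the subject
        "act": {"sub": human_sub},                               # delegation: on behalf of this human
        "scope": sorted(granted),
        "iat": int(now.timestamp()),
        "exp": int((now + timedelta(minutes=15)).timestamp()),   # short-lived by default
    }

def requires_human_approval(claims: dict) -> bool:
    """Approval gate: security-sensitive scopes need explicit human sign-off before the agent acts."""
    return bool(SENSITIVE_SCOPES & set(claims["scope"]))

# Illustrative usage
agent = AgentIdentity("spiffe://example.org/agent/aura-secrets", {"secrets:read", "repo:read"})
claims = mint_delegated_claims(agent, human_sub="uid:alice", requested={"secrets:read"})
assert claims["act"]["sub"] == "uid:alice"
assert requires_human_approval(claims)  # secrets access is still gated behind a human
```

The short expiry and the scope intersection reflect the least-privilege and human-approval expectations raised in both the NIST concept paper and the hackathon survey; a real implementation would sign these claims (for example as a JWT or SPIFFE SVID) and evaluate them in a policy engine rather than in application code.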