Relationally Intelligent: Making Duo understand your projects, your team's practices, and your personal preferences
Problem
This issue looks into two distinct problems and finds that they share the same root cause.
Problem 1: Preferences and rules
Users struggle to make Duo's outputs consistently align with their personal preferences and their team's established practices, forcing them to repeatedly correct or modify Duo's responses to match their needs.
Problem 2: Setting context and giving instructions
Users lack efficient ways to provide Duo with consistent project context and instructions for related or identical tasks, resulting in repetitive effort spent re-explaining the same requirements across different conversations/interactions.
Shared root cause: Duo does not build relationships
Humans naturally work by building relationships and context over multiple interactions and conversations. In contrast, Duo, like other AI assistants, has been designed to treat each interaction as isolated, without persistent context. This mismatch creates a barrier to effective collaboration between users and AI assistants.
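To make the mismatch concrete, here is a minimal Python sketch of the difference between an assistant that starts every interaction from zero and one that carries learned context forward. This is illustrative only, not GitLab code; the class names and the `call_model` stub are assumptions.

```python
# Minimal sketch (illustrative only, not GitLab code) contrasting a stateless
# assistant with one that builds persistent context across interactions.

def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"<response to {len(prompt)} chars of prompt>"

class StatelessAssistant:
    """Every interaction starts from zero: the user must restate all context."""

    def ask(self, prompt: str) -> str:
        return call_model(prompt)

class RelationalAssistant:
    """Persists learned preferences and practices between conversations."""

    def __init__(self) -> None:
        self.memory: list[str] = []

    def remember(self, fact: str) -> None:
        # e.g. "the team uses conventional commit messages"
        self.memory.append(fact)

    def ask(self, prompt: str) -> str:
        # Learned context is prepended automatically on every interaction,
        # so the user never has to re-explain it.
        context = "\n".join(self.memory)
        return call_model(f"{context}\n\n{prompt}" if context else prompt)
```

With the stateless variant, "use conventional commits" has to be repeated in every prompt; with the relational variant, it is stated once via `remember()` and applied thereafter.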
Think big vision
Duo is a true partner that builds deep, persistent understanding of your project, your organization's practices, your team's practices, and your individual preferences. It applies this understanding whether it's autonomously solving problems, engaging in conversations, reviewing code, or helping you write new code.
This understanding evolves naturally as your practices change - just as Duo learns your initial preferences and practices, it recognizes when they evolve and adapts its behavior accordingly.
When new team members join, Duo smooths their onboarding by sharing established project/org/team practices while learning their individual preferences, helping integrate their ways of working with their team's existing patterns.
Think small: MVC
Our first effort focuses on Problem 1, via Custom rules for Duo (&16938).
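For illustration, here is a minimal sketch of how per-project custom rules could be injected into every interaction. The file path `.gitlab/duo_rules.md`, the function names, and the prompt wording are hypothetical assumptions for this sketch, not the shipped design of Custom rules for Duo.

```python
# Hypothetical sketch only: the path, file format, and prompt wording below
# are assumptions, not the actual design of "Custom rules for Duo" (&16938).
from pathlib import Path

def load_rules(repo_root: Path) -> str:
    """Read a per-project rules file, if the project defines one."""
    rules_file = repo_root / ".gitlab" / "duo_rules.md"  # hypothetical path
    return rules_file.read_text() if rules_file.exists() else ""

def build_prompt(user_prompt: str, repo_root: Path) -> str:
    """Prepend project rules once per interaction so users stop restating them."""
    rules = load_rules(repo_root)
    if rules:
        return f"Follow these project rules:\n{rules}\n\nTask:\n{user_prompt}"
    return user_prompt
```

The design point is that the rules live with the project, so every team member's interactions pick them up automatically, addressing Problem 1 without per-conversation effort.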
Evidence
The following shows how the Problem Statement above was derived.
Observations of collected requests around Duo customization
The following table collects customer-raised wishes and needs, as well as diverse proposals for solutions to address them. They appear to fall into two distinct categories:
| Customizing outputs to follow style guides and other preferences for how to respond | Efficient context setting |
|---|---|
Similar observations in the market
In the market, we observe features and capabilities that fully or partly address these requests. The way some of the requests and proposed solutions above are formulated suggests they were inspired by what is happening in the market.
See details on market observations in this comment.
Problem refinement
| Preferences and rules | Setting context and giving instructions |
|---|---|
| Users struggle to make Duo's outputs consistently align with their personal preferences and their team's established practices, forcing them to repeatedly correct or modify Duo's responses to match their needs. | Users lack efficient ways to provide Duo with consistent project context and instructions for related or identical tasks, resulting in repetitive effort spent re-explaining the same requirements across different conversations/interactions. |
| Note: this one is more about OUTPUT customization (how Duo responds)... | ...while this one is more about INPUT standardization (how users communicate their repeated needs to Duo). |
| Note: an alternative framing could be "Users need to repeat their personal preferences and their team's established practices every time they start a new conversation/interaction, which is very inefficient." However, from interactions with users, I sense that they are more likely to just not use our features than to do this extra work. The 5 Whys for this one would lead to the same root cause anyway. | Note: an alternative framing could be "Users struggle to compose effective prompts because writing them is hard. This results in poor responses from the AI and lost time, as users have to iterate until they get what they want." For some users and some use cases this may be true today. However, much as with the introduction of Google search, we can expect both that users will learn to write better prompts and that models will get better at understanding less well-written ones. (What GitLab is doing to address this is tracked independently of this issue.) Still, the original framing of the problem (in the first cell of this column) will not go away with time: context and instructions will always have to be given as long as the relationship between users and AI is one of AI assisting the user. This is reflected in Anthropic's golden rule of clear prompting: "Show your prompt to a colleague, ideally someone who has minimal context on the task, and ask them to follow the instructions. If they're confused, Claude will likely be too." Hence, this alternative framing was not considered further here. |
| 5 Whys: | 5 Whys: |
Shared root cause: Duo does not build relationships
Both 5-Whys analyses converge on the shared root cause stated in the Problem section above: Duo treats each interaction as isolated instead of building relationships and persistent context the way human collaborators do.