
Relationally Intelligent: Making Duo understand your projects, your team's practices, and your personal preferences

💡 This analysis led to defining "Relationally Intelligent" as one of the Core Trust Pillars in our strategy to make Duo a trusted collaborator. 💡


Problem

This issue looks into two distinct problems and finds that they share the same root cause.

Problem 1: Preferences and rules

Users struggle to make Duo's outputs consistently align with their personal preferences and their team's established practices, forcing them to repeatedly correct or modify Duo's responses to match their needs.

Problem 2: Setting context and giving instructions

Users lack efficient ways to provide Duo with consistent project context and instructions for related or identical tasks, resulting in repetitive effort spent re-explaining the same requirements across different conversations/interactions.

Shared root cause: Duo does not build relationships

Humans naturally work by building relationships and context over multiple interactions/conversations. In contrast, Duo (and other AI assistants) have been designed to treat each interaction as isolated, without persistent context. This mismatch creates a barrier to effective collaboration between users and AI assistants.

Think big vision

Duo is a true partner that builds deep, persistent understanding of your project, your organization's practices, your team's practices, and your individual preferences. It applies this understanding whether it's autonomously solving problems, engaging in conversations, reviewing code, or helping you write new code.

This understanding evolves naturally as your practices change - just as Duo learns your initial preferences and practices, it recognizes when they evolve and adapts its behavior accordingly.

When new team members join, Duo smooths their onboarding by sharing established project/org/team practices while learning their individual preferences, helping integrate their ways of working with their team's existing patterns.

Think small: MVC

Our first effort focuses on Problem 1, via Custom rules for Duo (&16938).
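
To make this concrete, below is a minimal illustrative sketch of the general mechanism that repository-level custom rules imply: persistent instructions are defined once and applied to every interaction, so users do not have to restate them. The file name .gitlab/duo_rules.md and the prompt-assembly logic are assumptions made for illustration only, not the actual design of the Custom rules for Duo epic.

```python
# Illustrative sketch only: the file name, location, and prompt assembly below
# are assumptions, not the actual Custom rules for Duo implementation.
from pathlib import Path

RULES_FILE = Path(".gitlab/duo_rules.md")  # hypothetical repository-level rules file


def load_custom_rules() -> str:
    """Read persistent team/project rules if the repository defines them."""
    if RULES_FILE.exists():
        return RULES_FILE.read_text(encoding="utf-8").strip()
    return ""


def build_prompt(user_message: str) -> str:
    """Prepend the persistent rules to every request so the user does not
    have to repeat preferences and practices in each conversation."""
    rules = load_custom_rules()
    if rules:
        return f"Project rules to follow:\n{rules}\n\nUser request:\n{user_message}"
    return user_message


if __name__ == "__main__":
    # A rule such as "Always use Java 21 for code suggestions" stored in the
    # rules file would now apply to every request without being restated.
    print(build_prompt("Generate a service class for order processing."))
```

With such a mechanism, a rule like "Always use Java 21 for code suggestions" would be picked up in every interaction instead of being repeated by the user each time.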

Evidence

The following shows how the problem statement above was derived.

Observations of collective requests around Duo customization

The following collects customer-raised wishes and needs, as well as diverse proposals for solutions to address them. They fall into two distinct categories:

Customizing outputs to follow style guides and other preferences for how to respond:
  • Feature request: use style guide with Duo: Incorporate internal coding style guides into Duo Chat and Code Suggestions. Use cases: ensure code suggestions follow customer-specific guidelines, diagnose style-related pipeline failures, and support various development environments. See also https://gitlab.com/gitlab-org/gitlab/-/issues/520180
  • Custom rules for Duo (&16938)
  • Support non-English users in Duo (add a suffix feature as the easiest way to support multiple languages in communication with Duo): Add multilingual support to Duo Chat, particularly for non-English users. Use cases: improve comprehension for non-English speakers, reduce time spent on English interpretation, and increase efficiency in design, coding, and testing for international users.
  • Allow for custom prompt instructions in Duo Code Review: Enable users to add custom instructions to the existing code review prompt. Use cases: tailor code reviews to project-specific needs, enforce company-specific coding standards, and provide context for specific file changes or coding patterns.
  • Additional quotes:
    • As a developer, I would like to have code suggestions or answers from the Chat based on internal coding best practices. As a platform Engineer, I would like to define where the internal coding best practices are located.
    • Customer is looking for a style guide for Code Suggestions (always use Java 21 for code suggestions).
    • Is there a way to add more context (instructions, not files) to Duo Chat in the IDE similar to how you can "Set project instructions" in Claude? I'm basically trying to get Duo Chat to have some basic instructions (e.g. "Reply with a code snippet, do not explain.") without having to repeat them with every prompt. (Claude uses the concept of a project which holds chat which all share the project context.)
    • Using the GitHub Copilot copilot-instructions.md configuration in a repo, I'm able to generate code in my IDE (VSCode) that follows our local best practices for Terraform folder layout, content, etc. Is something like that possible to do with Duo?
    • It would be great to be able to ... customise the system prompt.

Efficient context setting:

  • Custom Slash Commands: Implement custom slash commands in Duo Chat. Use cases: create project-specific shortcuts, automate repetitive tasks, and enhance workflow efficiency within the GitLab ecosystem.
    • Project-Specific Prompt Library for GitLab Duo: Implement a project-specific prompt library for GitLab Duo. Use cases: quickly access project-relevant context, improve onboarding for new team members, ensure consistency in Duo interactions, and enhance efficiency by reducing repetitive context-setting.
    • [UX] Custom Slash Commands: Design and implement custom slash commands for Duo Chat. Use cases: create and configure project-specific commands, invoke custom responses or functions, validate commands against custom datasets, and enhance user interaction within Duo Chat.
  • Prompt Customization Across Use Cases: Develop a cohesive approach to prompt customization across various AI-powered domains in Duo. Use cases: enable user-specific prompt customization, implement custom instructions at different levels (instance, project, user), and ensure consistency in prompt customization across different Duo features.
  • Prompt Templates for GitLab Duo Chat: Implement reusable prompt templates for Duo Chat. Use cases: save and reuse initial sets of prompts, reduce time spent on repetitive context-setting, improve consistency across chat sessions, and enhance user efficiency when working with Duo Chat.
  • Team members collecting and sharing prompts, along with how they use and benefit from them, so others can copy-paste or learn from them.
  • Additional quotes:
    • Internal user: I am thinking of setting up a project inside GitLab to version control and archive prompts, custom project instructions, and styles. Is there already a project created somewhere to use?
    • It would be great to be able to configure custom instructions.

Similar observations in the market

In the market, we observe features and capabilities that fully or partly address these requests. The way some of the requests and proposed solutions above are formulated suggests they were inspired by what is already happening in the market.

See details on market observation in this comment.

Problem refinement

Preferences and rules

Users struggle to make Duo's outputs consistently align with their personal preferences and their team's established practices, forcing them to repeatedly correct or modify Duo's responses to match their needs.

Note: this problem is primarily about OUTPUT customization (how Duo responds).

Note: an alternative framing could be "Users need to repeat their personal preferences and their team's established practices every time they start a new conversation/interaction, which is very inefficient."

However, from interactions with users, I sense that they are more likely to simply not use our features than to do this extra work.

The 5 whys for this framing would result in the same root cause anyway.

5 Whys:

1. Why do users struggle to make Duo's outputs consistently align with their personal preferences and their team's established practices? Because Duo doesn't maintain any persistent understanding of how users want it to work, requiring them to restate their preferences in each interaction.
2. Why doesn't Duo maintain a persistent understanding? Because the system treats each conversation as isolated rather than part of an ongoing relationship, starting fresh each time without building upon previous conversations.
3. Why are conversations treated as isolated? Because we've modeled AI assistants on a single-conversation paradigm rather than on how humans actually develop working relationships, ignoring the continuous nature of real collaboration.
4. Why did we model it on single conversations? Because we're treating AI conversations as independent exchanges rather than acknowledging that productive work relationships involve building shared context and understanding over time.
5. Why haven't we acknowledged this? Because we haven't fully recognized that productive work is inherently iterative and context-dependent, not just a series of independent conversations. Our current approach inherits patterns from transaction-based computer interfaces rather than relationship-based human collaboration.

Setting context and giving instructions

Users lack efficient ways to provide Duo with consistent project context and instructions for related or identical tasks, resulting in repetitive effort spent re-explaining the same requirements across different conversations/interactions.

Note: this problem, in contrast, is primarily about INPUT standardization (how users communicate their repeated needs to Duo).

Note: an alternative framing could be "Users struggle to compose effective prompts because writing good prompts is hard. This results in poor responses from the AI and lost time, as users have to iterate until they get what they want."

For some users and for some use cases this may be true today. However, similar to when Google search was introduced, we can expect both that users will learn how to write better prompts and that models will get better at understanding less well-written prompts.

What GitLab is doing to address this (independent of this issue):

  • Run evals with diverse inputs and improve responses where possible on the system prompt side.
  • Update to newer and better models as they become available.
  • Provide learning material such as examples, blog posts, and tutorials.
  • Educate users through the interface itself: #509992 (closed)

However, the original framing of the problem (at the top of this subsection) will not go away with time. Context and instructions will always have to be given as long as the relation between users and AI is one of AI assisting the user. This is reflected in Anthropic's golden rule of clear prompting: show your prompt to a colleague, ideally someone who has minimal context on the task, and ask them to follow the instructions. If they're confused, Claude will likely be too.

Hence, this alternative framing was not considered further here.

5 Whys:

1. Why do users lack efficient ways to provide Duo with consistent project context and instructions? Because they have to manually repeat the project context and instructions in each conversation/interaction with Duo.
2. Why do users have to repeat context manually? Because the system doesn't have a way to maintain project context across conversations, requiring users to restate information that remains constant.
3. Why doesn't the system maintain project context across conversations/interactions? Because we've designed around isolated conversations/interactions rather than ongoing project work, treating each conversation as a fresh start without previous context.
4. Why have we designed around isolated conversations/interactions? Because we're modeling AI interactions as conversations or momentary interactions rather than as participation in ongoing project work, failing to account for how projects evolve and build shared understanding over time.
5. Why are we modeling AI interaction as conversations? Because we're treating AI as a conversation partner rather than a project participant, ignoring that project work involves continuous building of context and understanding across multiple interactions.
6. Why are we treating AI as just a conversation partner? Because we haven't fully recognized that productive work is inherently iterative and context-dependent, not just a series of independent conversations. Our current approach inherits patterns from transaction-based computer interfaces rather than relationship-based project collaboration.

Shared root cause: Duo does not build relationships

Both 5-why chains converge on the same root cause: humans naturally work by building relationships and context over multiple interactions/conversations, while Duo (and other AI assistants) have been designed to treat each interaction as isolated, without persistent context. This mismatch creates a barrier to effective collaboration between users and AI assistants.
