Create a `.gitlab/duo/mr-review-instructions.yaml` file in your repository:
```yaml
instructions:
  - name: Ruby conventions
    fileFilters:
      - "**/*.rb"
      - "!spec/**/*.rb"
    instructions: |
      1. All public methods must have Sorbet type signatures
      2. Prefer keyword arguments for methods with 3 or more parameters
      3. Do not use `rescue Exception` — rescue specific error classes only
  - name: Test quality
    fileFilters:
      - "spec/**/*.rb"
    instructions: |
      1. Every `it` block description must read as a complete sentence that
         describes behaviour, not implementation (e.g. "returns an empty array
         when the user has no projects", not "empty array")
      2. Shared examples should be named so their inclusion reads naturally
         (e.g. `it_behaves_like "a paginated endpoint"`)
      3. Avoid `allow_any_instance_of` — use proper doubles or dependency injection
      4. Flag any test that makes real network calls without stubbing
      5. Flag tests that assert on too many things at once — suggest splitting
         them into focused examples
  - name: CI configuration
    fileFilters:
      - ".gitlab-ci.yml"
      - ".gitlab/ci/**/*.yml"
    instructions: |
      1. New jobs must define a `stage` explicitly
      2. Do not hardcode environment-specific values — use CI/CD variables
  - name: Code review clarity
    fileFilters:
      - "**/*.rb"
    instructions: |
      1. Highlight where intent is unclear and suggest a rename or a comment
         to explain the why
      2. If error handling swallows exceptions silently, flag it and ask
         whether the failure should be surfaced
```
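A malformed instructions file can silently fail to apply, so it is worth sanity-checking that the YAML parses and that each rule carries the fields used above. A minimal sketch (the required-field list simply mirrors the keys in the example; it is not an official schema):

```ruby
require "yaml"

# The instructions are inlined here to keep the example self-contained;
# in practice, load .gitlab/duo/mr-review-instructions.yaml instead.
doc = YAML.safe_load(<<~YAML)
  instructions:
    - name: Ruby conventions
      fileFilters:
        - "**/*.rb"
      instructions: |
        1. All public methods must have Sorbet type signatures
YAML

rules = doc.fetch("instructions")

rules.each do |rule|
  # Each rule needs a name, a list of file filters, and the instructions text.
  %w[name fileFilters instructions].each do |key|
    raise "rule #{rule["name"].inspect} is missing #{key}" unless rule[key]
  end
  raise "fileFilters must be a list" unless rule["fileFilters"].is_a?(Array)
end

puts "#{rules.size} rule(s) look structurally valid"
```

Running a check like this in CI (or as a pre-commit hook) catches indentation mistakes before they reach a reviewer.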
**What to codify — and what not to:**

Reserve MR review instructions for standards that require judgement or context to evaluate:

- Readability and intent ("does this description read as a sentence?")
- Architectural fit ("does this belong in a service object or the model?")
- Patterns that are too contextual to express as a lint rule
- Conventions frequently raised in human reviews but easy to miss
- Project-specific rules not covered by linters or existing tooling

Keep each instruction clear enough to be checked mechanically (avoid vague rules like "write clean code"), and avoid duplicating checks already covered by static analysis. If RuboCop or Danger can catch it deterministically, let them — it's cheaper, faster, and surfaced directly in the IDE. Overlapping with linters adds noise without value.
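For instance, the `rescue Exception` rule above already has a deterministic RuboCop equivalent, `Lint/RescueException`, and leftover debugger calls such as `binding.pry` are caught by `Lint/Debugger`. Checks like these belong in `.rubocop.yml` rather than in review instructions (a sketch; verify cop names against your installed RuboCop version):

```yaml
# .rubocop.yml: enforce deterministically what Duo would otherwise have to flag
Lint/RescueException:   # forbids `rescue Exception`
  Enabled: true
Lint/Debugger:          # forbids leftover binding.pry / byebug calls
  Enabled: true
```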
Create the file in your repository:
```yaml
instructions:
  - name: My team standards
    fileFilters:
      - "**/*.rb"
    instructions: |
      1. Public methods must have Sorbet type signatures
      2. Avoid `rescue Exception` — rescue specific error classes
      3. Do not leave debugging output (puts, pp, binding.pry) in production code
  - name: Test descriptions
    fileFilters:
      - "spec/**/*.rb"
    instructions: |
      1. Every `it` block description must read as a complete sentence
         describing behaviour, not implementation
      2. Flag tests that assert on too many things — suggest splitting them
```
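To predict which files a `fileFilters` pattern will target, you can experiment locally with shell-style glob matching. A rough approximation using Ruby's `File.fnmatch` (Duo's exact matching semantics, particularly around `!` exclusions, may differ, so treat this only as a quick intuition check):

```ruby
# File::FNM_PATHNAME makes "*" stop at "/" and lets "**/" span directories,
# which is close to the .gitignore-style globs used in fileFilters.
FLAGS = File::FNM_PATHNAME

p File.fnmatch("spec/**/*.rb", "spec/models/user_spec.rb", FLAGS)  # spec file: matched
p File.fnmatch("spec/**/*.rb", "app/models/user.rb", FLAGS)        # app file: not matched
```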
Duo will include these checks whenever it reviews an MR touching matching files, and will comment with a reference to the rule name when it finds a violation.
**Good rules to start with:**

- Patterns your team raises in almost every review ("don't forget the type sig")
- Things linters don't catch (semantic or architectural conventions)
- Things that require judgement to evaluate — readability, clarity, intent
- Conventions too contextual to express as a lint rule
- Common mistakes specific to your codebase

**What to avoid:**

Anything static analysis already enforces. Lean towards static checks by default when possible, even if that means using AI to generate the checkers: they're cheaper, deterministic, and surfaced in the IDE before the MR is even opened.
See the MR review instructions section of the [main AI page](../) for a fuller example and the complete YAML reference.