- Check any obvious related performance epics (for the affected area).
1. **Check infrastructure / Dedicated trackers**:
- Look for GitLab.com / Dedicated incidents with similar:
- Symptoms (timeouts, high DB load, lock contention)
- Error messages or query patterns
- Workers / features
1. **Consult the owning development group if needed**:
- Use the RFC process to avoid investigating in isolation.
- When you see a likely match but are unsure, @-mention the relevant group on an existing issue and briefly summarize customer evidence.
### 4. Decide: existing issue vs new issue
#### 4.1 When you find a good match
If there is an existing GitLab issue that matches the customer symptoms:
- **Link the ticket** to the issue:
- Add a short comment in the GitLab issue with:
- Deployment type, GitLab version
- Instance size / notable configuration
- High-level impact (for example, "Ultimate, ~10k active users, many MRs per day")
- Ensure customer-related labels are present:
  - `customer` and relevant deployment-type/performance labels where appropriate
- Note **any mitigations** tried or known (from .com / Dedicated or docs) and whether they helped.
#### 4.2 When you **don't** find a match
1. If similar patterns **are** present on .com or Dedicated:
- **Create a new issue** in `gitlab-org/gitlab` (or the appropriate project) that:
- Summarizes the problem
- Includes evidence from both:
- The customer environment, and
- GitLab.com / Dedicated logs
- Add labels such as:
  - `customer`, `performance`, `infradev`, and deployment-type labels
- Tag the owning group and link to any related epics.
- Link the customer ticket to this new issue.
1. If **no similar patterns** show up on .com / Dedicated:
- Treat it as **potentially self-managed-specific**:
- Configuration, scaling, or environment-specific behavior
- If the issue is **significant and reproducible**, still **open a GitLab issue** with:
- Clear reproduction notes
- Customer impact
- Any hypotheses (for example, schema, index, configuration, or workload characteristics)
### 5. Drive the database investigation
While the cross-reference work proceeds, continue driving the technical investigation:
- Use existing **Database Help / DB Support Pod** workflows for:
- Query analysis and slow-query identification
- Index / schema review and background migration checks
- Lock / blocking analysis and connection saturation
- Loop in **Database Engineering / DBO** when:
- The issue appears systemic or risky to change
- You need deeper guidance on schema, partitioning, or background migrations
Document your findings in the ticket and, when relevant, in the linked GitLab issue.
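The slow-query and lock-analysis steps above can be started with standard PostgreSQL catalog queries. This is a sketch, not part of the official DB Support Pod workflow: it assumes the `pg_stat_statements` extension is enabled and uses the PostgreSQL 13+ column names (`total_exec_time`, `mean_exec_time`):

```sql
-- Top statements by cumulative execution time (requires pg_stat_statements).
SELECT round(total_exec_time::numeric, 1) AS total_ms,
       calls,
       round(mean_exec_time::numeric, 1) AS mean_ms,
       left(query, 120) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Sessions currently waiting on a lock, with the PIDs blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       left(query, 120) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

On an Omnibus instance these can typically be run via `sudo gitlab-psql`; capture the output (and matching timestamps) in the ticket so it can be compared against GitLab.com / Dedicated evidence.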
### 6. Close the feedback loop
To make the work reusable across deployments:
- **As new information appears** (from the customer or Infrastructure):
- Add it to the linked GitLab issue as a comment.
- When a **fix or mitigation** is merged:
- Record:
- Version(s) containing the fix
- Any backports or feature flags required
- Help the customer apply and verify the fix.
- After validation:
- Comment on the GitLab issue with:
- Customer confirmation and any metrics before/after
- Whether the same fix should be proactively considered for other deployment types.
---
## Related
- [Database Support Pod documentation](../../support/support-pods/database/)
---
## Reference
- [RFC: Process for Cross-Referencing Self-Managed Performance Issues with GitLab.com and GitLab Dedicated](https://gitlab.com/gitlab-com/support/support-team-meta/-/issues/7411)