Assess failing links in `docs-i18n-lint links` jobs
Overview
Assess what the manual `docs-i18n-lint links` jobs report for projects with high translation coverage, then use that data to decide whether to promote these jobs from manual/allow-failure to automatic enforcement.
Approach
Create minimal test MRs in each production fork to trigger the manual `docs-i18n-lint links` job and collect link validation reports:
- 100% translated projects: Runner, Operator, Omnibus, Charts
- High coverage project: GitLab (close to 80%)
For each project:
- Create a test MR with a small, inconsequential change (e.g., whitespace, comment)
- Manually trigger the `docs-i18n-lint links` job in the MR pipeline
- Document which links are failing (if any)
- Analyze patterns and severity
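Triggering the job by hand in five forks gets tedious, so the play step can be scripted. A minimal sketch against the GitLab REST API (`POST /projects/:id/jobs/:job_id/play`); the project path, job ID, and `GITLAB_TOKEN` variable are placeholders, not values from this issue:

```python
import json
import os
import urllib.parse
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"  # assumed instance


def play_job_url(project: str, job_id: int, base: str = GITLAB_API) -> str:
    """Build the API URL that starts a manual job (POST .../jobs/:id/play)."""
    return f"{base}/projects/{urllib.parse.quote_plus(project)}/jobs/{job_id}/play"


def trigger_manual_job(project: str, job_id: int, token: str) -> dict:
    """Start the manual job and return the job JSON from the API response."""
    req = urllib.request.Request(
        play_job_url(project, job_id),
        method="POST",
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__" and "GITLAB_TOKEN" in os.environ:
    # Job ID comes from the MR pipeline's job list (hypothetical value here).
    trigger_manual_job("gitlab-org/gitlab-runner", 123456789, os.environ["GITLAB_TOKEN"])
```

The job ID can be read from the MR pipeline's jobs list (`GET /projects/:id/pipelines/:pipeline_id/jobs`) before calling the play endpoint.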
Data to Collect
- Which links are failing and why (broken external links, broken anchors, network issues, etc.)
- How many failures per project
- Whether failures are consistent or intermittent
- Severity assessment (critical vs. minor)
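Once the reports are in, a small script can tally failures per category. The raw log format depends on the link checker the job runs, so this sketch assumes the failures have already been transcribed into `(url, outcome)` pairs; the outcome strings and category names are illustrative, not the job's actual output:

```python
from collections import Counter

# Hypothetical normalized records transcribed from one project's job log.
FAILURES = [
    ("https://example.com/removed-page", "404"),
    ("https://docs.gitlab.com/ee/#old-anchor", "missing-anchor"),
    ("https://example.org/api", "timeout"),
    ("https://example.com/also-gone", "404"),
]


def categorize(outcome: str) -> str:
    """Map a raw outcome onto the categories used in this issue."""
    if outcome in {"404", "410"}:
        return "broken external link"
    if outcome == "missing-anchor":
        return "broken anchor"
    if outcome in {"timeout", "connection-error"}:
        return "network issue (likely intermittent)"
    return "other"


def summarize(failures):
    """Count failures per category to gauge severity for one project."""
    return Counter(categorize(outcome) for _, outcome in failures)


print(summarize(FAILURES))
```

Running the same summary over traces from repeated pipeline runs also answers the consistent-vs-intermittent question: network-issue counts that vary between runs point at flakiness rather than broken docs.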
Decision Point
Based on the CI reports from all projects, determine:
- Are the jobs reporting actionable issues?
- Are failures blocking or informational?
- Should we move to automatic enforcement or adjust job configuration?
- Do we need to fix broken links before enabling enforcement?
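For reference, moving a job from manual/allow-failure to automatic enforcement is a small CI configuration change. The job name matches this issue, but the base job name and docs path below are assumptions, not the projects' actual configuration:

```yaml
docs-i18n-lint links:
  extends: .docs-i18n-lint   # assumed base job
  # Enforcement: remove `when: manual` and `allow_failure: true` so the job
  # runs automatically and a failing link check blocks the MR.
  rules:
    - changes:
        - "docs-locale/**/*"   # assumed path for translated docs
```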
Acceptance Criteria
- Test MRs created in Runner, Operator, Omnibus, Charts, and GitLab forks
- Manual `docs-i18n-lint links` jobs triggered and reports collected
- Failing links documented and categorized
- Decision made on next steps (enforcement, remediation, or configuration changes)