2022-05-05: Errors on Staging - repository has no up to date replicas
Incident Roles
The DRI for this incident is the incident issue assignee; see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @afappiano
- Engineer on-call (EOC): @cmcfarland
Current Status: Incident mitigated
There was a change in Gitaly that generated a hard error for unavailable repositories, which propagated to 500 errors. The change has been reverted; we're now investigating how we got into this state.
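To make the failure mode concrete, below is a minimal, hypothetical Go sketch (not the actual Gitaly/Praefect code) of how a "repository has no up to date replicas" error can arise when an accessor call is routed: if no replica's generation matches the repository's latest generation, there is nothing to route the call to and it fails. Returned without an explicit gRPC status, such an error reaches the client as gRPC code 2 (Unknown), which is why the Kibana message is prefixed with `2:`, and an unrescued gRPC error on the Rails side then surfaces as an HTTP 500.

```go
// Hypothetical sketch, assuming a simplified model of Praefect's replica
// metadata. Names and structure are illustrative only.
package main

import (
	"errors"
	"fmt"
)

// replica models the metadata kept per storage for one repository.
type replica struct {
	storage    string
	generation int
}

var errNoUpToDateReplicas = errors.New("repository has no up to date replicas")

// consistentStorages returns the storages whose replica generation matches the
// repository's latest generation. If none match, an accessor RPC cannot be routed.
func consistentStorages(latestGeneration int, replicas []replica) ([]string, error) {
	var storages []string
	for _, r := range replicas {
		if r.generation == latestGeneration {
			storages = append(storages, r.storage)
		}
	}
	if len(storages) == 0 {
		// Without an explicit gRPC status code, this kind of error propagates to
		// the client as code 2 (Unknown), matching the "2:" prefix seen in Kibana.
		return nil, fmt.Errorf("consistent storages: %w", errNoUpToDateReplicas)
	}
	return storages, nil
}

func main() {
	// A repository at generation 3 whose replicas are all stale (generation 2):
	_, err := consistentStorages(3, []replica{{"gitaly-01", 2}, {"gitaly-02", 2}})
	fmt.Println(err) // consistent storages: repository has no up to date replicas
}
```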
Summary for CMOC notice / Exec summary:
- Customer Impact: There is no impact to customers.
- Service Impact: Deploys to gitlab.com are currently blocked.
- Impact Duration: 2022-05-05 14:44 - 2022-05-06 06:27 UTC (943 minutes)
- Root cause: RootCause::Software-Change - gitlab-org/gitaly!4518 (merged) changed the Gitaly behavior to throw an error that is not handled gracefully by Rails.
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-05-05

- 14:44 - @willmeek reports a number of failures on staging
- 14:52 - @samihiltunen identifies gitlab-org/gitaly!4518 (merged) as the root cause.
- 15:00 - @willmeek declares the incident in Slack.
- 15:09 - gitlab-org/gitaly!4518 (merged) is being deployed to production at the moment. It was decided to wait for the Gitaly job to finish to confirm whether the failure will be present in gprd https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/jobs/7011862
- 15:39 - The Gitaly job has finished and there is no impact so far on gprd https://log.gprd.gitlab.net/goto/eb0893c0-cc84-11ec-b73f-692cc1ae8214. We suspect the failure could affect QA pipelines on staging.
- 15:39 - @mayra-cabrera requests assistance from the Gitaly team.
- 15:41 - @afappiano starts a new dev escalation requesting assistance specifically from Gitaly.
- 15:52 - The failure was identified on the QA pipelines https://ops.gitlab.net/gitlab-org/quality/staging/-/pipelines/1186849. It will block upcoming deployments until it is solved.
- 16:00 - A configuration change was applied to production, halting the current gprd deployment https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/pipelines/1186818
- 16:03 - @samihiltunen states the best option is to revert gitlab-org/gitaly!4518 (merged). Since this will take another 8h, we're looking for a different solution.
- 16:05 - The configuration change requires QA on staging to be green. This prevents the gprd deployment from continuing https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1186724
- 16:13 - No further assistance from Gitaly has been provided so far.
- 16:26 - @mayra-cabrera opened gitlab-org/gitlab!86688 (merged) to revert `GITALY_VERSION` and disabled the task that updates it, so deploys can continue.
- 16:54 - @stanhu identified gitlab-org/gitaly!4518 (merged) to be the MR causing (or at least surfacing) this incident.
- 17:21 - @jcaigitlab opened gitlab-org/gitaly!4531 (merged) to revert the culprit MR.
- 17:29 - @stanhu identified 3 projects in the `gitlab-qa-sandbox-group` group to have the issue. They have a `+deleted` git repo on some of the nodes.
- 17:52 - The MR that restores `GITALY_VERSION` is merged: gitlab-org/gitlab!86688 (merged)
- 17:57 - A coordinated pipeline with gitlab-org/gitlab!86688 (merged) is created https://ops.gitlab.net/gitlab-org/release/tools/-/pipelines/1187026. This needs to be deployed all the way to production.
- 19:25 - The deploy to gstg-cny starts https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1187131
- 21:30 - QA on gstg-cny fails again https://ops.gitlab.net/gitlab-org/quality/staging/-/pipelines/1187260 with the same error https://sentry.gitlab.net/gitlab/staginggitlabcom/issues/3285172/?environment=gstg
- 22:07 - @cmcfarland deleted the offending rows from the `repositories` table of the Praefect database.
- 22:14 - QA tests on gstg-cny are executed again.
- 22:46 - A new QA failure is found gitlab-org/gitlab!86714 (merged)
- 22:56 - A new coordinated pipeline is created to include the fix for the QA failure. This will delay the gstg-cny and gprd/gstg deployments even more. ETA to gstg-cny: 2h from now; ETA to gprd: 6h from now.
2022-05-06

- 00:17 - The deployment to gstg-cny starts https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1187512
- 02:01 - QA on gstg-cny is green again https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1187512
- 04:27 - The deployment to gprd starts https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1187821
- 06:27 - The deployment to gprd finishes https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1187821
Create related issues
Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:
- Corrective action
- Investigation followup
- Confidential / Support contact
- QA investigation
- Infradev
- Make sure metrics are scraped
Takeaways
- Why did we not see this in CI? Probably again a case where gitlab-org/gitlab#336749 (closed) could have helped.
- How did the repos get in this state?
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Not doing a `RenameRepository`, and trusting the atomicity in `RemoveRepository` instead, helps avoid getting into this state. gitlab-org/gitlab#37086 (closed)
- Using Praefect with a per-repository election (using PG) in CI gitlab-org/gitlab#336749 (closed) might have helped. I'm marking that as a corrective action. It will not fix all-the-things, but it can help identify those errors sooner.
- Find and purge any remaining database rows (a hedged query sketch follows this list) https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15688
- gitlab-org/gitlab#361733 (closed): investigation of the "review-qa-reliable job runs different tests on retry" failure
- gitlab-org/gitaly#4234: Expand the GitLab QA test suite to catch Gitaly errors
- gitlab-org/gitlab#362378: Ensure the 'Availability and Testing' requirements are complied with
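As a starting point for the "find and purge any remaining database rows" action above, here is a hedged Go sketch of a read-only query against a Praefect-style PostgreSQL database that lists repository metadata rows with no up-to-date replica. The table and column names (`repositories`, `storage_repositories`, `generation`), the connection string, and the driver choice are assumptions for illustration, not the verified Praefect schema; check the real schema before adapting anything like this, and prefer SELECTs over deletes until the rows are confirmed to be orphaned.

```go
// Hedged sketch: list repositories with no up-to-date replica in a
// Praefect-style database. Schema names below are assumptions.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver (assumed choice)
)

// Assumed schema: repositories(virtual_storage, relative_path, generation)
// and storage_repositories(virtual_storage, relative_path, storage, generation).
const staleRepositoriesQuery = `
SELECT r.virtual_storage, r.relative_path
FROM repositories AS r
WHERE NOT EXISTS (
    SELECT 1
    FROM storage_repositories AS sr
    WHERE sr.virtual_storage = r.virtual_storage
      AND sr.relative_path   = r.relative_path
      AND sr.generation     >= r.generation
)`

func main() {
	// Hypothetical connection string; point this at the Praefect database.
	db, err := sql.Open("postgres", "postgres://localhost/praefect?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(staleRepositoriesQuery)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var virtualStorage, relativePath string
		if err := rows.Scan(&virtualStorage, &relativePath); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s has no up-to-date replicas\n", virtualStorage, relativePath)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```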
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Action" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...

What were the root causes?

- ...
Incident Response Analysis
- How was the incident detected?
  - The SET on call (Will Meek) was investigating an unrelated test failure and ran a test locally against staging, which failed with HTTP 500 and the Kibana error `2:accessor call: route repository accessor: consistent storages: repository has no up to date replicas.` He double-checked that the test command was correct but noticed the same error when re-running, investigated the currently running pipelines and noticed many tests failing with HTTP 500 and the same error reported in Kibana, and immediately asked in the #staging Slack channel. During the discussion it was suggested to raise an incident, as the potential change gitlab-org/gitaly!4518 (merged) was being deployed to production.
- How could detection time be improved?
  - In terms of the SET finding it, I'm not sure it could; by luck I found this just before the latest full-run test pipeline had completed. SREs may be able to propose Kibana alerts?
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...

What went well?

- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)