2022-03-17: Multiple versions of Gitaly have been running alongside one another
Incident DRI
Current Status
Multiple versions are running because Gitaly has been partially deployed for over an hour: the Gitaly deploys will not complete, and each attempt fails on a different node with a different error.
Summary for CMOC notice / Exec summary:
- Customer Impact: None
- Service Impact: ServiceGitaly
- Impact Duration: 15:47 - 18:57 (190 minutes)
- Root cause: RootCauseIndeterminate
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-03-17
- 15:47 - First Gitaly Deploy failure: #6630 (comment 879010293)
- 16:24 - Second Gitaly Deploy failure: #6630 (comment 879008357)
- 16:41 - Alerting notifies the EOC that multiple versions of Gitaly are running simultaneously
- 16:42 - @cmcfarland declares incident in Slack.
- 17:19 - Third Gitaly Deploy failure: #6630 (comment 879006561)
- 18:57 - Fourth Gitaly Deploy success
Create related issues
Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:
- Support contact request
- Corrective action
- Investigation followup
- Confidential issue
- QA investigation
- Infradev
Takeaways
- Deploying to Gitaly failed in 4 differing ways (a hardening sketch follows this list):
  - First failure: CI never started for the job: gitlab-org/gitlab#341293
  - Second failure: the `gitlab-ctl` command appeared to be missing
  - Third failure: Ansible failed to properly gather facts, which caused Ansible itself to fail on an attribute comparison that could not be evaluated
  - Fourth failure: a suspect external dependency of the `gitlab-ctl reconfigure` command of omnibus
- During the 5th deploy attempt, with zero changes applied to anything, we succeeded
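The second and third failure modes could be made to fail fast with clearer errors via pre-flight checks along these lines. This is a minimal, hypothetical Ansible sketch, not taken from the actual deployment playbooks:

```yaml
# Hypothetical pre-flight tasks; not the actual deploy tooling.
- name: Verify gitlab-ctl is present before deploying
  ansible.builtin.command: which gitlab-ctl
  changed_when: false  # a pure check; never reports a change

- name: Fail early if required facts were not gathered
  ansible.builtin.assert:
    that:
      - ansible_facts.os_family is defined
    fail_msg: "Fact gathering was incomplete; aborting before any attribute comparisons"
```

Failing on an explicit assertion like this surfaces the missing command or fact directly, rather than as an opaque attribute-comparison error deep inside a role.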
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Make Ansible fact gathering more robust (a sketch follows this list): delivery#2289 (closed)
- Investigate HTTP 404 during `reconfigure`: gitlab-org/omnibus-gitlab#6734
- Investigate CI job loss: gitlab-org/gitlab#341293
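For the fact-gathering corrective action, one possible shape is to gather facts explicitly and retry on transient failures. This is a sketch only, assuming the play sets `gather_facts: false` and gathers by hand; the real change is tracked in delivery#2289:

```yaml
# Hypothetical retried fact gathering; the actual fix is tracked in delivery#2289.
- name: Gather facts, retrying on transient failures
  ansible.builtin.setup:
  register: fact_result
  retries: 3
  delay: 10  # seconds between attempts
  until: fact_result is succeeded
```

This trades a slightly slower hard failure for fewer spurious aborts when a node is momentarily slow to respond.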
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, which might include the summary, timeline, or other details, as laid out in our handbook page. Any such confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - Internal customers, specifically teamDelivery during AutoDeploy
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - n/a
- How many customers were affected?
  - n/a
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - n/a
What were the root causes?
- See the Takeaways section above
Incident Response Analysis
- How was the incident detected?
  - A page to the EOC
- How could detection time be improved?
  - No improvements identified; the alert paged the EOC promptly.
- How was the root cause diagnosed?
  - By investigating CI logs
- How could time to diagnosis be improved?
  - n/a
- How did we reach the point where we knew how to mitigate the impact?
  - We didn't; we retried until we ran out of errors.
- How could time to mitigation be improved?
  - The corrective actions above will lead the way in this scenario.
- What went well?
  - n/a
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - No
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
What went well?
- n/a
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)