2021-12-13: gstg-cny pods crash-looping
Current Status
Pods in Staging Canary were unable to start because they were failing their healthchecks. This was caused by a networking issue in which the originating IP of the healthcheck is no longer in our monitoring allowlist.
This seems to be the same issue described in https://github.com/cilium/cilium/issues/17839. We expanded the monitoring allowlist to include 169.254.169.252,
which resolved the issue.
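For illustration, the shape of the fix was to append the new link-local source IP to the allowlist. A minimal sketch, assuming the allowlist is exposed as a Helm value (the exact key and the existing CIDR range below are assumptions, not a copy of the actual MR):

```shell
# Sketch only: extend the monitoring allowlist with the new healthcheck
# source IP. The value key and the 10.0.0.0/8 range are illustrative.
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set 'gitlab.webservice.monitoring.ipWhitelist={10.0.0.0/8,169.254.169.252/32}'
```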
This is a severity 2 incident because the issue might spread to production if a GKE node upgrade happens there (we had auto-upgrades enabled). Out of caution, we made a prod change to disable prod node upgrades while we investigate.
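For reference, node auto-upgrade can be disabled per node pool with gcloud, as sketched below; the cluster, pool, and region names are hypothetical, and the actual prod change was made via Terraform (see the timeline):

```shell
# Sketch: turn off node auto-upgrade for one node pool; names illustrative.
gcloud container node-pools update default-pool \
  --cluster=gprd-gitlab-gke \
  --region=us-east1 \
  --no-enable-autoupgrade
```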
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
All times UTC.
2021-12-11
- 08:00 - auto-upgrade of nodes from Kubernetes 1.20 to 1.21 in the gstg-cny regional cluster leads to failing healthchecks for several services and thus pod crashloops
2021-12-13
- 08:07 - first gstg-cny deployment after the weekend auto-deploy pause fails
- 08:53 - @hphilipps declares incident in Slack
- 10:50 - @hphilipps rolls back gstg-cny to the version running in gstg, but healthchecks are still returning 404s
- 12:25 - MR to add the new healthcheck source IP to the allowlist: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1403 (merged)
- 12:47 - helm rollback run, as Helm is in an inconsistent state in gstg-cny; this fixes the healthcheck issues (a sketch of the commands follows the timeline)
- 12:58 - incident set to mitigated
- 14:30 - pre also needed a helm rollback to fix a setting, done with gitlab-com/gl-infra/k8s-workloads/gitlab-com!1406 (merged); after that, the IP allowlist change was rolled out to gprd
- 15:30 - TF MR adding a node maintenance exclusion for gprd until the end of the year: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/3233
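The helm rollback at 12:47 would look roughly like this; the release name, namespace, and revision number below are illustrative assumptions, not the exact commands run:

```shell
# Sketch: inspect the release history, roll back to the last healthy
# revision, then verify the release state is clean again.
helm history gitlab -n gitlab      # find the last healthy revision
helm rollback gitlab 42 -n gitlab  # roll back (revision number illustrative)
helm status gitlab -n gitlab       # confirm the release is "deployed"
```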
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Disable GKE node auto-upgrade, as it could break healthchecks: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/3233 (see the sketch after this list)
- Remove confusing error messages: #6057 (moved)
- Disable Jaeger on gstg (this never made it to gprd); it was giving us confusing error messages on staging during the investigation: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1405 (merged)
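The effect of the Terraform maintenance exclusion can be sketched with its gcloud equivalent below; the real change is in the config-mgmt MR, and the cluster name, region, and dates here are illustrative assumptions:

```shell
# Sketch: add a GKE maintenance exclusion so nodes are not auto-upgraded
# until year end. Cluster, region, and timestamps are illustrative.
gcloud container clusters update gprd-gitlab-gke \
  --region=us-east1 \
  --add-maintenance-exclusion-name=hold-node-upgrades-2021 \
  --add-maintenance-exclusion-start=2021-12-13T00:00:00Z \
  --add-maintenance-exclusion-end=2021-12-31T23:59:59Z
```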
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or other pieces of information, as laid out in our handbook page. Any such confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out the relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
- ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)