2022-06-21: Site outage due to CDN connectivity issues
Incident Roles
The DRI for this incident is the incident issue assignee; see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @johnhope
- Engineer on-call (EOC): @pguinoiseau
Current Status
Following a period of disruption and subsequent monitoring, we are no longer seeing connectivity issues on GitLab.com. The root cause was a service disruption at Cloudflare, which provides DNS, CDN and reverse-proxy services for GitLab.com. We therefore believe that this incident has been resolved.
Summary for CMOC notice / Exec summary:
- Customer Impact: All frontend services impacted in some countries, affecting between 55% and 60% of total traffic on GitLab.com
- Service Impact: API, Web and Git services
- Impact Duration: 06:30 - 08:11 UTC (101 minutes)
- Root Cause: CDN provider (Cloudflare) outage
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- GitLab.com Latest Updates
All times UTC.
2022-06-21
- 06:28 - Incident declared
- 06:44 - @mbadeau reports Cloudflare status page updated with service degradation
- 06:44 - @jarva silences PagerDuty for 30 mins
- 06:48 - @pguinoiseau (EOC) reports a site-wide drop in RPS but no error or latency issues; points at Cloudflare
- 06:58 - Cloudflare reports the issue has been identified and a fix is being worked on
- 07:02 - Users in APAC report GitLab.com back online, RPS to HAProxy gradually increasing
- 07:22 - Cloudflare reports the fix has been implemented
- 07:29 - @pguinoiseau reports traffic back to normal levels
- 07:48 - CustomersDot reported to be back online for incident participants
- 07:53 - @vitallium confirms CustomersDot returned due to manual action (removed from maintenance mode)
- 07:57 - @johnhope marks the incident as mitigated
- 08:05 - @nolith confirms ops and dev are no longer impacted
- 08:11 - Cloudflare reports all systems operational
- 08:21 - @johnhope marks the incident as resolved
Create related issues
Use the following links to create issues related to this incident if additional work needs to be completed after it is resolved:
Takeaways
- ...
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated. Ensure that all corrective actions mentioned in the notes below are included.
- Ensure the Fulfillment escalation process includes reporting updates to relevant, ongoing incidents. customers.gitlab.com was put into maintenance mode during the incident, so 503s continued to be served even as other services recovered. This was confusing as it wasn't communicated in the main incident issue. (#7288 (comment 998715152))
- Attempt a dry-run bypass of Cloudflare. This action was considered but not taken during the incident because of risk: Cloudflare also manages GitLab DNS, so a bypass could have exacerbated the problem. (#7288 (comment 998754068))
- Add an alert for Cloudflare-specific 500 errors. Error pages generated by Cloudflare contain `cloudflare` or `cloudflare-nginx` in the HTML of the page; see https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors. (@dnsmichi on Slack) A hedged probe sketch follows this list.
- Log the `CF-IPCountry` header for easier geographical impact assessment. (#7288 (comment 999243700)) See the middleware sketch after this list.
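As a starting point for the Cloudflare-specific alert, here is a minimal probe sketch. It assumes the `requests` library and a placeholder target URL; the body markers come from the Cloudflare article linked above. This is an illustration, not the production alerting rule:

```python
import requests

# Placeholder target; a real probe would cover the public GitLab.com endpoints.
TARGET = "https://gitlab.com/users/sign_in"

def is_cloudflare_5xx(url: str) -> bool:
    """Return True if `url` answers with a 5xx generated at the Cloudflare edge.

    Per the Cloudflare troubleshooting article linked above, edge-generated
    error pages contain "cloudflare" or "cloudflare-nginx" in the HTML body
    (the second marker also matches the plain "cloudflare" substring).
    """
    resp = requests.get(url, timeout=10)
    if resp.status_code < 500:
        return False
    return "cloudflare" in resp.text.lower()

if __name__ == "__main__":
    if is_cloudflare_5xx(TARGET):
        print("Cloudflare-branded 5xx detected: fire a CDN-specific alert")
```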
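For the `CF-IPCountry` corrective action, a sketch of what origin-side logging could look like as WSGI middleware. Cloudflare adds this header to requests it forwards to the origin; in practice the capture would more likely live in HAProxy (e.g. an `http-request capture` rule), so treat the class below as illustrative only:

```python
import logging

logger = logging.getLogger("geo_impact")

class CountryLoggingMiddleware:
    """Log Cloudflare's CF-IPCountry request header for each request.

    Aggregating these log lines per country makes it easier to estimate
    which geographical regions lost traffic during a CDN incident.
    """

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # WSGI exposes the CF-IPCountry header as HTTP_CF_IPCOUNTRY.
        country = environ.get("HTTP_CF_IPCOUNTRY", "unknown")
        logger.info("country=%s path=%s", country, environ.get("PATH_INFO", "/"))
        return self.app(environ, start_response)
```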
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - External customers
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - 500 errors across GitLab services, full degradation
- How many customers were affected?
  - All customers in affected geographical regions, including, but not limited to, the USA, Germany, India, New Zealand and Australia.
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - Frontend HAProxy RPS (https) dropped from ~18k (healthy) to ~9k (degraded), a drop of roughly 50%
  - Frontend HAProxy RPS (API) dropped from ~7.35k (healthy) to ~3k (degraded), a drop of roughly 59%
- What were the root causes?
  - Outage at Cloudflare, which provides reverse proxy, DNS and CDN services for GitLab.com. See Cloudflare Incident 2022-06-21.
Incident Response Analysis
- How was the incident detected?
  - Blackbox probes (internal) paged the EOC
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - A sharp decrease in incoming requests across all services, with no accompanying error or latency signals, implicated Cloudflare (CF). This was corroborated by the CF status page and social media. A hedged query sketch follows this list.
- How could time to diagnosis be improved?
  - A Cloudflare-specific alert that detects 500s resulting from a Cloudflare service disruption (see the probe sketch under Corrective Actions above).
- How did we reach the point where we knew how to mitigate the impact?
  - Identification of the root cause, and establishing that Cloudflare were taking action on their side.
  - Elimination of options on GitLab's side, such as bypassing CF.
- How could time to mitigation be improved?
  - A dry run of bypassing CF, to limit the risk of performing such a bypass during an incident.
  - Ensuring all related incident management processes report status to the main incident, to reduce confusion.
- What went well?
  - Detection of CF as the root cause 10 minutes after the incident was declared.
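To make the "traffic drop with healthy backends" diagnosis above repeatable, a hedged sketch against the Prometheus HTTP API. The endpoint and the HAProxy metric names are assumptions modelled on the standard HAProxy exporter and may not match GitLab's actual monitoring stack:

```python
import requests

PROMETHEUS = "https://prometheus.example.internal"  # placeholder endpoint

def instant(query: str) -> float:
    """Run an instant PromQL query and return the first value (0.0 if empty)."""
    resp = requests.get(f"{PROMETHEUS}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

# Current vs. week-ago request rate at the HAProxy frontends.
current = instant('sum(rate(haproxy_frontend_http_requests_total[5m]))')
baseline = instant('sum(rate(haproxy_frontend_http_requests_total[5m] offset 1w))')

# Backend 5xx ratio; near zero means the origin itself looks healthy.
errors = instant(
    'sum(rate(haproxy_backend_http_responses_total{code="5xx"}[5m]))'
    ' / sum(rate(haproxy_backend_http_responses_total[5m]))'
)

# A sharp traffic drop while backends stay healthy points upstream of
# HAProxy (DNS/CDN edge), which is the signature observed in this incident.
if baseline > 0 and current < 0.6 * baseline and errors < 0.01:
    print("RPS down >40% week-over-week with healthy backends: suspect CDN/edge")
```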
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - Cloudflare had a broad outage in 2019 (ref); however, that was prior to our adoption of Cloudflare.
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - No
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - No
- What went well?
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)