2020-06-15 - DDoS caused spike in web error rates
Summary
An apparent DDoS, detected and mostly mitigated by Cloudflare, caused a rise in web error rates.
Timeline
All times UTC.
2020-06-15
- 19:45 - Alert fires: Increased Error Rate Across Fleet
- 19:49 - Alert fires: component_error_ratio_burn_rate_slo_out_of_bounds_upper (burn-rate alerting is sketched after this timeline)
- 19:59 - alex declares incident in Slack using the `/incident declare` command.
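The burn-rate alert above compares the observed error ratio to the error budget the SLO allows. The sketch below is an illustrative Python rendering of that idea; the function name, SLO target, request counts, and alert threshold are all assumptions, not the actual Prometheus rule that fired.

```python
# Illustrative sketch of SLO burn-rate alerting. All names and numbers
# here are assumptions; the real alert is a Prometheus alerting rule,
# not application code.

def burn_rate(error_requests: int, total_requests: int, slo: float = 0.999) -> float:
    """Ratio of the observed error ratio to the error budget the SLO permits.

    A burn rate of 1.0 consumes the error budget exactly as fast as the
    SLO allows; values above 1.0 exhaust the budget early.
    """
    if total_requests == 0:
        return 0.0
    error_ratio = error_requests / total_requests
    error_budget = 1.0 - slo  # e.g. 0.001 for a 99.9% SLO
    return error_ratio / error_budget


# Hypothetical numbers for a 5xx spike during the DDoS window.
rate = burn_rate(error_requests=4_200, total_requests=1_000_000)
if rate > 1.0:
    # Assumed single-window threshold; production alerts typically use
    # multi-window, multi-burn-rate rules.
    print(f"burn rate {rate:.1f}x exceeds the SLO budget -> page on-call")
```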
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue that represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines