2020-05-30: A spike of requests to a single Pages site
Summary
At 14:24 UTC, we started receiving a large number of requests to a single Pages site, peaking at 19.05K requests. The spike lasted two minutes, and we were only alerted after it had ended.
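Because the spike was over before the EOC was paged, detection time matters more than mitigation here. As a rough illustration only (this is not our production alerting setup, and the metric names and threshold below are hypothetical), a short sliding-window rate check would flag a burst of this shape while it is still in progress:

```python
from collections import deque
from time import monotonic

# Hypothetical spike detector: flags when the request rate over a short
# sliding window exceeds a fixed threshold. The window size and threshold
# are illustrative, not taken from our actual monitoring rules.

WINDOW_SECONDS = 10          # how far back we look
SPIKE_THRESHOLD_RPS = 5_000  # requests/second considered anomalous


class SpikeDetector:
    def __init__(self, window=WINDOW_SECONDS, threshold=SPIKE_THRESHOLD_RPS):
        self.window = window
        self.threshold = threshold
        self.timestamps = deque()  # arrival times of recent requests

    def record(self, now=None):
        """Record one request; return True if the current rate is a spike."""
        now = monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop requests that have fallen out of the window.
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        rate = len(self.timestamps) / self.window
        return rate > self.threshold
```

A check like this evaluates on every request, so a burst on the scale observed here would cross the threshold within the first window, well inside the two-minute spike, rather than alerting after the fact.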
Timeline
All times UTC.
2020-05-30
- 14:24 - A spike of Pages requests begins
- 14:26 - The spike ends
- 14:27 - EOC is paged about increased backend errors
- 14:36 - The source of the requests is identified as Cloudflare
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
Timeline
- YYYY-MM-DD XX:YY UTC: action X taken
- YYYY-MM-DD XX:YY UTC: action Y taken
5 Whys
Lessons Learned
Corrective Actions
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)