2020-08-05 Elevated error rates on gprd canary
Summary
Just after a gprd deploy completed, error rates from puma and workhorse on gprd-cny began climbing. This may correlate with null byte errors seen in Sentry, and may indicate a larger problem.
Timeline
All times UTC.
2020-08-05
- 22:58 - metrics first breached alerting threshold
- 23:01 - Pager alert fired
- 23:08 - cmiskell declares incident in Slack using the `/incident declare` command
- 23:14 - Problematic source IP identified, candidate for blocking: 173.212.221.128
- 23:15 - AnthonySandoval joins the Zoom call
- 23:28 - IP block implemented in CloudFlare (see the sketch below)
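For reference, an edge block like the one applied at 23:28 can be expressed as a Cloudflare IP Access Rule. The sketch below is illustrative only, not necessarily the exact change made during the incident; the zone ID and API token are placeholder environment variables.

```python
# Minimal sketch of blocking a single source IP with a Cloudflare IP Access
# Rule (v4 API). Illustrative only: CF_ZONE_ID and CF_API_TOKEN are
# placeholders, and the incident block may have been applied differently.
import os
import requests

CF_API = "https://api.cloudflare.com/client/v4"
ZONE_ID = os.environ["CF_ZONE_ID"]      # placeholder: target zone
API_TOKEN = os.environ["CF_API_TOKEN"]  # placeholder: scoped API token

def block_ip(ip: str, note: str) -> dict:
    """Create a zone-level 'block' access rule for one source IP."""
    resp = requests.post(
        f"{CF_API}/zones/{ZONE_ID}/firewall/access_rules/rules",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "mode": "block",
            "configuration": {"target": "ip", "value": ip},
            "notes": note,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = block_ip("173.212.221.128", "2020-08-05 gprd-cny incident")
    print(result["success"], result.get("result", {}).get("id"))
```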
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines