2020-06-13: Error rate across the fleet
/label incident IncidentActive
Summary
2020-06-13 around 23:20 UTC, 5XXs started spiking across the API fleet, causing alerts / pages. Upon investigation, the issue was related to a planned, scheduled, and approved maintenance activity to perform a primary switchover: #2195 (closed). Therefore, this incident issue is closed and no incident review should be necessary. (If anyone thinks we should still write up an incident review for this planned activity, let me know.)
Alert: "Error rate across the fleet" (error rates have spiked up).
Timeline
All times UTC.
2020-06-13
- 23:37 - aamarsanaa declares incident in Slack using the `/incident declare` command.
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines