2020-06-03: Sidekiq delays - catchall fleet
Summary
Sidekiq delays on the catchall fleet. The incident manifested as a page for the "Large number of overdue pull mirrors" alert, but was fundamentally caused by a large number of outbound mail jobs keeping the catchall fleet busy.
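For context, a minimal sketch (not part of the original report, and assuming console access to the affected Sidekiq instance and a queue named `mailers`) of one way the composition of a backed-up queue can be confirmed with the standard Sidekiq API:

```ruby
# Illustrative sketch only: inspect a backed-up Sidekiq queue and count
# enqueued jobs by worker class to spot the one flooding it.
# The queue name "mailers" is an assumption for this example.
require "sidekiq/api"

queue = Sidekiq::Queue.new("mailers")
puts "size: #{queue.size}, latency: #{queue.latency.round}s"

# Tally jobs by class, most common first.
by_class = queue.map(&:klass).tally.sort_by { |_, count| -count }
by_class.first(10).each { |klass, count| puts "#{count}\t#{klass}" }
```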
Timeline
All times UTC.
2020-06-03
- 23:17 - cmiskell declares incident in Slack using the `/incident declare` command.
- 23:23 - contacted the staff member causing the mailouts
- 23:24 - job shut off; mailer queue starts dropping rapidly, and overdue mirrors graph drops as well.
- 23:31 - mailer queue drained
- 23:33 - overdue mirrors below threshold
- 23:34 - alert clears
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines