2020-06-05: Authorized_project job spike delayed pull mirrors
Summary
A spike of authorized_projects jobs overwhelmed the catchall Sidekiq fleet and delayed pull mirrors for roughly 30 minutes.
Timeline
All times UTC.
2020-06-05
- 06:20 - In-flight jobs on authorized_projects started climbing rapidly
- 06:24 - authorized_projects jobs started queueing faster than they were being processed
- 06:25 - pull mirrors started to fall behind
- 06:46 - authorized_projects queue drained
- 06:47 - Alert fired
- 06:48 - pull mirrors backlog plateaued and started dropping
- 06:57 - Alert cleared
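Note the gap in the timeline above: the backlog began growing at 06:24 but the alert did not fire until 06:47, after the queue had already drained. A minimal sketch of an earlier-firing signal (this is a hypothetical illustration, not GitLab's actual alerting rule): treat a queue as saturated when its enqueue rate exceeds its processing rate for several consecutive samples.

```python
# Hypothetical saturation check (assumed names; not GitLab's real alerting):
# flag a Sidekiq queue when jobs arrive faster than they are processed for
# `window` consecutive samples, rather than waiting for an absolute-size alert.

from dataclasses import dataclass

@dataclass
class QueueSample:
    enqueued_per_sec: float   # observed enqueue rate in this sample
    processed_per_sec: float  # observed processing rate in this sample

def saturated(samples: list[QueueSample], window: int = 3) -> bool:
    """True if the backlog grew in each of the last `window` samples."""
    if len(samples) < window:
        return False
    return all(s.enqueued_per_sec > s.processed_per_sec
               for s in samples[-window:])

# Usage: with a fixed processing capacity of 60 jobs/s, the backlog starts
# growing from the second sample onward, so three growing samples suffice.
history = [
    QueueSample(50, 60),
    QueueSample(80, 60),
    QueueSample(120, 60),
    QueueSample(150, 60),
    QueueSample(200, 60),
]
print(saturated(history))  # True: last three samples all show growth
```

A rate-based rule like this would have tripped around 06:24 in the timeline above, rather than 06:47; the `window` parameter trades detection speed against noise from brief bursts.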
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
Timeline
- YYYY-MM-DD XX:YY UTC: action X taken
- YYYY-MM-DD XX:YY UTC: action Y taken
5 Whys
Lessons Learned
Corrective Actions
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)
Edited by Brent Newton