2020-06-22: Increased Sidekiq mailers error rate
/label incident IncidentActive
## Summary
The mailers queue is growing and its error rate is high. Sentry shows many `Net::OpenTimeout` errors: https://sentry.gitlab.net/gitlab/gitlabcom/issues/1612981/?query=is%3Aunresolved.
We have apparently been in this situation for months, but we only started alerting on Sidekiq mailers errors on Friday.
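For context, `Net::OpenTimeout` is the Ruby exception raised when a TCP connection cannot be opened within the configured timeout. A minimal sketch of how a mailer job would hit it, assuming a hypothetical SMTP host and port (GitLab's real mail settings differ):

```ruby
require "net/smtp"

# Hypothetical host/port for illustration only.
smtp = Net::SMTP.new("smtp.example.com", 587)
smtp.open_timeout = 5 # seconds allowed for the TCP connection to open

begin
  smtp.start("localhost") { puts "connected" } # "localhost" is the HELO domain
rescue Net::OpenTimeout => e
  # When NAT source ports are exhausted, the outbound SYN goes nowhere,
  # the connection never opens, and this is the exception Sentry records.
  warn "SMTP connection timed out: #{e.message}"
end
```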
## Timeline
All times UTC.
### 2020-06-22
- 08:33 - mailers queue error rate alert in #alerts-general
- 09:06 - hphilipps declares incident in Slack using the `/incident declare` command.
- 10:20 - `nat_ports_per_vm` increased from 1024 -> 1536.
- 11:00 - `nat_ports_per_vm` increased again from 1536 -> 2048. The IP pool was scaled from 7 to 14 to accommodate this (see the capacity sketch below).
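For a rough sense of why the IP pool had to grow: GCP Cloud NAT gives each NAT IP 64512 usable source ports, so raising the per-VM port reservation shrinks how many VMs a single IP can serve. A back-of-the-envelope sketch using the timeline's values:

```ruby
# Assumes GCP Cloud NAT's documented 64512 usable source ports per NAT IP.
PORTS_PER_IP = 64_512

[1024, 1536, 2048].each do |ports_per_vm|
  vms_per_ip = PORTS_PER_IP / ports_per_vm
  puts format("%4d ports/VM -> ~%2d VMs per NAT IP", ports_per_vm, vms_per_ip)
end
# 1024 ports/VM -> ~63 VMs per NAT IP
# 1536 ports/VM -> ~42 VMs per NAT IP
# 2048 ports/VM -> ~31 VMs per NAT IP
```

Doubling `nat_ports_per_vm` roughly halves per-IP VM capacity, which lines up with doubling the pool from 7 to 14 IPs.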
## Incident Review

### Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
### Metrics
### Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
### Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
### Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
### 5 Whys

### Lessons Learned

### Corrective Actions
### Guidelines