Incident Review: 2019-12-27 Spammers causing large mailers Sidekiq queue
Incidents:
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1483 (related RCA: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8694)
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1491
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1493
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1494
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1532
Summary
We experienced several incidents in which spam attacks caused a large backlog in the Sidekiq mailers queue.
- Service(s) affected: Sidekiq mailers queue
- Team attribution:
- Minutes downtime or degradation:
We do not have a defined SLA for the mail queue; for the purposes of this review, degradation is counted whenever per-job latency exceeded 10 minutes, per https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1&from=1577415600000&to=1577448000000&fullscreen&panelId=14 (a quick way to check this threshold from the console is sketched after the list below).
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1483 - 02:25-04:45 - 140 minutes
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1491 - 17:00-23:19 - 379 minutes
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1493 - 11:00-12:07 - 67 minutes
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1494 - 04:00-06:36 - 156 minutes
- https://gitlab.com/gitlab-com/gl-infra/production/issues/1532 - 03:35-07:13 - 218 minutes
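As a reference for the threshold above, queue latency can be checked directly from a Rails console with Sidekiq's built-in queue API. This is a minimal sketch; the 600-second constant mirrors the 10-minute threshold used for this review and is an assumption, not a formal SLO:

```ruby
# Minimal sketch: report the mailers queue backlog and how long its oldest
# job has been waiting. Sidekiq::Queue#latency returns the number of seconds
# since the oldest job in the queue was enqueued.
require "sidekiq/api"

queue = Sidekiq::Queue.new("mailers")
puts format("mailers: %d jobs, oldest waiting %.0fs", queue.size, queue.latency)
warn "mailers latency above the 10-minute threshold" if queue.latency > 600
```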
For calculating the duration of an event, use the Platform Metrics Dashboard to look at Apdex and SLO violations.
Impact & Metrics
Start with the following:
- What was the impact of the incident? (e.g. service outage, sub-service brown-out, exposure of sensitive data, ...)
- Who was impacted by this incident? (e.g. external customers, internal customers, specific teams, ...)
- How did the incident impact customers? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many attempts were made to access the impacted service/feature?
- How many customers were affected?
- How many customers tried to access the impacted service/feature?
Include any additional metrics that are of relevance.
Provide any relevant graphs that could help understand the impact of the incident and its dynamics.
Detection & Response
Start with the following:
- How was the incident detected?
- Did alarming work as expected?
- How long did it take from the start of the incident to its detection?
- How long did it take from detection to remediation?
- Were there any issues with the response to the incident? (e.g. bastion host used to access the service was not available, relevant team member wasn't pageable, ...)
Root Cause Analysis
The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can never be a person; the write-up has to refer to the system and the context rather than to specific actors.
Follow the "5 whys" in a blameless manner as the core of the root-cause analysis.
For this it is necessary to start with the incident and question why it happened. Keep iterating, asking "why?" five times. While it's not a hard rule that it has to be exactly five, it helps to keep the questions digging deeper toward the actual root cause.
Keep in mind that one "why?" may yield more than one answer; consider following the different branches.
Example of the usage of "5 whys"
The vehicle will not start. (the problem)
- Why? - The battery is dead. (first why)
- Why? - The alternator is not functioning. (second why)
- Why? - The alternator belt has broken. (third why)
- Why? - The alternator belt was well beyond its useful service life and not replaced. (fourth why)
- Why? - The vehicle was not maintained according to the recommended service schedule. (fifth why, a root cause)
Spam campaigns
The root cause in these incidents was spam campaigns: spammers created new accounts and generated issues and notes in bulk, and the resulting notification mails flooded the Sidekiq mailers queue.
What went well
Start with the following:
- Identify the things that worked well or as expected.
- Any additional call-outs for what went particularly well.
What can be improved
- We should make it harder to create new accounts for spam campaigns.
- We should have stricter limits for issue/notes creation (a rate-limit sketch follows this list).
- While rate limits are a very good thing to have, we need to be careful about collateral effects on existing use cases when enabling them.
- We should have better tooling for cleaning up the sidekiq queue that makes it easier and safer to execute (e.g. preventing Redis from being overloaded); a cleanup sketch also follows this list.
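To illustrate the rate-limit idea, here is a Rack::Attack-style throttle on issue-creation requests. This is purely illustrative: the path, limit, and period are assumptions, not GitLab's actual configuration.

```ruby
# config/initializers/rack_attack.rb -- illustrative sketch only.
# Throttle issue-creation POSTs to 30 per minute per client IP; requests
# over the limit receive a 429 response from Rack::Attack's middleware.
Rack::Attack.throttle("issues/create/ip", limit: 30, period: 60) do |req|
  # Return a discriminator (here the IP) only for requests we want to count.
  req.ip if req.post? && req.path.end_with?("/issues")
end
```

Per the caveat above, the hard part is choosing limits that stop spam bursts without breaking legitimate high-volume use cases such as API integrations.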
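And for the cleanup tooling, a minimal sketch of a safer, throttled queue scrub using Sidekiq's public API. The spam-matching predicate is hypothetical; a real one would be built by inspecting `job.item` during triage.

```ruby
# Minimal sketch: walk the mailers queue and delete jobs matching a
# known-spam pattern, pausing periodically so the underlying Redis LREM
# calls don't monopolize the Redis primary that other queues depend on.
require "sidekiq/api"

SPAM_MARKER = /example-spam-string/i # hypothetical; derive from real spam jobs
deleted = 0

Sidekiq::Queue.new("mailers").each do |job|
  next unless job.item.to_s.match?(SPAM_MARKER)
  job.delete
  deleted += 1
  sleep 1 if (deleted % 100).zero? # back off so Redis can serve other traffic
end

puts "deleted #{deleted} jobs"
```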
Start with the following:
- Using the root cause analysis, explain what can be improved to prevent this from happening again.
- Is there anything that could have been done to improve the detection or time to detection?
- Is there anything that could have been done to improve the response or time to response?
- Is there an existing issue that would have either prevented this incident or reduced the impact?
- Did we have any indication or beforehand knowledge that this incident might take place?
Corrective actions
- List issues that have been created as corrective actions from this incident.
- For each issue, include the following:
  - Issue labeled as corrective action.
  - Include an estimated date of completion of the corrective action.
  - Include the named individual who owns the delivery of the corrective action.
- make it harder to create bogus accounts for abusive operations
- rate limit issue creation (https://gitlab.com/gitlab-org/gitlab/issues/55241)
- make it easier to identify (and kill) bad sidekiq jobs (scalability#9 (closed) for identifying)
- add a troubleshooting runbook for dealing with issue spam (disabling mail sending, cleaning up queue/issues, blocking spammers)
- prevent overloading the redis master (affecting other queues) when cleaning up the mailers queue
- make it easier to stop sending out mails (see the sketch at the end of this section)
- consider moving the mailers queue into a separate cluster (delivery#611 (closed) for testing this idea)
- improve protection against spam campaigns (discussion: https://gitlab.com/gitlab-org/gitlab/issues/103325)
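On "make it easier to stop sending out mails": the closest quick lever today is Rails' built-in delivery switch, and its per-process limitation is exactly why easier tooling is a corrective action. A sketch of the idea, assuming a Rails console workflow rather than a documented runbook step:

```ruby
# From a Rails console: stop ActionMailer from actually delivering mail.
# Caveat: this only affects the current process, not the Sidekiq fleet;
# disabling delivery fleet-wide requires a config change and restarts,
# which is why easier tooling is listed as a corrective action above.
ActionMailer::Base.perform_deliveries = false

# ... clean up the spam jobs / issues ...

ActionMailer::Base.perform_deliveries = true # re-enable when done
```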