2020-10-14 Sentry intermittently slow or down
Summary
This is the same pathology discovered yesterday in incident #2821 (closed).
Will clean up this latest round of abusive queries and then apply a statement_timeout
to let Sentry recover automatically. Ideally Sentry should not allow substring searches on correlation_id at all, but this work-around should limit the impact and let Sentry self-heal after these queries stop arriving.
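As a rough illustration of the clean-up step, the sketch below uses PostgreSQL's pg_stat_activity and pg_terminate_backend to find and kill long-running queries from the sentry db user. This is a hedged reconstruction, not the exact commands used during the incident: the 120-second cutoff mirrors the statement_timeout chosen later, and treating any active query over that age as "abusive" is an assumption.

```sql
-- Minimal sketch (assumes the offending searches run as the "sentry" db user
-- and that anything active for > 120 seconds is one of the abusive queries).

-- Inspect long-running queries from the sentry user.
SELECT pid, now() - query_start AS runtime, left(query, 120) AS query_prefix
FROM pg_stat_activity
WHERE usename = 'sentry'
  AND state = 'active'
  AND now() - query_start > interval '120 seconds'
ORDER BY runtime DESC;

-- Terminate them once confirmed.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'sentry'
  AND state = 'active'
  AND now() - query_start > interval '120 seconds';
```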
PagerDuty alert: https://gitlab.pagerduty.com/incidents/P328386
Timeline
All times UTC.
2020-10-15
- 02:44 - msmiley declares incident in Slack using the /incident declare command.
- 03:17 - Configured the sentry db user to time out any query that runs for over 120 seconds. This threshold appears to be far longer than the typical runtime for most HTTP requests (and is longer than the 30-second and 60-second timeouts that appear to constrain at least some long HTTP requests). Having this statement_timeout limit will protect the db from run-away queries and help Sentry recover quickly without intervention during future bursts of requests like the ones that triggered the incidents today and yesterday (#2821 (closed)). See the sketch after this timeline.
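The per-role timeout can be applied with a single ALTER ROLE statement. The sketch below shows the likely form of the 03:17 change; the exact statement used during the incident is not recorded here, and the role name sentry is taken from the timeline entry.

```sql
-- Hedged sketch of the mitigation: apply a 120-second statement_timeout to
-- the "sentry" db user so run-away queries are cancelled automatically.
-- New sessions for this role pick up the setting at connect time.
ALTER ROLE sentry SET statement_timeout = '120s';

-- Verify the per-role setting is in place.
SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'sentry';
```

With this in place, any query from the sentry user that exceeds the limit fails with "canceling statement due to statement timeout" instead of holding database resources indefinitely.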
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines