2022-02-06: The rails_primary_sql SLI of the patroni service (`main` stage) has an apdex violating SLO
Incident DRI
Current Status
Degradation on GitLab.com, resulting in slow responses and some UI failures when working with the front end. We identified a database query that was taking a long time, associated with an API endpoint which we have now rate-limited.
This is now mitigated thanks to the rate limiting of the endpoint.
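The mitigation above amounts to throttling requests to the offending endpoint. As a rough illustration of the mechanism only (not GitLab's actual rate-limiting implementation; the class and limits below are hypothetical), a minimal sliding-window throttle might look like:

```ruby
# Illustrative sketch of a per-client, per-endpoint throttle.
# Names and limits are made up for the example.
class EndpointThrottle
  def initialize(limit:, period:)
    @limit = limit    # max requests allowed per window
    @period = period  # window length in seconds
    @hits = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if the request is allowed; false means the caller
  # should respond with 429 Too Many Requests.
  def allow?(client_id, now = Time.now.to_f)
    window = @hits[client_id]
    window.reject! { |t| t <= now - @period } # drop hits outside the window
    return false if window.size >= @limit
    window << now
    true
  end
end
```

With `limit: 2, period: 60`, a third request from the same client inside a minute is rejected, which caps how often the expensive query can be triggered while leaving occasional callers unaffected.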
More information will be added as we investigate the issue. Customers who believe they are affected by this incident should subscribe to this issue or monitor our status page for further updates.
Summary for CMOC notice / Exec summary:
- Customer Impact: GitLab.com users; likely all were impacted by slowdowns while the database was under heavy load
- Service Impact: Patroni
- Impact Duration: 20:58-22:44 (106 minutes)
- Root cause: TBD
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
All times UTC.
2022-02-06
- 20:58 - Git Service Apdex notification
- 21:13 - @cmcfarland declares incident in Slack
- 23:15 - Incident mitigated
Takeaways
- ...
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Make Sentry error collector faster for frequent updates https://gitlab.com/gitlab-org/gitlab/-/issues/352196
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, laid out in our handbook page; this might include the summary, the timeline, or other pieces of information. Any such confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
- Git, web, and API users would have been impacted.
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
- Slow and sluggish responses while waiting for transactions via the Git, web, and API services.
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
- We returned roughly 18,000 5xx responses that we likely would not have returned on a similar day.
- We returned roughly 50,000 more slow requests than normal across the web, Git, and API services.
- What were the root causes?
- Error tracking requests were taking a long time to write into the PostgreSQL table. This caused other Rails transactions to take longer to process, degrading performance across Rails services.
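The corrective action of making the error collector faster for frequent updates points in the direction of batching: paying one bulk write per N events instead of one write per event. A purely illustrative sketch (the class, its API, and the in-memory "flush" stand-in are all hypothetical, not the real collector):

```ruby
# Illustrative sketch: buffer error events and flush them in batches,
# so N events cost one bulk write instead of N row-by-row writes.
class BatchedErrorCollector
  attr_reader :flushed_batches

  def initialize(batch_size:)
    @batch_size = batch_size
    @buffer = []
    @flushed_batches = [] # stands in for bulk INSERTs into PostgreSQL
  end

  def record(event)
    @buffer << event
    flush if @buffer.size >= @batch_size
  end

  def flush
    return if @buffer.empty?
    @flushed_batches << @buffer # one bulk write for the whole batch
    @buffer = []
  end
end
```

Under this scheme, write amplification on the hot table drops by roughly the batch size, at the cost of a small delay before events become visible.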
Incident Response Analysis
- How was the incident detected?
- Initially this manifested as poor Git performance.
- How could detection time be improved?
- ...
- How was the root cause diagnosed?
- ...
- How could time to diagnosis be improved?
- ...
- How did we reach the point where we knew how to mitigate the impact?
- Stan Hu understood the nature of the error tracking service and why its design was limited given the traffic it was receiving. This helped close the gap on how to mitigate the problem.
- How could time to mitigation be improved?
- Having deeper domain expertise available during incidents could have made understanding the core problem quicker.
- What went well?
- ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
- We did, but at the time it was unknown what was actually causing the PostgreSQL slowness.
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
- ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
- No
What went well?
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)