2021-03-15 Database latency degradation
High-Level Summary
A large number of expensive database queries were running and performing poorly on the primary database, causing high load across the database cluster. The load condition saturated the available connections to the database, leading to a slowdown across the platform; at its peak, no requests were served at all.
We performed an emergency maintenance operation on the database to mitigate the load increase. Once the database was able to serve the queued requests, the platform recovered.
The known PostgreSQL bug, combined with an expensive query, is suspected to be the root cause of this outage.
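As a concrete illustration of this diagnosis path (not the exact tooling used during the incident), the sketch below shows one way to surface the most expensive queries and to check how close a PostgreSQL primary is to connection saturation. It assumes the pg_stat_statements extension is enabled and psycopg2 is installed; the DSN is a placeholder, not a production value.

```python
# Hypothetical diagnostic sketch: list the most expensive queries and
# check connection saturation on a PostgreSQL primary.
import psycopg2

DSN = "host=primary-db.example dbname=gitlabhq_production user=monitoring"  # placeholder

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # Top 10 queries by cumulative execution time. On PostgreSQL 12
        # and earlier the columns are total_time / mean_time instead.
        cur.execute("""
            SELECT calls, total_exec_time, mean_exec_time, query
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT 10
        """)
        for calls, total_ms, mean_ms, query in cur.fetchall():
            print(f"{total_ms:12.0f} ms total | {mean_ms:8.1f} ms mean | "
                  f"{calls:8d} calls | {query[:80]}")

        # Connection saturation: backends in use vs. the configured limit.
        cur.execute("SELECT count(*) FROM pg_stat_activity")
        used = cur.fetchone()[0]
        cur.execute("SHOW max_connections")
        limit = int(cur.fetchone()[0])
        print(f"connections in use: {used}/{limit} ({used / limit:.0%})")
```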
This incident shares the same root cause as #3875 (closed), and teams are working on priority remediation efforts for these related issues.
Summary
Beginning at 10:10 UTC, CI Runners was the first service impacted by excessive load on the database, visible as degradation of the CI Runners Apdex. Latency then began to increase across all services as the database tried to keep up with the increased load.
Around 11:10 UTC, 5xx errors began to increase across all services as the database fell further behind under load and could no longer fully service all requests. At 11:53 UTC, the primary database node went down under load. At 12:16 UTC, the primary was restarted and began serving traffic again.
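For reference, Apdex, the metric whose degradation was the first signal here, is the standard ratio (satisfied + tolerating / 2) / total over request latencies. A minimal sketch of the calculation, using an illustrative 1-second threshold rather than GitLab's actual SLO settings:

```python
def apdex(latencies, threshold=1.0):
    """Standard Apdex score: (satisfied + tolerating / 2) / total.

    Satisfied: latency <= T; tolerating: T < latency <= 4T; anything
    slower counts as frustrated. The 1.0 s threshold is illustrative.
    """
    satisfied = sum(1 for s in latencies if s <= threshold)
    tolerating = sum(1 for s in latencies if threshold < s <= 4 * threshold)
    return (satisfied + tolerating / 2) / len(latencies)

# As database latency climbs, the score drops well below a healthy ~1.0:
print(apdex([0.2, 0.4, 0.9, 1.5, 6.0]))  # 0.7
```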
Timeline
View recent production deployment and configuration events (internal only)
All times UTC.
2021-03-15
- 10:10 - degradation of the CI Runners Apdex begins
- 10:27 - @mwasilewski-gitlab declares incident in Slack
- 11:10 - 5xx errors begin occurring across all services
- 11:53 - patroni-03-db-gprd.c.gitlab-production.internal (primary db) goes down under load
- 12:16 - patroni-03-db-gprd.c.gitlab-production.internal is restored to primary and begins to serve traffic
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Investigate the PostgreSQL shutdown and lack of failover: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12838
- Re-evaluate our statistics-gathering strategy: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12839
- ANALYZE the namespaces table twice daily (see the sketch after this list): https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12843
- Investigate CPU-intensive queries during a database degradation: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12835
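For the ANALYZE corrective action above, a minimal sketch of what a twice-daily job could look like, assuming psycopg2 and a cron entry such as `0 */12 * * *`; the DSN is a placeholder, and the actual implementation lives in the linked issue.

```python
# Hypothetical maintenance job: refresh planner statistics for the
# namespaces table so stale statistics cannot steer the planner toward
# expensive query plans. Intended to be run from cron twice a day.
import psycopg2

DSN = "host=primary-db.example dbname=gitlabhq_production user=maintenance"  # placeholder

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("ANALYZE namespaces")
```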
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Time to detection:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would have prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was used, the recording will be automatically uploaded to the Incident room Google Drive folder (private)