2021-09-12: Increased error rate and statement timeout
Current Status
For approximately 20 minutes, Web and API services for GitLab.com were severely degraded due to connection saturation on the database replicas.
At this time we believe this was due to malicious traffic; more details are logged in https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5515, which is confidential.
Timeline
View recent production deployment and configuration events / GCP events (internal only)
All times UTC.
2021-09-12
- 20:52 - Degradation begins due to database saturation on the replicas
- 20:55 - @ahanselka declares incident in Slack
- 21:10 - Performance returns to normal
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- internal https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5515#corrective-actions
- https://gitlab.com/gitlab-org/gitlab/-/issues/340716
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, which might include the summary, timeline, or other details, as laid out in our handbook page. Any confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Action" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - All GitLab.com users
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - Customers may have experienced 500 errors or timeouts, preventing them from using GitLab.com
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - A DDoS attack that was hitting the gitlab-org/gitlab project's issue search, which overloaded the PostgreSQL replicas (see the sketch below for how this kind of replica saturation shows up at the database level).
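The saturation described above is visible directly in PostgreSQL's pg_stat_activity view: a replica whose client backends are nearly all active on long-running, similar statements is saturated. The following is a minimal sketch, not GitLab's actual tooling; the DSN, host name, and access model are assumptions for illustration only.

```python
# Minimal sketch (assumed DSN and access) of spotting replica connection
# saturation caused by a flood of expensive queries, e.g. issue searches.
import psycopg2

REPLICA_DSN = "postgresql://readonly@replica.example.internal:5432/gitlabhq_production"  # hypothetical

def connection_summary(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Count client backends by state; a replica near max_connections with
        # most backends 'active' points at saturation rather than idle churn.
        cur.execute(
            """
            SELECT state, count(*)
            FROM pg_stat_activity
            WHERE backend_type = 'client backend'
            GROUP BY state
            ORDER BY count(*) DESC
            """
        )
        for state, count in cur.fetchall():
            print(f"{state or 'unknown'}: {count}")

        # Show the longest-running active statements; during an incident like
        # this one they would be dominated by the abusive search queries.
        cur.execute(
            """
            SELECT now() - query_start AS runtime, left(query, 80)
            FROM pg_stat_activity
            WHERE state = 'active'
            ORDER BY runtime DESC
            LIMIT 5
            """
        )
        for runtime, query in cur.fetchall():
            print(runtime, query)

if __name__ == "__main__":
    connection_summary(REPLICA_DSN)
```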
Incident Response Analysis
- How was the incident detected?
  - PagerDuty alerts
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - The root cause was diagnosed via Grafana and Kibana, which showed large increases in statement timeouts and connection saturation on the database replicas (see the sketch at the end of this section).
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - The mitigation was done by Cloudflare automatically.
- How could time to mitigation be improved?
  - ...
- What went well?
  - We had a pretty quick response. The on-call engineer got the alert about the problem before it affected most of the site, while it was only affecting Canary.
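As a rough illustration of the statement-timeout spike mentioned under "How was the root cause diagnosed?", the sketch below buckets PostgreSQL statement-timeout errors per minute from a log file. This is not the Grafana/Kibana tooling used during the incident; the log path and timestamp format are assumptions, though the marker string is PostgreSQL's real error message.

```python
# Rough illustration: count statement-timeout errors per minute so a spike
# like the one seen during this incident is obvious. Log path and line
# format are assumptions.
from collections import Counter
import re

LOG_PATH = "/var/log/postgresql/postgresql.log"  # hypothetical path

# PostgreSQL emits this exact message when statement_timeout cancels a query.
TIMEOUT_MARKER = "canceling statement due to statement timeout"
# Assumes log lines start with an ISO-style timestamp, e.g. "2021-09-12 20:53:01 UTC ..."
TIMESTAMP_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2})")

def timeouts_per_minute(path: str) -> Counter:
    buckets: Counter = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            if TIMEOUT_MARKER in line:
                match = TIMESTAMP_RE.match(line)
                if match:
                    buckets[match.group(1)] += 1
    return buckets

if __name__ == "__main__":
    for minute, count in sorted(timeouts_per_minute(LOG_PATH).items()):
        print(f"{minute}  {count:5d} statement timeouts")
```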
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)