2020-06-17: Elevated error rates for praefect
/label incident IncidentActive
Summary
We received an alert for the error burn rate of the Praefect service:

"The praefect service, proxy component, main stage has an error burn rate outside of SLO. The error burn rate for this service is outside of SLO over multiple windows. Currently the error rate is 0.7991%."

We also received an alert reporting a failed Pingdom check on gitlab-foss, which showed a Gitaly latency over 8000ms when tested manually (a comparable check is sketched after the timeline).
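For context, a burn rate expresses how quickly a service is consuming its error budget, and alerting only when multiple windows are out of bounds filters out brief spikes. Below is a minimal sketch of the idea in Python; the SLO target, window sizes, and paging threshold are illustrative assumptions, not the values GitLab's monitoring actually uses.

```python
# Minimal sketch of a multi-window error burn-rate check, similar in
# spirit to the alert that paged here. The SLO target, window sizes,
# and paging threshold are illustrative assumptions, not the values
# GitLab's monitoring actually uses.

SLO_TARGET = 0.995             # assumed availability target (99.5%)
ERROR_BUDGET = 1 - SLO_TARGET  # fraction of requests allowed to fail

def burn_rate(errors: int, total: int) -> float:
    """Rate of error-budget consumption; 1.0 means burning exactly on budget."""
    return (errors / total) / ERROR_BUDGET

# Error counts per window, at roughly the 0.7991% error rate from the alert.
windows = {
    "5m": (799, 100_000),      # short window: catches the spike quickly
    "1h": (7_991, 1_000_000),  # long window: confirms it is sustained
}

PAGE_THRESHOLD = 1.0  # assumed: page when budget burns faster than it accrues

if all(burn_rate(e, t) > PAGE_THRESHOLD for e, t in windows.values()):
    print("error burn rate out of SLO over multiple windows -> page")
```

Under these assumed numbers, the 0.7991% error rate burns the budget at roughly 1.6x the sustainable rate in both windows, which is why the alert fires across windows rather than on a single spike.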
Timeline
All times UTC.
2020-06-17
- 23:44 - @craig receives pages for component_error_ratio_burn_rate_slo_out_of_bounds_upper and "Pingdom check check:https://gitlab.com/gitlab-org/gitlab-foss/ is down"
- 23:48 - cbarrett checks ongoing work and verifies no correlation/impact from active Gitaly migrations
- 23:52 - @johncai reports performance degradation in Praefect linked to recent change in #incident-management
- 00:00 - @craig declares incident in Slack using the /incident declare command
- 00:00 - @craig, @johncai, and @stanhu join the situation room
- 00:09 - @stanhu identifies the version to downgrade to, disables chef-client, collects profiling info, and downgrades Praefect to the previous version
- 00:22 - @cwoolley-gitlab reports git fetch/push errors from #handbook-escalation
- 00:24 - @johncai reverts Praefect change gitlab-org/gitaly!2291 (merged)
- 00:26 - @stanhu reports incident mitigation in #handbook-escalation
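For reference, the manual latency test mentioned in the summary can be approximated by timing a cheap Git operation against the affected project. A minimal sketch follows; using git ls-remote as the probe is an assumption, and the 8000ms threshold is the figure observed during the incident.

```python
# Minimal sketch of the manual latency check described in the summary:
# time a cheap Git operation against the affected repository and flag
# anything slower than the ~8000ms observed during the incident.
import subprocess
import time

REPO = "https://gitlab.com/gitlab-org/gitlab-foss.git"
THRESHOLD_MS = 8000  # latency reported when the Pingdom failure was verified by hand

start = time.monotonic()
subprocess.run(
    ["git", "ls-remote", "--heads", REPO, "master"],  # assumed probe; any cheap ref listing works
    check=True,
    capture_output=True,
    timeout=60,
)
elapsed_ms = (time.monotonic() - start) * 1000

print(f"git ls-remote took {elapsed_ms:.0f}ms")
if elapsed_ms > THRESHOLD_MS:
    print("latency above the threshold seen during the incident")
```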
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?
5 Whys
Lessons Learned
Corrective Actions
Guidelines