2022-03-31: Elevated error rates across GitLab.com WEB service
Incident DRI
Current Status
We observed increased error rates affecting the web
service; users experienced slowness on GitLab.com and/or intermittent 500 errors. The incident was triggered by a k8s-workloads/gitlab-com pipeline run for gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged), a GitLab chart bump that included Kubernetes Service and Deployment port name changes (related to Prometheus service/pod monitors).
This caused the GCP load balancer to be reconfigured. The reconfiguration took several minutes to fully apply, during which requests were slow and some timed out.
The incident was mitigated by draining the clusters and running a controlled rollout of the change.
Customers who believe they were affected by this incident should subscribe to this issue or monitor our status page for further updates.
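For readers less familiar with why a port rename is disruptive, the sketch below shows how the renamed Service ports can be inspected with the Kubernetes Python client. The Service name, namespace, and port values are assumptions for illustration and are not taken from the incident; the point is that the GCP load balancer configuration is keyed off these named ports, so a rename forces a backend re-sync.

```python
# Hypothetical sketch: inspect the named ports on the webservice Service.
# Names and namespace below are assumptions, not the actual chart values.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl access to the affected cluster
core = client.CoreV1Api()

svc = core.read_namespaced_service(
    name="gitlab-webservice-default",  # hypothetical Service name
    namespace="gitlab",                # hypothetical namespace
)
for port in svc.spec.ports:
    # A chart bump that changes `port.name` looks harmless in a diff, but the
    # GCP load balancer backends reference these names, so renaming them
    # triggers the reconfiguration described above.
    print(f"{port.name}: {port.port} -> {port.target_port}")
```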
Summary for CMOC notice / Exec summary:
- Customer Impact: Users experienced slowness on GitLab.com as well as intermittent 50x errors.
- Service Impact: Web, API, Git, and Pages services.
- Impact Duration: 78 minutes
- 07:53 UTC - 08:02 UTC (9 minutes)
- 08:05 UTC - 08:38 UTC (33 minutes)
- 08:53 UTC - 09:01 UTC (8 minutes)
- 09:07 UTC - 09:14 UTC (7 minutes)
- 09:55 UTC - 10:16 UTC (21 minutes)
- Root cause: RootCauseConfig-Change
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-03-31
- 07:49 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged) starts deploying to the gprd-cny cluster.
- 07:53 - An uptick of 50x errors.
- 07:54 - EOC receives a page about the increasing error rate.
- 07:58 - EOC couldn't declare an incident via Woodhouse as the Slash command was unresponsive.
- 08:03 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged) starts deploying to the gprd-us-east1-b cluster.
- 08:03 - EOC declares the incident in Slack.
- 08:05 - EOC drains canary.
- 08:15 - @hphilipps cancels the cluster c and cluster d deployment jobs before they start.
- 08:25 - EOC re-enables canary.
- 08:51 - Drained cluster gprd-us-east1-d in HAProxy (see the drain sketch after this timeline).
- 08:56 - Re-enabled cluster gprd-us-east1-d in HAProxy.
- 09:04 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged) inadvertently starts deploying to the gprd-us-east1-d cluster.
- 09:31 - Merged gitlab-com/gl-infra/k8s-workloads/gitlab-com!1650 (merged), bumping min replicas to >100 (see the sketch after this timeline).
- 09:53 - Drained cluster gprd-us-east1-c in HAProxy.
- 09:57 - EOC starts deploying gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged) to the gprd-us-east1-c cluster.
- 10:07 - Re-enabled cluster gprd-us-east1-c in HAProxy.
- 10:19 - Reverted the revert of the min replica scale-up - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1652 (merged).
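As a rough illustration of the HAProxy drain steps above (08:51 and 09:53): this is a minimal sketch assuming the standard HAProxy runtime API over a local admin socket. The socket path, backend, and server names are hypothetical and not taken from our configuration.

```python
# Minimal sketch: put a cluster's backend servers into DRAIN at the HAProxy
# layer so in-flight requests finish but no new traffic is routed to them.
import socket

def haproxy_command(cmd: str, sock_path: str = "/run/haproxy/admin.sock") -> str:
    """Send one command to the HAProxy runtime API and return its reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(cmd.encode() + b"\n")
        return s.recv(65536).decode()

# Backend and server names below are placeholders, not real fleet entries.
for server in ["gke-us-east1-d-1", "gke-us-east1-d-2"]:
    print(haproxy_command(f"set server web_backend/{server} state drain"))
```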
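Similarly, a hedged sketch of the kind of change made at 09:31: raising the HPA minimum replicas so the webservice fleet keeps enough capacity while whole clusters are drained and redeployed. The real change went through the chart values in the MR rather than a live API patch, and the HPA name, namespace, and replica count here are assumptions.

```python
# Hypothetical sketch: bump minReplicas on the webservice HPA.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

autoscaling.patch_namespaced_horizontal_pod_autoscaler(
    name="gitlab-webservice-default",  # hypothetical HPA name
    namespace="gitlab",                # hypothetical namespace
    body={"spec": {"minReplicas": 110}},  # illustrative value for ">100"
)
```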
Create related issues
Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:
- Support contact request
- Corrective action
- Investigation followup
- Confidential issue
- QA investigation
- Infradev
Takeaways
- ...
Corrective Actions
- Implement graceful draining of a whole regional cluster: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15548
- Create a Chatops command that puts PagerDuty into maintenance mode: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15550
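For the second corrective action, one possible approach is a ChatOps command that wraps the PagerDuty REST API's maintenance windows endpoint. The sketch below is only an illustration of that idea, not the Woodhouse implementation; the token, service IDs, and requester email are placeholders.

```python
# Illustrative sketch: open a PagerDuty maintenance window via the REST API.
from datetime import datetime, timedelta, timezone
import requests

PAGERDUTY_TOKEN = "REDACTED"   # hypothetical API token
SERVICE_IDS = ["PXXXXXX"]      # hypothetical PagerDuty service IDs

def start_maintenance(minutes: int = 60) -> dict:
    now = datetime.now(timezone.utc)
    payload = {
        "maintenance_window": {
            "type": "maintenance_window",
            "start_time": now.isoformat(),
            "end_time": (now + timedelta(minutes=minutes)).isoformat(),
            "description": "Planned k8s rollout - silencing pages",
            "services": [
                {"id": sid, "type": "service_reference"} for sid in SERVICE_IDS
            ],
        }
    }
    resp = requests.post(
        "https://api.pagerduty.com/maintenance_windows",
        json=payload,
        headers={
            "Authorization": f"Token token={PAGERDUTY_TOKEN}",
            "Accept": "application/vnd.pagerduty+json;version=2",
            "From": "eoc@example.com",  # PagerDuty requires a requester email
        },
    )
    resp.raise_for_status()
    return resp.json()
```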
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - GitLab.com customers.
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - Customers experienced slowness and received 50x error pages.
- How many customers were affected?
  - We estimate that 173,896 unique IPs received a 50x response during the period from 07:53 to 10:10 UTC.
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - N/A
What were the root causes?
- A Kubernetes change that included Service and Deployment port name changes (related to Prometheus service/pod monitors) caused the GCP load balancer to be reconfigured; the reconfiguration took several minutes to fully apply, during which requests were slow and some timed out.
Incident Response Analysis
- How was the incident detected?
  - PagerDuty alert.
- How could detection time be improved?
  - N/A
- How was the root cause diagnosed?
  - The engineer deploying the change noticed the errors and associated them with the change.
- How could time to diagnosis be improved?
  - N/A
- How did we reach the point where we knew how to mitigate the impact?
  - Because the root cause was known, stopping the pipeline that was deploying the change was the logical next step.
- How could time to mitigation be improved?
  - N/A
- What went well?
  - Alerting was on point.
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - No.
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - No.
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - Yes, gitlab-com/gl-infra/k8s-workloads/gitlab-com!1643 (merged).
What went well?
- ...
Guidelines
Resources
- If the Situation Zoom room was used, the recording will be automatically uploaded to the Incident room Google Drive folder (private)