2022-01-21 Intermittent failures while fetching CI and Git-related data
Current Status
A change to rate limiting logic produced false positive throttling events, resulting in a higher rate of 429 responses than intended.
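For illustration, a minimal Rack::Attack-style sketch of how an over-broad throttle discriminator produces false positives. This is not GitLab's actual code; the throttle name, limit, and path are hypothetical:

```ruby
require "rack/attack"

# Hypothetical throttle for illustration only. The intent is to limit one
# specific expensive action, but the discriminator matches every GraphQL
# request from an IP, so ordinary page loads (MR view, file edit, issue
# boards) consume the same budget and start receiving 429 responses.
Rack::Attack.throttle("graphql_requests", limit: 10, period: 60) do |req|
  # Returning a non-nil key counts this request toward the limit;
  # returning nil exempts it.
  req.ip if req.path == "/api/graphql"
end
```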
Summary for CMOC notice / Exec summary:
- Customer Impact: Users intermittently saw errors in the web interface when loading an MR, editing a file, or loading issue boards or epics. This impacted approximately 0.5% of requests to the GraphQL endpoint.
- Customer Impact Duration: 09:43 - 13:07 UTC (3 hours and 24 minutes)
- Current state: see the Incident::<state> label
- Root cause: RootCause::Software-Change; change made in gitlab-org/gitlab!78082 (merged)
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
All times UTC.
2021-12-16
- The original MR for the MR editing rate limiting feature is opened
2021-12-22
- 16:16 - QA shows signs of failing due to rate limiting: https://ops.gitlab.net/gitlab-org/quality/staging/-/pipelines/952000
- 21:46 - gitlab-org/gitlab!76965 (merged) is identified as a possible root cause (#6108 (comment 793186307)); a revert MR is opened
- 23:12 - Revert MR is merged
2021-12-23
- 02:20 - Coordinated pipeline with the revert MR starts deploying to staging: https://ops.gitlab.net/gitlab-org/release/tools/-/pipelines/952566
- 03:03 - QA on staging finishes successfully: https://ops.gitlab.net/gitlab-org/quality/staging/-/pipelines/952819
2022-01-20
- 13:10 - MR editing rate limiting feature is enabled for merge
- 21:51 - The first QA pipeline with 429 failures starts in staging
2022-01-21
- 01:59 - @ggillies notices that QA pipelines on a gitlab-com change unrelated to deployments are failing: https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/pipelines/998715
- 02:05 - Incident is opened for staging: #6206 (closed)
- 08:36 - Rack::Attack rate limiting exclusion MR for `gitlab-qa` and `gitlab-qa-bot` is merged: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1476 (merged) (a hedged sketch of this kind of exclusion follows the timeline)
- 09:00 - QA passes
- 09:11 - Deployment to gprd-cny starts: https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/999433
- 09:37 - Deployment to gprd-cny k8s finishes: https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/jobs/6074806
- 09:43 - First errors happen in production
- 11:19 - Deployment to the main stage of production starts: https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/999650
- 12:26 - @marin declares an incident in Slack: MR widgets in https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/1224 intermittently fail to load, and the repository for https://gitlab.com/gitlab-org/gitlab fails to load
- 12:30 - Deployment to the regional cluster in production starts: https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/jobs/6077261 ; errors spike and hit a significant number of users: https://log.gprd.gitlab.net/goto/6906d6b0-7ac3-11ec-a649-b7cbb8e4f62e
- 12:40 - The MR that caused the problem is identified
- 12:56:10 - Protected paths rate limit is increased from 10 to 100
- 13:07:20 - Rate limit is further increased to limit impact on users

Source: https://log.gprd.gitlab.net/goto/ac955b40-7ac8-11ec-a649-b7cbb8e4f62e
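The 08:36 mitigation above excluded the QA accounts from Rack::Attack throttling. A minimal sketch of that kind of exclusion follows; the account names come from the MR, but the user-lookup mechanism shown here (Warden in the Rack env) is an assumption, not necessarily how the actual MR implemented it:

```ruby
require "rack/attack"

# Sketch of a safelist for QA accounts. A truthy return value skips every
# throttle and blocklist check for the request.
Rack::Attack.safelist("qa_users") do |req|
  user = req.env["warden"]&.user # assumes Warden exposes the signed-in user
  user && %w[gitlab-qa gitlab-qa-bot].include?(user.username)
end
```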
Takeaways
- ...
Corrective Actions
- After the fix is in production, decrease the protected paths rate limit to its original value (see the throttle sketch after this list): #6207 (comment 817798370)
- Roll back the change that lifted rate limits for the QA user: #6207 (comment 817780183)
- The Rack::Attack log format differs from the Rails log format, which can be confusing; align or document the difference: #6207 (comment 817864827)
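For context on the first corrective action, here is a hedged sketch of what a protected-paths throttle raised from 10 to 100 requests per period looks like in Rack::Attack terms. The path list and throttle name are assumptions, and on GitLab.com this limit is an application setting rather than hard-coded:

```ruby
require "rack/attack"

# Illustrative protected-paths throttle. The 12:56 mitigation raised the
# limit from 10 to 100; the corrective action is to restore 10 once the
# fix ships. The paths below are placeholders, not the production list.
PROTECTED_PATHS = %w[/users/sign_in /users/password].freeze

Rack::Attack.throttle("protected_paths", limit: 100, period: 60) do |req|
  req.ip if req.post? && PROTECTED_PATHS.any? { |p| req.path.start_with?(p) }
end
```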
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, laid out in our handbook page; this might include the summary, the timeline, or other pieces of information. Any such confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out the relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - External customers
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - Users intermittently saw errors in the web interface when loading an MR, editing a file, or loading issue boards or epics.
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - Approximately 0.5% of requests to the GraphQL endpoint failed during the impact window.
- What were the root causes?
  - A change to rate limiting logic (gitlab-org/gitlab!78082 (merged)) produced false positive rate limiting events, returning 429s for legitimate requests.
Incident Response Analysis
- How was the incident detected?
  - One of the engineers noticed intermittent errors when browsing.
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - An engineer noticed a commit on master related to rate limiting.
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - Yes, gitlab-org/gitlab!78082 (merged).
- What went well?
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)