2022-02-17: Increased number of 500 errors on the production canary
Incident DRI
Current Status
We've seen increased error rates on gitlab-com and determined they were limited to the canary stage. Up to 5% of traffic to gitlab-com, as well as users who opted in to using Canary on next.gitlab.com, experienced these 500 errors.
A revert of gitlab-org/gitlab!77498 (merged) is in progress as a potential fix: Revert MR - gitlab-org/gitlab!80893 (merged)
More information will be added as we investigate the issue. Customers who believe they are affected by this incident should subscribe to this issue or monitor our status page for further updates.
Summary for CMOC notice / Exec summary:
- Customer Impact: Affected customers experienced 500 errors when visiting gitlab-com.
- Service Impact: Service::Web, Service::API, Service::CI Runners
- Impact Duration: 10:15 UTC - 10:28 UTC (13 minutes)
- Root cause: A code change introduced in gitlab-org/gitlab!77498 (merged) resulted in a schema mismatch
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
All times UTC.
2022-02-17
- 10:23 - @rehab declares incident in Slack.
- 10:30 - Status page updated.
- 10:30 - @ahmadsherif drained canary.
- 10:37 - Incident marked as Incident::Mitigated.
Takeaways
- ...
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated. Ensure that all corrective actions mentioned in the notes below are included.
- MR reverted #6372 (comment 847314250)
- Corrective actions to reintroduce 77498 (gitlab-org/gitlab#353186 - closed)
- Investigate similar Staging error to identify corrective actions: Prevent Rails cache key and content change fro... (gitlab-org/gitlab#291067 - closed)
- Ensure future similar issues are caught in Staging
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary.
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section.
- Fill out relevant sections below or link to the meeting review notes that cover these topics.
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - Users that were using Canary (enabled manually, visiting an internal project, ...)
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - 500 errors on all pages.
- How many customers were affected?
  - 5511 unique IPs (4293 logged-in users) received a 500 during the incident.
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - Less than 5% of web requests (not API or Git).
What were the root causes?
- New code that accesses the new column was not hidden behind a feature flag in the model layer; the feature flag was only used in the helper and view layers.
- The issue was not caught in Staging first because the new code was already "active" in all environments (see previous point).
- Broadcast messages are cached in Redis as serialized data, which was serialized before the new column existed. When a stale payload was deserialized back into a record object, the new column `message.target_access_levels` had no default value and returned `nil` (`nil.empty?` caused the 500) instead of the default value (`[]`) from the DB. See the sketch below.
- Stale cache on production therefore caused a schema mismatch between what the code expected and what was in the cache, resulting in a 500 error. The code was changed in gitlab-org/gitlab!77498 (merged).

RCA: https://gitlab.com/gitlab-org/gitlab/-/issues/353188
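The failure mode can be illustrated with a minimal, self-contained Ruby sketch. The class, method names, and attribute handling below are simplified stand-ins, not the actual GitLab `BroadcastMessage`, caching, or feature-flag code: a payload cached before the column existed deserializes without it, so code that assumes the DB default `[]` sees `nil` instead.

```ruby
require 'json'

# Simplified stand-in for a broadcast message record rebuilt from the cache.
class CachedBroadcastMessage
  attr_reader :message, :target_access_levels

  def initialize(attrs)
    @message = attrs['message']
    # A payload serialized before the column was added has no
    # 'target_access_levels' key, so this is nil rather than the DB default [].
    @target_access_levels = attrs['target_access_levels']
  end

  # What effectively happened in production: calling .empty? on nil raises
  # NoMethodError, which surfaced as a 500.
  def visible_to_everyone?
    target_access_levels.empty?
  end

  # One defensive option: fall back to the DB default when the cached payload
  # predates the column. Gating the new code path behind a feature flag in the
  # model layer (not only helpers and views) is the other guard discussed above.
  def visible_to_everyone_with_fallback?
    (target_access_levels || []).empty?
  end
end

# Payload written to the cache before the deploy that added the column:
stale_payload = JSON.parse({ 'message' => 'Maintenance at 12:00 UTC' }.to_json)

record = CachedBroadcastMessage.new(stale_payload)
puts record.visible_to_everyone_with_fallback? # => true

begin
  record.visible_to_everyone?
rescue NoMethodError => e
  puts "500-equivalent: #{e.message}" # undefined method `empty?' for nil
end
```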
Incident Response Analysis
- How was the incident detected?
  - PagerDuty alerts and internal reports of 500s.
- How could detection time be improved?
  - N/A
- How was the root cause diagnosed?
  - Sentry linked to the offending commit, which linked to the merge request.
- How could time to diagnosis be improved?
  - N/A
- How did we reach the point where we knew how to mitigate the impact?
  - We noticed on dashboards that only the cny (canary) stage was affected, so we drained canary to mitigate.
- How could time to mitigation be improved?
  - Automatic detection of a bad canary deployment to drain canary?
- What went well?
  - Quick identification of the cause, because all the information was present in Sentry.
  - Quick mitigation by draining canary after noticing that the problem was isolated to that stage.
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - A similar incident would be #3144 (closed)
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
- What went well?
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)