2021-09-01: QA tests failing with 500 error in canary
Current Status
#5455 (closed) was deployed to canary, and enabling the feature flag linear_user_manageable_groups
resulted in an elevated rate of Postgres timeouts on that code path. These timeouts surfaced as HTTP 500s, which failed QA. As a result, the feature flag was rolled back; QA was retried and succeeded.
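For illustration, the failure and rollback pattern is roughly the following: the new query path is gated behind the feature flag, enabling the flag routes traffic onto a query that exceeds the Postgres statement timeout, and disabling the flag reverts to the previous path at runtime without a deployment. The sketch below is a minimal, self-contained Python illustration with invented names (flag_enabled, linear_query, legacy_query); it is not the actual GitLab implementation.

```python
# Hypothetical sketch, not GitLab's actual code: the new query path is gated
# behind a feature flag, so turning the flag off reverts traffic to the legacy
# path at runtime, without a deployment.

ENABLED_FLAGS = set()  # stand-in for the real feature-flag store


class StatementTimeout(Exception):
    """Stand-in for Postgres cancelling a query that exceeds statement_timeout."""


def flag_enabled(name):
    return name in ENABLED_FLAGS


def linear_query(user_id):
    # The new traversal; in this incident the equivalent query exceeded the
    # Postgres statement timeout under canary traffic.
    raise StatementTimeout("canceling statement due to statement timeout")


def legacy_query(user_id):
    return ["group-a", "group-b"]  # placeholder result


def user_manageable_groups(user_id):
    query = linear_query if flag_enabled("linear_user_manageable_groups") else legacy_query
    # No fallback here: a timeout propagates up and surfaces as an HTTP 500.
    return query(user_id)


# Enabling the flag reproduces the failure; disabling it restores service,
# mirroring the 18:26 rollback in the timeline below.
ENABLED_FLAGS.add("linear_user_manageable_groups")
try:
    user_manageable_groups(42)
except StatementTimeout:
    ENABLED_FLAGS.discard("linear_user_manageable_groups")
print(user_manageable_groups(42))  # legacy path, succeeds
```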
Timeline
View recent production deployment and configuration events / GCP events (internal only)
All times UTC.
2021-08-31
- 20:15 - gitlab-org/gitlab!68845 (merged) is deployed, gated behind a feature flag

2021-09-01
- 11:15 - Feature flag linear_user_manageable_groups is enabled: https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues/7096
- 16:20 - First deployment to canary after the flag was enabled fails QA
- 18:26 - Feature flag is disabled: https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues/7103
- 18:39 - QA passes; this incident begins mitigation/resolution procedures
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- We should make it easier to observe failures in QA: https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/997
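As a rough illustration of what "easier to observe" could mean in practice, the sketch below is a minimal post-deploy smoke check that fails loudly when canary endpoints return HTTP 500. The endpoint URL, timeout, and script structure are assumptions for illustration only; this is not GitLab's actual QA tooling.

```python
# Hypothetical sketch, not GitLab's QA suite: a post-deploy smoke check that
# fails the pipeline when canary endpoints start returning HTTP 500s.
import urllib.error
import urllib.request

# Illustrative endpoint list; a real check would cover the flows QA exercises.
CANARY_ENDPOINTS = [
    "https://canary.example.com/api/v4/groups",
]


def smoke_check(endpoints):
    failures = []
    for url in endpoints:
        try:
            urllib.request.urlopen(url, timeout=10)  # raises HTTPError on 4xx/5xx
        except urllib.error.HTTPError as err:
            if err.code >= 500:
                failures.append(f"{url} -> HTTP {err.code}")
        except urllib.error.URLError as err:
            failures.append(f"{url} -> {err.reason}")
    return failures


if __name__ == "__main__":
    problems = smoke_check(CANARY_ENDPOINTS)
    if problems:
        raise SystemExit("canary smoke check failed:\n" + "\n".join(problems))
    print("canary smoke check passed")
```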
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - All users had the potential to be impacted
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - Affected requests returned an HTTP 500 error response
- How many customers were affected?
  - 6 users may have been impacted by this
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - Approximately 62 potential events may be related to this incident
What were the root causes?
- The code path enabled by the linear_user_manageable_groups feature flag produced Postgres queries that timed out, and those timeouts surfaced as HTTP 500s during QA
Incident Response Analysis
- How was the incident detected?
  - QA failures during Auto-Deploy against the Canary stage
- How could detection time be improved?
  - Unknown - we saw signs that tests were still passing after the feature flag was enabled, so the failure did not surface immediately
- How was the root cause diagnosed?
  - Looking at the backtrace of the failures led us to the suspected code change.
- How could time to diagnosis be improved?
  - n/a
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - Feature flags continue to serve as an important mechanism for mitigating problematic code.
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - Yes
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - No
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - Yes - gitlab-org/gitlab!68845 (merged) was deployed and the linear_user_manageable_groups feature flag was enabled (see the timeline above)
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)