2021-06-29 Canary deployment failing QA tests
Current Status
A deployment to canary failed the QA tests. The failing test was quarantined and the package was then deployed successfully to production.
Timeline
View recent production deployment and configuration events / GCP events (internal only)
All times UTC.
2021-06-29
- 07:56 - @amyphillips declares incident in Slack.
- 08:03 - @svistas identifies the failing test.
- 08:19 - @svistas creates an MR to quarantine the test whilst investigation takes place. Asks for a maintainer to review.
- 10:38 - @at.ramya merges the quarantine MR.
- 16:30 - A canary deployment containing this fix is completed. Tests are passing.
- 17:48 - @mayra-cabrera deploys the package to production.
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated. Ensure that all corrective actions mentioned in the notes below are included.
- ...
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, which might include the summary, timeline, or other details, as laid out in our handbook page. Any confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
Summary
A code change wasn't fully backwards compatible and so failed tests on canary. The failing test was identified and quarantined. Once the quarantine MR reached canary, the tests passed again and the package was deployed to production.
A follow-up discussion on why the test failed on canary but not on staging (gitlab-org/gitlab#334767 (comment 614891264)) uncovered that the code change wasn't fully backwards compatible. As such, this incident is marked as a near miss: we deployed the package to production without being aware of this issue.
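For readers unfamiliar with the quarantine step above: GitLab's end-to-end suite is RSpec-based and marks known-failing or flaky examples with quarantine metadata so they are skipped in blocking pipelines while the investigation issue stays open. The sketch below is only an illustrative analogue in a pytest-style suite; the test name, stand-in check, and marker usage are hypothetical and are not taken from the actual quarantine MR.

```python
import pytest

# Hypothetical example: quarantining a known-failing test by skipping it in
# blocking pipelines while the follow-up issue remains open, so unrelated
# deployments are not held back by a single failing spec.
QUARANTINE_ISSUE = "https://gitlab.com/gitlab-org/gitlab/-/issues/334767"


def backend_exposes_new_field() -> bool:
    # Stand-in for the behaviour that turned out not to be backwards
    # compatible between the canary backend and the existing expectations.
    return False


@pytest.mark.skip(reason=f"Quarantined: fails on canary only, see {QUARANTINE_ISSUE}")
def test_new_field_is_exposed():
    # Assertions are left intact; un-quarantining is just removing the marker
    # once the backwards-compatibility fix has been verified on canary.
    assert backend_exposes_new_field()
```

The key property is the same as in the MR referenced in the timeline: the test body is preserved, and only its participation in blocking pipelines changes until the fix is verified.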
- Service(s) affected:
- Team attribution: devopsplan
- Time to detection:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)