2022-04-19: QA failing on staging after most recent canary deploy

Incident DRI

Current Status

More information will be added as we investigate the issue. For customers believed to be affected by this incident, please subscribe to this issue or monitor our status page for further updates.

Summary for CMOC notice / Exec summary:

  1. Customer Impact: n/a
  2. Service Impact: AutoDeploy
  3. Impact Duration: 16:30 - 20:04 UTC (approximately 214 minutes)
  4. Root cause: RootCauseExternal-Dependency

Timeline

Recent Events (available internally only):

  • Deployments
  • Feature Flag Changes
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)
  • Gitlab.com Latest Updates

All times UTC.

2022-04-19

  • 16:30 - Deploy of package 14.10.202204191520-91f3196a302.6d73c32b941 begins to staging canary - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/1156426
  • 17:03 - First sign of QA failure
  • 18:30 - Notification that QA has failed - Delivery escalates to QA nearly immediately
  • 18:57 - @skarbek declares incident in Slack.
  • 19:03 - Discovery that the login page is not working for the dynamically generated QA user
  • 19:08 - Support indicates that they are seeing similar reports for Production - a separate incident was created for this
  • 19:22 - Feature flag related to Arkose Labs disabled on staging - QA testing is retried (see the sketch after this timeline)
  • 20:04 - All QA is passing; incident marked as resolved
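
For reference, the 19:22 mitigation was to disable the Arkose Labs feature flag on staging; the issue does not record exactly how the toggle was performed (on GitLab.com this is typically done via ChatOps). The sketch below shows one way to do it via the admin Features API. The flag name, target URL, and token handling are assumptions for illustration, not details taken from the incident notes.

```python
"""Minimal sketch: disable a feature flag on a GitLab instance via the
admin Features API.

Assumptions (not from the incident notes): the flag name
`arkose_labs_login_challenge`, the target URL, and an admin token in the
environment. In practice this toggle is usually performed via ChatOps.
"""
import os

import requests

GITLAB_URL = "https://staging.gitlab.com"   # assumed target instance
FLAG_NAME = "arkose_labs_login_challenge"   # hypothetical flag name
TOKEN = os.environ["GITLAB_ADMIN_TOKEN"]    # admin personal access token

resp = requests.post(
    f"{GITLAB_URL}/api/v4/features/{FLAG_NAME}",
    headers={"PRIVATE-TOKEN": TOKEN},
    data={"value": "false"},                # disable the flag entirely
    timeout=30,
)
resp.raise_for_status()
print(resp.json())                          # new gate state for the flag
```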

Create related issues

Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:

  • Corrective action
  • Investigation followup
  • Confidential / Support contact
  • QA investigation
  • Infradev

Takeaways

  • When QA fails because login attempts are being blocked, it is not currently easy to troubleshoot the failure from the QA side

Corrective Actions

Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.

  • ...

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.

Incident Review

  • Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
  • If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Action" section
  • Fill out relevant sections below or link to the meeting review notes that cover these topics

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. internal customers
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. Auto-deploy was blocked
  3. How many customers were affected?
    1. n/a
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. n/a

What were the root causes?

  • A change from Arkose Labs created a Content Security Policy (CSP) problem that prevented the login page from working for select users; this was identified in the other incident: #6865 (comment 917326430)
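
For context on how this class of problem manifests: the browser refuses to load a challenge script whose host is not allowed by the page's Content-Security-Policy, so the sign-in form renders but login cannot complete for users who are served the challenge. Below is a minimal sketch of one way to recreate the diagnosis from the outside; the Arkose hostname is illustrative only, and the actual domains involved are covered in #6865.

```python
"""Sketch: check whether the staging sign-in page's Content-Security-Policy
allows the third-party challenge script.

The hostname below is illustrative, not taken from the incident notes.
"""
import requests

SIGN_IN_URL = "https://staging.gitlab.com/users/sign_in"
CHALLENGE_HOSTS = ["client-api.arkoselabs.com"]  # illustrative Arkose hostname

resp = requests.get(SIGN_IN_URL, timeout=30)
csp = resp.headers.get("Content-Security-Policy", "")
print("HTTP", resp.status_code)
for host in CHALLENGE_HOSTS:
    if host in csp:
        print(f"{host}: allowed by the CSP")
    else:
        print(f"{host}: not in the CSP - the browser would block the challenge script")
```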

Incident Response Analysis

  1. How was the incident detected?
    1. QA Failures as designed
  2. How could detection time be improved?
    1. We should consider alerting the Delivery team sooner - currently we auto-retry failed QA jobs, but we don't get a notification until all retries have completed and the jobs are done, which means detection can lag by as much as 1.5 hours (a sketch of an earlier-notification approach follows this list)
  3. How was the root cause diagnosed?
    1. Manual testing against the login page to recreate the issue
  4. How could time to diagnosis be improved?
    1. n/a
  5. How did we reach the point where we knew how to mitigate the impact?
    1. We identified fairly quickly that Arkose was involved, and disabling the feature flag was identified as a potential mitigation
  6. How could time to mitigation be improved?
    1. n/a
  7. What went well?
    1. ...
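
On detection time (question 2 above): one way to page the Delivery team on the first QA failure, rather than after all retries, would be a small watcher that polls the deployer pipeline's jobs and posts to Slack as soon as a QA job fails. This is a hedged sketch only; the project ID, pipeline ID, `qa:` job-name prefix, and Slack webhook are placeholders, and the real wiring would live in the existing pipeline tooling on ops.gitlab.net.

```python
"""Sketch: notify on the *first* failed QA job instead of waiting for
all retries to finish.

All identifiers (project ID, pipeline ID, `qa:` job prefix, webhook URL)
are placeholders for illustration.
"""
import os
import time

import requests

GITLAB_API = "https://ops.gitlab.net/api/v4"
PROJECT_ID = os.environ["DEPLOYER_PROJECT_ID"]    # placeholder
PIPELINE_ID = os.environ["PIPELINE_ID"]           # placeholder
TOKEN = os.environ["GITLAB_TOKEN"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]


def failed_qa_jobs() -> list[str]:
    """Return the names of failed jobs that look like QA jobs."""
    resp = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={"scope[]": "failed"},
        timeout=30,
    )
    resp.raise_for_status()
    return [job["name"] for job in resp.json() if job["name"].startswith("qa:")]


while True:
    failed = failed_qa_jobs()
    if failed:
        requests.post(
            SLACK_WEBHOOK_URL,
            json={"text": f"QA job(s) failed on first attempt: {', '.join(failed)}"},
            timeout=30,
        )
        break
    time.sleep(60)  # poll once a minute while the pipeline runs
```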

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. Yes, somewhat; the root cause of the earlier event was slightly different in that it was an implementation issue rather than an issue originating at the vendor
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. No
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. No - see #6865 (comment 917326430) for additional details

What went well?

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)