2022-05-23: QA tests failing due to delays in Sidekiq processing

Incident Roles

The DRI for this incident is the incident issue assignee; see roles and responsibilities.

Roles when the incident was declared:

  • Incident Manager (IMOC): @afappiano
  • Engineer on-call (EOC): @T4cC0re

Current Status

Deployments to gstg and gstg-cny are being affected by multiple QA failures in staging and staging-canary due to runner issues.

Full descriptions of the failing tests:

  • Verify Parent-child pipelines dependent relationship parent pipeline fails if child fails
  • Verify Parent-child pipelines independent relationship parent pipelines passes if child passes
  • Package NuGet group level endpoint using ci job token publishes a nuget package at the project endpoint and installs it from the group endpoint

More information will be added as we investigate the issue. For customers believed to be affected by this incident, please subscribe to this issue or monitor our status page for further updates.

Summary for CMOC notice / Exec summary:

  1. Customer Impact: Human-friendly one-sentence statement on who or what was impacted
  2. Service Impact: service:: labels of services impacted by this incident
  3. Impact Duration: start time UTC - end time UTC (duration in minutes)
  4. Root cause: TBD

Timeline

Recent Events (available internally only):

  • Deployments
  • Feature Flag Changes
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)
  • GitLab.com Latest Updates

All times UTC.

2022-05-23

  • 11:38 - qa-reliable tests fail on gstg (following post-deploy migrations)
  • 12:18 - gitlab-org/gitlab#363188 (closed) created as an investigation issue for the test failures
  • 12:30 - qa-reliable tests pass following a retry
  • 14:30 - qa-reliable tests again fail on the next deployment
  • 15:01 - @amyphillips declares incident in Slack.
  • 16:12 - @T4cC0re investigates whether there are problems on staging
  • 17:33 - @afappiano requests help from development teams to investigate failures

2022-05-24

  • 04:47 - @ggillies raises the incident to severity 2 as deployments remain blocked
  • 06:21 - @stanhu confirms the problem is related to Sidekiq
  • 06:27 - @ggillies identifies a feature flag that changed around the time of the Sidekiq apdex drop: refresh_authorizations_via_affected_projects_on_group_membership
  • 06:40 - refresh_authorizations_via_affected_projects_on_group_membership disabled (see the sketch after this timeline)
  • 07:20 - Apdex returns to normal and incident set to IncidentResolved
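
For reference, feature flags on GitLab.com are normally toggled either through ChatOps (`/chatops run feature set <flag> false`) or, with console access, the application's `Feature` API. Below is a minimal Ruby sketch of the console approach, under the assumption that a Rails console session is open on a node with the GitLab application loaded; the flag name is the one identified in the timeline above.

```ruby
# Rails console on a GitLab node.
# Disabling the flag makes Sidekiq workers fall back to the previous
# project-authorization refresh code path.
Feature.disable(:refresh_authorizations_via_affected_projects_on_group_membership)

# Confirm the flag's new state.
Feature.enabled?(:refresh_authorizations_via_affected_projects_on_group_membership)
# => false
```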

Create related issues

Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:

  • Corrective action
  • Investigation followup
  • Confidential / Support contact
  • QA investigation
  • Infradev

Takeaways

  • ...

Corrective Actions

Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.

  • ...

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.



Incident Review

  • Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
  • If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
  • Fill out relevant sections below or link to the meeting review notes that cover these topics

Customer Impact

  1. Who was impacted by this incident? (e.g. external customers, internal customers)
    1. ...
  2. What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
    1. ...
  3. How many customers were affected?
    1. ...
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. ...

What were the root causes?

  • ...

Incident Response Analysis

  1. How was the incident detected?
    1. ...
  2. How could detection time be improved?
    1. ...
  3. How was the root cause diagnosed?
    1. ...
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. ...
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. ...

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

What went well?

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)