2022-02-07: High number of queued Sidekiq jobs

See [Confidential] 2022-02-07: High number of queued Sidekiq jobs (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6299) for confidential details

Incident DRI

@rehab

Current Status

This incident caused performance problems affecting CI runners and MR commits. A high number of queued Sidekiq jobs left users experiencing delays as background job queues grew. The increased workload was identified as expected user activity, and the system automatically scaled resources to handle it.
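
For context on how "high number of queued jobs" is quantified, queue depth and latency can be read from Sidekiq's built-in API. The snippet below is a minimal sketch (it assumes a Rails console with access to the Sidekiq Redis instance), not the exact tooling used during this incident:

```ruby
# Minimal sketch: inspect queue depth and latency via Sidekiq's API.
require "sidekiq/api"

Sidekiq::Queue.all.each do |queue|
  # `size` is the number of enqueued jobs; `latency` is the age in
  # seconds of the oldest job still waiting in this queue.
  puts format("%-30s size=%-8d latency=%.1fs", queue.name, queue.size, queue.latency)
end

# Aggregate counts, including jobs parked in the scheduled and retry sets.
stats = Sidekiq::Stats.new
puts "enqueued=#{stats.enqueued} scheduled=#{stats.scheduled_size} retries=#{stats.retry_size}"
```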

Summary for CMOC notice / Exec summary:

  1. Customer Impact: Delays to CI pipelines and MR commits caused by growing background job queues, which also added load on Patroni
  2. Service Impact: service::sidekiq
  3. Impact Duration: 10:03 - 11:00 UTC (57 minutes)
  4. Root cause: See confidential investigation

Timeline

Recent Events (available internally only):

  • Deployments
  • Feature Flag Changes
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)

All times UTC.

2022-02-07

  • 10:00 - One project created ~23K jobs
  • 10:03 - CPU usage for the urgent-cpu-bound deployment drops between 10:03 and 10:25; over the same window, the number of queued Sidekiq jobs rises
  • 10:12 - A Sidekiq deployment and post-deployment migrations run simultaneously from 10:12 to 10:14
  • 10:26 - @ahmad declares incident in Slack.
  • 10:42 - Alert fires: Large amount of Sidekiq Queued jobs
  • 10:54 - Sidekiq is processing jobs at a normal rate

Takeaways

  • See [Confidential] 2022-02-07: High number of queued Sidekiq jobs (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6299)

Corrective Actions

Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.

  • See [Confidential] 2022-02-07: High number of queued Sidekiq jobs (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6299)
  • Make Sidekiq SLIs explorable in the error budget for stage groups dashboard
  • Rate-limit webhook execution and add backoff (see the sketch below)
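
For illustration, the webhook rate-limiting corrective action could take roughly the shape below. This is a sketch only: the class name, limits, and Redis key scheme are assumptions for the example, not GitLab's actual implementation.

```ruby
# Sketch of per-project webhook rate limiting with exponential backoff.
require "redis"

class WebhookRateLimiter
  LIMIT_PER_MINUTE = 120 # illustrative per-project budget
  BASE_BACKOFF     = 10  # seconds
  MAX_DOUBLINGS    = 6   # caps backoff at BASE_BACKOFF * 2**6

  def initialize(redis: Redis.new)
    @redis = redis
  end

  # Returns nil when the hook may execute now, or a backoff delay in
  # seconds once the project has exhausted this minute's budget.
  def throttle(project_id)
    key = "webhook_rate:#{project_id}:#{Time.now.to_i / 60}"
    count = @redis.incr(key)
    @redis.expire(key, 120) # let the counter expire shortly after its minute
    return nil if count <= LIMIT_PER_MINUTE

    # Backoff doubles as the overage grows, so a single runaway project
    # spreads its retries out instead of piling jobs into the queue.
    overage = count - LIMIT_PER_MINUTE
    BASE_BACKOFF * (2**[Math.log2(overage).floor, MAX_DOUBLINGS].min)
  end
end
```

A caller that receives a delay could re-enqueue the webhook worker with Sidekiq's `perform_in(delay, ...)` instead of executing the hook immediately, keeping a single noisy project from dominating the queues.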

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, which might include the summary, timeline, or other details, as laid out in our handbook page. Any such confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.

Create a confidential issue


Incident Review

  • Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
  • If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Action" section
  • Fill out relevant sections below or link to the meeting review notes that cover these topics

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. External users using shared runners experienced performance degradation during the incident
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. CI pipelines took a long time to be created (e.g. on push)
    2. Commits were not showing up in MRs
  3. How many customers were affected?
    1. ...
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. ...

What were the root causes?

  • See [Confidential] 2022-02-07: High number of queued Sidekiq jobs (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6299)

Incident Response Analysis

  1. How was the incident detected?
    1. ...
  2. How could detection time be improved?
    1. ...
  3. How was the root cause diagnosed?
    1. ...
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. ...
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. ...

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

What went well?

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, the recording will be uploaded automatically to the Incident room Google Drive folder (private)