2022-01-26: Flaky git service SLI

Incident DRI

@alejandro

Current Status

The git service is reporting flaky SLI metrics. No user-facing impact has been detected at the moment, but the flakiness leads to intermittently failing deployment status checks. As we continue investigating, we believe the underlying cause may be a redis-cache latency problem.
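
For context on what "flaky SLI" means here: an apdex-style SLI is the fraction of requests that complete within a satisfactory latency threshold, so a short redis-cache latency spike pushes a burst of requests over that threshold and the ratio dips below its SLO in some evaluation windows but not others. The sketch below is illustrative only; the 1 s threshold and the sample durations are made up, not the git service's real configuration, and the standard Apdex formula's half-weight for "tolerating" requests is omitted for brevity.

```python
# Illustrative only: apdex-style ratio over request durations.
# The 1 s "satisfied" threshold is an assumption, not the git
# service's actual apdex threshold.
SATISFIED_THRESHOLD_S = 1.0

def apdex(durations_s):
    """Fraction of requests that completed within the threshold."""
    if not durations_s:
        return 1.0
    satisfied = sum(1 for d in durations_s if d <= SATISFIED_THRESHOLD_S)
    return satisfied / len(durations_s)

# A short redis-cache latency spike pushes a handful of requests over
# the threshold, so the per-window apdex dips below the SLO in some
# evaluation windows and not others -- i.e. the SLI looks flaky.
steady_minute = [0.2, 0.3, 0.25, 0.4, 0.35]
spiky_minute  = [0.2, 0.3, 2.1, 1.8, 0.35]   # two slow requests during a spike
print(apdex(steady_minute))  # 1.0
print(apdex(spiky_minute))   # 0.6
```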

  1. Internal Customer Impact:

Currently, this issue has no external user-facing impact, but it does have internal user impact.

The deployment status checks are failing intermittently, which can lead to the following problems (a minimal sketch of such a status check follows below):

  • intermittent deployment failures
  • flakiness when a user invokes the Slack command /chatops run auto_deploy blockers
  • intermittent failures when toggling feature flags
  • intermittent failures when displaying the automated comment in the change issue process

  2. Internal Customer Impact Duration: started at 11:45 UTC and still ongoing
  3. Current state: Incident::Active
  4. Root cause: still investigating

More information will be added as we investigate the issue.
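
To make the link between a flaky SLI and intermittently failing checks concrete: a deployment status check can gate on the current recorded apdex versus its SLO, so the same check passes or fails depending on which side of the threshold the metric happens to sit at query time. The sketch below is hypothetical and is not the actual release tooling; the Prometheus URL, the gitlab_component_apdex:ratio_5m recording-rule name, and the 0.995 SLO value are assumptions.

```python
# Hypothetical deployment gate: query the recorded apdex for the git
# service and compare it against an SLO. The metric name, labels, URL
# and the 0.995 SLO are illustrative assumptions.
import requests

PROMETHEUS_URL = "https://prometheus.example.internal/api/v1/query"
APDEX_QUERY = 'avg(gitlab_component_apdex:ratio_5m{type="git"})'
DEPLOYMENT_APDEX_SLO = 0.995

def deployment_blocked() -> bool:
    resp = requests.get(PROMETHEUS_URL, params={"query": APDEX_QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        return False  # no data: do not block on a missing series
    apdex = float(result[0]["value"][1])
    # With a flaky SLI, this comparison flips between runs even though
    # nothing about the deployment itself has changed.
    return apdex < DEPLOYMENT_APDEX_SLO

if __name__ == "__main__":
    print("deployment blocked:", deployment_blocked())
```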

Timeline

Recent Events (available internally only):

  • Deployments
  • Feature Flag Changes
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)

All times UTC.

2022-01-26

  • 10:29 - @jacobvosmaer-gitlab discovers there are problems with main-git; the apdex is flaky
  • 10:51 - @alejandro observes that the rails_requests SLI apdex is also flaky
  • 11:45 - @alejandro declares incident in Slack.
  • 12:41 - @alejandro updates the incident to S2 because it was determined that this can block deployments
  • 13:00 - @reprazent and @engwan begin investigating endpoints from the "apdex attribution for rails_request latency" Grafana dashboard
  • 13:14 - @engwan identifies slow Rails requests to the git fleet. These seem to happen in spikes and have an unusually high redis_cache_duration_s (see the log-scan sketch at the end of the timeline)
  • 13:21 - @heinrich determines that the high redis cache duration also happens on web and api, so it appears to affect all fleets.
  • 13:33 - @reprazent observes the P99 redis cache duration is unusually high
  • 13:37 - @alejandro observes the redis-cache dashboard is also showing flakiness for latency
  • 13:42 - @alejandro observes that CPU and disk usage are also spiky
  • 18:19 - Incident is relabeled severity::3 while the service check is returning green, in order to unblock deployments

2022-01-27

  • 13:17 - @jacobvosmaer-gitlab disables chef-client on the redis sentinel nodes to test whether the chef runs are connected to the incident.
  • 13:44 - @reprazent merges Lower the deployment apdex SLO for the git service to lower the SLO for that particular component, so that we do not block deploys or feature flag roll-outs
  • 15:41 - Disabling chef-client on the redis sentinel nodes did not solve the problem
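
One way to follow up on the high redis_cache_duration_s observation from 13:14 is to scan the structured Rails request logs for requests where that field is unusually high and see whether they cluster on particular endpoints. The sketch below is a rough triage aid, not what was actually run during the incident; it assumes JSON-lines request logs, and the 0.5 s cut-off and the "route" field name are assumptions (redis_cache_duration_s itself is the field named in the timeline).

```python
# Rough triage sketch: flag requests with a high redis_cache_duration_s
# and group them by endpoint. Assumes JSON-lines request logs; the
# "route" field name and the 0.5 s threshold are assumptions.
import json
import sys
from collections import Counter

THRESHOLD_S = 0.5  # illustrative cut-off for "unusually high"

def slow_redis_cache_requests(log_path):
    hits = Counter()
    with open(log_path) as f:
        for line in f:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            if entry.get("redis_cache_duration_s", 0) > THRESHOLD_S:
                hits[entry.get("route", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for route, count in slow_redis_cache_requests(sys.argv[1]).most_common(10):
        print(f"{count:6d}  {route}")
```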

Takeaways

  • ...

Corrective Actions

  • gitlab-com/runbooks!4277 (merged)

    • The deployment apdex SLO for the git service was lowered

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or other information, as laid out in our handbook page. Any such confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.



Incident Review

  • Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
  • If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
  • Fill out relevant sections below or link to the meeting review notes that cover these topics

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. ...
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. ...
  3. How many customers were affected?
    1. ...
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. ...

What were the root causes?

  • ...

Incident Response Analysis

  1. How was the incident detected?
    1. ...
  2. How could detection time be improved?
    1. ...
  3. How was the root cause diagnosed?
    1. ...
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. ...
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. ...

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

What went well?

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)