2022-01-27: sd-exporter-01-inf-gprd is down

Incident DRI

@alejandro

Current Status

Summary

After memory and disk spikes, a VM stopped reporting metrics. Stackdriver metrics were unavailable during the downtime. Only an internal team (Infrastructure) was impacted.

A forced reset was attempted via the GCP console; it failed on the first try but succeeded on the second. The VM was back up and running immediately after the successful reset. However, 31 minutes later the VM went down again.

The VM was reset again. The VM has been monitored for a period of time and appears to be stable. In this case, stable means it is running but still saturated. We will recommend scaling up this VM in the corrective actions.

It looks like the problem has resurfaced for a third time. We are actively working on resizing this VM now.

  1. Customer Impact: Service::Stackdriver
  2. Customer Impact Duration: 67 minutes during the first VM outage, 11 minutes during the second, and roughly 50 minutes during the third (15:35 until the exporter restarted at 16:25)
  3. Current state: Incident::Mitigated
  4. Root cause: RootCause::Saturation
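
The impact durations quoted above follow directly from the timeline timestamps below. As a quick sanity check on the arithmetic, a minimal Python sketch (the helper name is my own, not part of any incident tooling):

```python
from datetime import datetime

def outage_minutes(down, up):
    """Minutes between two same-day UTC HH:MM timestamps."""
    fmt = "%H:%M"
    delta = datetime.strptime(up, fmt) - datetime.strptime(down, fmt)
    return int(delta.total_seconds() // 60)

# First outage: metrics stop (10:42) -> exporter confirmed back up (11:49)
first = outage_minutes("10:42", "11:49")
# Second outage: VM down again (12:20) -> VM up again (12:31)
second = outage_minutes("12:20", "12:31")
print(first, second)  # 67 11
```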

Timeline

Recent Events (available internally only):

  • Deployments
  • Feature Flag Changes
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)

All times UTC.

2022-01-27

  • 10:42 - VM stops reporting metrics after memory and disk spike

[Image: host-stats dashboard for sd-exporter-01-inf-gprd showing the memory and disk spike]

https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?orgId=1&var-env=gprd&var-node=sd-exporter-01-inf-gprd.c.gitlab-production.internal&from=1643271782604&to=1643282562271

  • 11:20 - @alejandro declares the incident in Slack
  • 11:20 - @alejandro observes that SSH is hanging
  • 11:24 - @alejandro forces a reset via the GCP Console
  • 11:31 - The reboot fails on the first try
  • 11:44 - The reboot succeeds
  • 11:49 - @alejandro confirms that the machine is back up and the exporter is running; all alerts have cleared
  • 11:49 - Incident status changed from Active to Mitigated
  • 12:20 - @alejandro observes that the VM has gone down again
  • 12:21 - Incident status changed from Mitigated to Active
  • 12:27 - @alejandro resets the VM again
  • 12:31 - The VM is up again, @alejandro gathers profile data
  • 13:36 - Incident status changed from Active to Mitigated
  • 15:35 - Instance unresponsive again. Several alerts triggered.
  • 15:36 - Incident status changed from Mitigated to Active
  • 16:17 - Instance resized to n1-standard-2
  • 16:25 - stackdriver_exporter process started.
  • 16:39 - Incident status changed from Active to Mitigated
  • 22:06 - Marcel observes that the sd-exporter has stabilized since the instance was resized; we are no longer seeing increased iowait
  • 22:07 - Incident status changed from Mitigated to Resolved
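
Profile data was gathered at 12:31, and at 22:06 iowait was the signal used to confirm recovery. As a hedged illustration of how an iowait share can be derived from two `/proc/stat` CPU samples (the function and the sample numbers below are my own, not data from this incident):

```python
def iowait_fraction(stat_line_t0, stat_line_t1):
    """Fraction of CPU time spent in iowait between two /proc/stat 'cpu' lines.

    Fields after 'cpu' are: user nice system idle iowait irq softirq steal ...
    (all in USER_HZ ticks, cumulative since boot).
    """
    a = [int(x) for x in stat_line_t0.split()[1:]]
    b = [int(x) for x in stat_line_t1.split()[1:]]
    deltas = [y - x for x, y in zip(a, b)]
    total = sum(deltas)
    return deltas[4] / total if total else 0.0  # index 4 = iowait

# Illustrative samples only (not measurements from sd-exporter-01):
t0 = "cpu 1000 0 500 8000 500 0 0 0 0 0"
t1 = "cpu 1100 0 550 8400 900 0 0 0 0 0"
print(round(iowait_fraction(t0, t1), 2))  # 0.42
```

A sustained iowait share this high on a single-vCPU instance is consistent with the RootCause::Saturation label above.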

Takeaways

  • ...

Corrective Actions

  • Resize the instance https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/3386
  • Review and cleanup CPU intensive tasks performed by chef-client runs https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/15083
  • Consider upscaling VMs with a single vCPU in production to decrease sensitivity to saturation https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/15084
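
For illustration, the resize recorded at 16:17 could be performed with the gcloud CLI roughly as follows. This is a sketch only: the zone value is an assumption, and the actual change was applied through config-mgmt (see the merge request above).

```shell
# Sketch of resizing the VM to n1-standard-2; zone is an assumption.
ZONE=us-east1-c   # assumption: the actual zone is not stated in this incident
VM=sd-exporter-01-inf-gprd

# GCP requires the instance to be stopped before its machine type can change.
gcloud compute instances stop "$VM" --zone="$ZONE"
gcloud compute instances set-machine-type "$VM" --zone="$ZONE" \
    --machine-type=n1-standard-2
gcloud compute instances start "$VM" --zone="$ZONE"
```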

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.



Incident Review

  • Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
  • If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Action" section
  • Fill out relevant sections below or link to the meeting review notes that cover these topics

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. Infrastructure (Internal customers)
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. Metrics coming from Stackdriver missing
  3. How many customers were affected? 1.
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. All metrics missing during downtime

What were the root causes?

  • ...

Incident Response Analysis

  1. How was the incident detected?
    1. ...
  2. How could detection time be improved?
    1. ...
  3. How was the root cause diagnosed?
    1. ...
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. ...
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. ...

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

What went well?

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, recording will be automatically uploaded to Incident room Google Drive folder (private)
Edited Jan 28, 2022 by Alejandro Rodríguez