2023-09-06: Prometheus servers unresponsive in ops environment

Customer Impact

No customer impact. Metrics collection for our internal ops environment was temporarily degraded.

Current Status

Both Prometheus servers in the ops environment (not gprd) became unexpectedly unresponsive for a few minutes. This was triggered by an expensive Thanos query that drove a large spike in memory pressure, which in turn saturated both CPU usage and disk IO (specifically reads from the ops-data disk, sdb). See summary notes: #16313 (comment 1546298226).
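
For illustration only (these are not the exact queries from the investigation), this kind of saturation is typically visible in standard node_exporter metrics. A minimal sketch, assuming node_exporter is running on the affected hosts:

```promql
# Sketch only: standard node_exporter metrics; instance filtering is omitted
# and would be needed in practice.

# Memory pressure: available memory as a fraction of total
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes

# CPU saturation: fraction of time not spent idle, per instance, over 5m
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

# Disk read throughput on the ops-data disk (sdb)
rate(node_disk_read_bytes_total{device="sdb"}[5m])
```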

This saturation temporarily degraded metrics collection in the ops environment, leaving gaps of 4 minutes and 12 minutes, respectively, on the two affected Prometheus servers.
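
A gap like this is easiest to confirm from outside the affected server. A minimal sketch, assuming a separate meta-monitoring Prometheus scrapes the ops Prometheus servers (the job label is an assumption):

```promql
# Sketch only: assumes a meta-monitoring Prometheus scrapes the ops Prometheus
# servers under job="prometheus" (label value is an assumption).

# Returns the affected targets while their scrapes were failing; the 4- and
# 12-minute gaps would show up as stretches where this expression matches.
up{job="prometheus"} == 0
```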

📚 References and helpful links

Recent Events (available internally only):

  • Feature Flag Log | Chatops to toggle Feature Flags | Documentation
  • Infrastructure Configurations
  • GCP Events (e.g. host failure)

Deployment Guidance

  • Deployments Log | GitLab.com Latest Updates
  • For S1/S2 incidents, reach out to Release Managers to discuss rollbacks, hot patching, or speeding up deployments. | Rollback Runbook | Hot Patch Runbook

Use the following links to create issues related to this incident if additional work needs to be completed after it is resolved:

  • Corrective action ❙ Infradev
  • Incident Review ❙ Infra investigation followup
  • Confidential Support contact ❙ QA investigation

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, which might include the summary, the timeline, or other details, as laid out in our handbook page. Any such confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.
