The public_dashboards_thanos_query SLI of the monitoring service (`main` stage) has an apdex violating its SLO

Start time: 20 January 2021, 3:41PM (UTC)
Severity: critical
full_query:

  (
    (gitlab_component_apdex:ratio_1h{component="public_dashboards_thanos_query",monitor="global",type="monitoring"} < (1 - 14.4 * 0.001))
    and
    (gitlab_component_apdex:ratio_5m{component="public_dashboards_thanos_query",monitor="global",type="monitoring"} < (1 - 14.4 * 0.001))
    or
    (gitlab_component_apdex:ratio_6h{component="public_dashboards_thanos_query",monitor="global",type="monitoring"} < (1 - 6 * 0.001))
    and
    (gitlab_component_apdex:ratio_30m{component="public_dashboards_thanos_query",monitor="global",type="monitoring"} < (1 - 6 * 0.001))
  )
  and on(env, environment, tier, type, stage, component)
  (
    sum by(env, environment, tier, type, stage, component) (gitlab_component_ops:rate_1h{component="public_dashboards_thanos_query",monitor="global",type="monitoring"}) >= 1
  )
Monitoring tool: Prometheus
Description: Thanos Query gathers the data needed to evaluate Prometheus queries from multiple underlying Prometheus and Thanos instances. This SLI monitors the Thanos Query HTTP interface of GitLab's public Thanos instance, which backs the public Grafana instance. 5xx responses are considered failures.

Currently the apdex value is 95.5%.
GitLab alert: https://gitlab.com/gitlab-com/gl-infra/production/-/alert_management/63/details
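The expression above is a standard multi-window, multi-burn-rate apdex alert. Assuming the 0.001 factor is the error budget of a 99.9% apdex SLO, and that 14.4 and 6 are the usual SRE Workbook burn-rate multipliers, the thresholds work out as follows:

  1 - 14.4 * 0.001 = 0.9856   (fast burn: the 1h and 5m windows must both drop below 98.56%)
  1 - 6 * 0.001    = 0.9940   (slow burn: the 6h and 30m windows must both drop below 99.40%)

The trailing `and on(...)` clause suppresses the alert unless the component served at least 1 operation per second over the past hour, filtering out low-traffic noise. At 95.5%, the current apdex is below the 98.56% fast-burn threshold, which is why the alert fired.

To inspect the SLI directly, the recorded apdex series can be graphed in Thanos or Grafana with the same selectors the alert uses (a minimal sketch; swap in the other window sizes as needed):

  gitlab_component_apdex:ratio_1h{component="public_dashboards_thanos_query",monitor="global",type="monitoring"}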


Summary

More information will be added as we investigate the issue.

Timeline

All times UTC.

`YYYY-MM-DD` - `00:00` - ...

Corrective Actions




Incident Review


Summary

  1. Service(s) affected: monitoring (public_dashboards_thanos_query)
  2. Team attribution:
  3. Time to detection:
  4. Minutes downtime or degradation:

Metrics

Customer Impact

  1. Who was impacted by this incident? (e.g. external customers, internal customers)
    1. ...
  2. What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
    1. ...
  3. How many customers were affected?
    1. ...
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. ...

What were the root causes?

"5 Whys"

Incident Response Analysis

  1. How was the incident detected?
    1. ...
  2. How could detection time be improved?
    1. ...
  3. How was the root cause diagnosed?
    1. ...
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. ...
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. ...

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

Lessons Learned

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was used, the recording will be uploaded automatically to the Incident room Google Drive folder (private)

Incident Review Stakeholders
