2021-02-25: degraded latency on CI shared runners
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, as laid out in our handbook page. Redactions might include the summary, the timeline, or other details. Any confidential data will be in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.
Summary
We made progress on the recurring CI shared-runners saturation:
- Scaling the fleet of CI runner VMs has been hitting a GCP quota limit, causing the latency regressions over the last 2+ days.
- We have requested an increase to the relevant quota; that may take a few days to complete. (A sketch of how quota usage can be inspected follows this summary.)
- As a short-term mitigation, we applied a config change to the shared-runners managers, allowing them to proactively spawn a larger pool of idle runner VMs to act as a buffer and smooth out the VM creation rate (see the sketch directly below). Per discussion, we expect this to at least help, and it may avoid the VM-creation throttling entirely.
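For context, here is a minimal sketch of this kind of idle-pool change, assuming the managers use GitLab Runner's docker+machine executor. The keys are real config.toml autoscaling settings, but the values are illustrative, not the ones we deployed:

```toml
# config.toml on a shared-runners manager (values illustrative).
[[runners]]
  executor = "docker+machine"
  [runners.machine]
    # Keep a larger pool of pre-created idle VMs as a buffer,
    # so job bursts don't depend directly on the VM creation rate.
    IdleCount = 100
    # Seconds a machine may sit idle before it is removed.
    IdleTime = 1800
```

A larger IdleCount trades higher steady-state VM cost for fewer on-demand creation calls during spikes, which is what should keep us under the creation-rate throttle.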
I will write a more holistic summary tomorrow, but for now there are additional details on what we've uncovered today in #3800 (comment 517513931)
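For reference, regional Compute Engine quota usage can be checked from the CLI; the project and region below are placeholders, and the specific quota involved isn't named here:

```shell
# Show Compute Engine quota usage vs. limit for one region
# (project and region are illustrative placeholders).
gcloud compute regions describe us-east1 \
  --project example-project \
  --flatten="quotas[]" \
  --format="table(quotas.metric,quotas.usage,quotas.limit)"
```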
Timeline
All times UTC.
View recent production deployment and configuration events (internal only)
2021-02-25
- 14:45 - @craigf declares incident in Slack.
Corrective Actions
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Time to detection:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (e.g. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)
Incident Review Stakeholders