2021-09-26: Intermittent networking issues with some shared runner jobs
Current Status
We have been seeing intermittent outbound network connectivity issues, especially when accessing package registries and repositories, due to a networking issue with our cloud provider.
https://status.cloud.google.com/incidents/QSirAFiyN5yMeeE6GNxq
Known Workarounds
There are three workarounds which have been helping to mitigate this while we investigate the root cause:
- Set `FF_NETWORK_PER_BUILD: 1` - #5590 (comment 687202472)
- Use `docker build --network host ...` - #5589 (comment 687173570)
- Start the `docker:dind` container with a custom command passing `--mtu=1460` or lower, as described at #5590 (comment 688032420)
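For illustration, here is a minimal sketch of how these workarounds might look in a job's `.gitlab-ci.yml`. The job name, image tags, and build command are placeholders, not taken from the incident; 1460 matches the default MTU of a GCP VPC, so lowering the dind MTU keeps container traffic within what the VPC can carry.

```yaml
# Hypothetical job combining the workarounds above; in practice one of them is usually enough.
build-image:
  image: docker:20.10
  services:
    # Start the dind service with a lowered MTU (workaround 3).
    - name: docker:20.10-dind
      command: ["--mtu=1460"]
  variables:
    # Runner feature flag giving each build its own Docker network (workaround 1).
    FF_NETWORK_PER_BUILD: 1
  script:
    # Build on the host network instead of the default bridge (workaround 2).
    - docker build --network host -t my-image .
```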
Timeline
Recent Events (available internally only):
All times UTC.
2021-09-26
- 20:05 - @nnelson declares incident in Slack.
2021-09-27
- 03:19 +1 - @thiagocsf runs docker build tests in an attempt to find error patterns, with @devin's help
- 04:33 +1 - @thiagocsf suggests that the cause might be a connectivity issue between Google Cloud and Fastly
- 05:11 +1 - @stanhu suggests the Runner upgrade from 5 days ago might be a possible cause
- 05:30 +1 - upgraded to severity 2 after @ggillies confirmed this is now a deployment blocker
- 06:10 +1 - public status update by @arihantar
- 06:49 +1 - @devin submitted a GCP support ticket, link.
- 07:11 +1 - to rule out the Runner upgrade, @jarv is going to downgrade it to 14.2.0: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/627
- 11:45 +1 - Enabled the `FF_NETWORK_PER_BUILD` feature flag globally: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/628.
- 12:09 +1 - Receiving reports of a new error: `Couldn't connect to Docker daemon`.
- 12:47 +1 - Verified reports and change impact.
- 13:07 +1 - Reverted the change, disabling the `FF_NETWORK_PER_BUILD` feature flag globally: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/629.
- 13:32 +1 - @rehab submitted a Fastly support ticket, link.
- 22:30 +1 - @stanhu is experimenting with an MTU of 1500 on the Google Cloud VPC.
2021-09-28
- 03:41 +2 - @stanhu confirms that the MTU changes resolved the issue in his tests (internal Slack message)
- 03:53 +2 - @thiagocsf created draft CR 5601 to apply Stan's fix.
- 15:05 +2 - GCP acknowledges a general network connectivity issue that is causing this problem: https://status.cloud.google.com/incidents/QSirAFiyN5yMeeE6GNxq.
2021-09-29
- 10:40 +3 - @tmaczukin closed CR 5601 as we're awaiting the GCP fix to be fully rolled out.
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated. Ensure that all corrective actions mentioned in the notes below are included.
- ...
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
- ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)