2020-07-01: Connectivity issues to Docker Hub causing stalled CI jobs on shared runners
Shared-runner jobs piling up
Same incident as yesterday, #2351. We initially suspected Docker Hub slowness, but their status page does not acknowledge any issue. We are also investigating potential network problems between our runners and Docker Hub.
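As a starting point for that investigation, here is a minimal sketch of the kind of reachability probe one could run from a runner host. It assumes Docker Hub's registry endpoint `registry-1.docker.io`, which answers `/v2/` with HTTP 401 when reachable; the timeout and output messages are illustrative, not part of our tooling.

```python
import urllib.error
import urllib.request

# Docker Hub's registry answers /v2/ with HTTP 401 when it is reachable,
# so a 401 here actually means "connectivity is fine".
REGISTRY_URL = "https://registry-1.docker.io/v2/"

try:
    urllib.request.urlopen(REGISTRY_URL, timeout=10)
    print("reachable (unexpected 2xx without auth)")
except urllib.error.HTTPError as exc:
    # The request reached the registry and got an HTTP response back.
    print(f"reachable, registry answered HTTP {exc.code}")
except OSError as exc:
    # DNS failure, connection refused/reset, or a timeout on the way there.
    print(f"connectivity problem: {exc}")
```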
Query to detect this in the future: (internal) https://thanos-query.ops.gitlab.net/graph?g0.range_input=6h&g0.max_source_resolution=0s&g0.expr=sum(gitlab_runner_jobs%7Bexecutor_stage%3D%22docker_pulling_image%22%7D)%20by%20(env)%0A%3E%0Asum(gitlab_runner_jobs%7Bexecutor_stage%3D%22docker_run%22%7D)%20by%20(env)%0A&g0.tab=0
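For readability, the PromQL expression encoded in that link decodes to the following. It returns a result per environment whenever more runner jobs are stuck in the `docker_pulling_image` stage than are in the `docker_run` stage, which is the signature of stalled image pulls:

```promql
sum(gitlab_runner_jobs{executor_stage="docker_pulling_image"}) by (env)
>
sum(gitlab_runner_jobs{executor_stage="docker_run"}) by (env)
```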
All times UTC.
- 09:02 - aramos declares incident in Slack
Incident Review
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
- Who was impacted by this incident? (e.g. external customers, internal customers)
- What was the customer experience during the incident? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?