2022-05-09: Multiple reports of failed pipelines due to fatal error "SSL certificate problem: unable to get local issuer certificate"
Incident Roles
The DRI for this incident is the incident issue assignee; see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @dcroft
- Engineer on-call (EOC): @cmcfarland
Current Status
Private runner jobs were failing to verify the issuer of the gitlab.com certificate. GitLab SaaS shared runners were not affected by this incident.
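As a rough diagnostic illustration (not part of the incident record), the sketch below shows one way to confirm whether a machine's default trust store can verify the chain gitlab.com is currently serving; a failure at this step is what surfaces in jobs as the "unable to get local issuer certificate" error. It assumes Go is available on the host and uses only the standard library.

```go
// Minimal check: perform a TLS handshake with gitlab.com using the host's
// default root CAs and report whether the served certificate chain verifies.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "gitlab.com:443", &tls.Config{})
	if err != nil {
		// A stale or incomplete trust store typically fails here with
		// "x509: certificate signed by unknown authority".
		log.Fatalf("handshake/verification failed: %v", err)
	}
	defer conn.Close()

	fmt.Println("verification succeeded; chain served by gitlab.com:")
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("  subject=%s issuer=%s\n", cert.Subject, cert.Issuer)
	}
}
```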
Summary for CMOC notice / Exec summary:
- Customer Impact: 25+
- Service Impact: Service::Cloudflare (SSL certificates) / Service::CI Runners (private)
- Impact Duration: 15:42 - 16:59 (77 minutes)
- Root cause: #7009 (closed)
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-05-09
- 15:42 - @izzyfee declares incident in Slack.
- 16:02 - Investigating customer-reported failure to evaluate runner restart as a workaround.
- 16:14 - We checked the certificate authority used and are confident that this isn't contributing.
- 16:36 - Multiple confirmations from customers that restarting runners resolves the issue.
- 16:59 - Confirmed that restarting runners resolves this issue; marking as IncidentResolved.
Create related issues
Use the following links to create issues related to this incident if additional work needs to be completed after it is resolved:
Takeaways
- This would have been an issue with the automatic certificate change anyway; the config change only triggered it earlier.
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - GitLab.com SaaS customers with customer-supplied CI runners
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - For the majority of affected users, jobs would fail and a subsequent restart of the job would succeed.
  - For affected users with a more complex setup that reuses a runner VM or pod, jobs would fail until those VMs or pods were recreated and the GitLab Runner manager was restarted.
- How many customers were affected?
  - Precise number unknown
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - See 1.
- What were the root causes?
  - The certificate's chain of trust was updated to use a different root and intermediate CA.
  - GitLab Runner connects to its configured GitLab instance to download the certificate chain and injects it into the job environment.
  - Because the chain of trust was updated, there was a race condition between this download of the certificate chain and the job's execution, causing the job to fail certificate validation (see the sketch after this list).
  - For GitLab Runners configured to reuse VMs and/or pre-seed VMs, this certificate chain is downloaded and injected at creation time, extending the window during which certificate validation would fail. This behaviour was unknown at the time the change was executed; it was assumed that the certificate chain is fetched freshly for every job.
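To make the last two points concrete, here is a minimal sketch. It is not GitLab Runner's actual code, and `stale-gitlab-chain.pem` is a hypothetical file standing in for the chain the runner injected before the rotation. It verifies the certificate gitlab.com currently serves against that previously captured chain; once the chain of trust rotates, this verification fails in the same way the injected chain caused jobs to fail.

```go
// Sketch of the failure mode: validate the live gitlab.com certificate
// against a CA chain captured earlier (standing in for the chain injected
// into the job environment before the rotation).
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical placeholder: a chain file written before the certificate change.
	stale, err := os.ReadFile("stale-gitlab-chain.pem")
	if err != nil {
		log.Fatalf("reading captured chain: %v", err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(stale) {
		log.Fatal("no certificates found in captured chain")
	}

	// Handshake without verification, purely to obtain the currently served certificates.
	conn, err := tls.Dial("tcp", "gitlab.com:443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("TLS dial: %v", err)
	}
	defer conn.Close()
	served := conn.ConnectionState().PeerCertificates

	// Verify the served leaf against the captured (stale) chain, as a job
	// relying on the injected chain effectively would.
	opts := x509.VerifyOptions{
		Roots:         roots,
		Intermediates: x509.NewCertPool(),
		DNSName:       "gitlab.com",
	}
	for _, ic := range served[1:] {
		opts.Intermediates.AddCert(ic)
	}
	if _, err := served[0].Verify(opts); err != nil {
		// After the rotation this fails, mirroring the job-level
		// "unable to get local issuer certificate" error.
		fmt.Printf("verification against captured chain failed: %v\n", err)
		return
	}
	fmt.Println("verification against captured chain succeeded")
}
```

This also illustrates why the workaround worked: restarting the runner (or recreating the reused VM/pod) triggers a fresh download that picks up the new chain.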
Incident Response Analysis
- How was the incident detected?
  - Customer reports
- How could detection time be improved?
  - …
- How was the root cause diagnosed?
  - Every root cause but the last one mentioned above was known. The last root cause was deduced by validating theories against customer reports.
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - After the last cause became clear, we had a workaround available.
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - #7009 (closed)
  - The initial impact of the change was known; however, an unknown behaviour in GitLab Runner extended the known impact for some customers.
- What went well?
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)