2022-01-06: kas.gitlab.com connection error
Current Status
From 14:30 to 18:00 UTC (3 hours and 30 minutes) we experienced an outage of KAS (the GitLab agent server for Kubernetes) due to an infrastructure update.
The outage occurred when the GitLab Helm Chart was upgraded from 5.5.2 to 5.6.0 in gitlab-com/gl-infra/k8s-workloads/gitlab-com!1437 (merged), which was then reverted with gitlab-com/gl-infra/k8s-workloads/gitlab-com!1438 (merged).

This change to the Ingress Kubernetes object for Service::KAS caused the outage (https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/jobs/5918332):
```diff
   annotations:
-    kubernetes.io/ingress.class: "gce"
+    kubernetes.io/ingress.provider: "nginx"
     kubernetes.io/ingress.global-static-ip-name: "kas-gke-gprd"
     networking.gke.io/managed-certificates: "kas-gitlab-com"
 spec:
+  ingressClassName: gce
   rules:
     - host: kas.gitlab.com
       http:
         paths:
           - path: "/*"
             pathType: ImplementationSpecific
             backend:
               service:
                 name: gitlab-kas
                 port:
```
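For context, upstream Kubernetes deprecated selecting the ingress controller via the kubernetes.io/ingress.class annotation in favour of the spec.ingressClassName field (see "What were the root causes?" below). A minimal sketch of the two notations follows; the metadata names and backend port are illustrative placeholders, not the chart's exact rendered output:

```yaml
# Illustrative comparison only; names and the backend port are placeholders.
# Old style: ingress class selected via the (now deprecated) annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-kas
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: kas.gitlab.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: gitlab-kas
                port:
                  number: 8150   # placeholder port
---
# New style: ingress class selected via the spec field.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-kas
spec:
  ingressClassName: gce
  rules:
    - host: kas.gitlab.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: gitlab-kas
                port:
                  number: 8150   # placeholder port
```

Whether a given ingress controller honours the annotation, the field, or both varies between controller versions, which is part of what made this change risky.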
The change was first applied in Staging with gitlab-com/gl-infra/k8s-workloads/gitlab-com!1434 (merged), though due to lack of QA test coverage and lack of alerting for the Service::KAS endpoint, the outage wasn't noticed there (see "What were the root causes?" below for details).
Summary for CMOC notice / Exec summary:
- Customer Impact: Service::KAS
- Customer Impact Duration: 14:30 - 18:00 UTC (3.5 hours)
- Current state: see the Incident::<state> label
- Root cause: RootCause::Config-Change
Timeline
All times UTC.
2022-01-06
- 13:53 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1437 (merged) merged and applied
- 14:30 - KAS outage begins
- 15:48 - Customer opens support ticket noting failures
- 16:55 - Revert procedure started
- 18:00 - KAS outage ends
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
Takeaways
- We should have seen this problem in staging. We have corrective actions to help us detect this class of problem, both from an observability perspective and from a Quality perspective: QA tests should be running that would have led us to investigate the KAS agent being unreachable during testing, and observability should have told us that kas.gitlab.com was down, as well as kas.staging.gitlab.com in our lower environment (see the probe sketch below).
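As a sketch of the kind of external check that could have caught this, assuming a Prometheus blackbox-exporter setup (the module name, job name, and exporter address are illustrative, not our actual monitoring configuration):

```yaml
# blackbox-exporter module definition (illustrative; lives in the exporter's blackbox.yml)
modules:
  kas_http_2xx:
    prober: http
    timeout: 10s
    http:
      preferred_ip_protocol: ip4

# Prometheus scrape config probing both environments through the exporter
# (illustrative; lives in prometheus.yml)
scrape_configs:
  - job_name: blackbox-kas
    metrics_path: /probe
    params:
      module: [kas_http_2xx]
    static_configs:
      - targets:
          - https://kas.gitlab.com
          - https://kas.staging.gitlab.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115   # exporter address is a placeholder
```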
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Fix QA tests: gitlab-org/gitlab#349705 (closed)
- Promote QA test to `:smoke` or `:reliable`: gitlab-org/gitlab#351218 (closed)
- Improve rollbacks for Helm Chart updates
  - Create manual rollback jobs: delivery#1638
  - Improve pipeline creation: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/14910
- Fix Helm chart configuration difference between staging and Production: delivery#2179 (closed)
- Improve monitoring on Service::KAS: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/14911 (see the alerting rule sketch below)
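A hedged sketch of what the monitoring corrective action could look like as a Prometheus alerting rule, building on the blackbox probe shown in the Takeaways section (the alert name, threshold, and severity label scheme are assumptions, not our production rule set):

```yaml
groups:
  - name: kas-availability
    rules:
      - alert: KasEndpointDown
        expr: probe_success{job="blackbox-kas"} == 0
        for: 5m                      # threshold is an assumption
        labels:
          severity: s1               # label scheme is an assumption
        annotations:
          summary: "KAS endpoint {{ $labels.instance }} is failing blackbox probes"
          description: "agentk connections to {{ $labels.instance }} are likely failing; check recent Helm chart or Ingress changes."
```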
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - External customers making use of Service::KAS
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - Deployments and configuration changes on customer repos would not have occurred (see the illustrative agent configuration at the end of this section).
  - Errors in customer logs from the agentk Pods would look similar to this:

    ```
    {"level":"error","time":"2022-01-06T16:06:58.975Z","msg":"Error handling a connection","mod_name":"reverse_tunnel","error":"Connect(): rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing failed to WebSocket dial: failed to send handshake request: Get \\\"https://kas.gitlab.com\\\": EOF\""}
    ```
- How many customers were affected?
  - ~640 deployed applications (agentk connections)
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - We serve approximately 10 requests per second to KAS; over the 3.5-hour outage (12,600 seconds) this equates to approximately 126,000 failed requests. It's not clear whether all requests result in attempted changes to a customer's Kubernetes application. (Reference: RPS for KAS)
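To make the customer-experience impact above concrete: agents that pull manifests through KAS (GitOps mode) stop syncing while kas.gitlab.com is unreachable. A minimal agent configuration of this shape, stored at .gitlab/agents/<agent-name>/config.yaml in the customer's project (the project path and glob below are placeholders), would have stopped applying changes for the duration of the outage, with errors like the one shown above in the agentk logs:

```yaml
# .gitlab/agents/<agent-name>/config.yaml (project path and glob are placeholders)
gitops:
  manifest_projects:
    - id: "my-group/my-deployment-project"   # project containing Kubernetes manifests
      paths:
        - glob: "/manifests/**/*.yaml"       # manifests the agent keeps in sync
```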
What were the root causes?
- Kubernetes has deprecated setting the ingress class on Ingress objects via annotations: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
- group::distribution upgraded the NGINX helm chart to bring in updates to prevent deprecations from blocking future installs of the GitLab Helm chart: gitlab-org/charts/gitlab#2852 (closed)
- team::Delivery needed to update the helm chart in order to bring in changes related to metric configurations for on-going work on enabling gitlab-sshd
  - These changes were added to the chart after the NGINX upgrade: gitlab-org/charts/gitlab!2311 (merged)
- The deployment of the updated chart executed QA against staging, but the failures related to KAS being unavailable were not noticed
- Deployment to production revealed the problem that was missed in our staging environment
Incident Response Analysis
- How was the incident detected?
  - Detected by a customer via support ticket
  - @cleveland declared the incident and notified the EOC
- How could detection time be improved?
  - Alerting needs to be created; this would also have helped us detect this problem in staging
- How was the root cause diagnosed?
  - End users reported, through support tickets, the logs of the agentk Pod and noted connectivity errors to the KAS service
- How could time to diagnosis be improved?
  - Better observability
- How did we reach the point where we knew how to mitigate the impact?
  - Investigation into the timing of the start of the event correlated it to the rollout of the helm chart upgrade
- How could time to mitigation be improved?
  - Reverts of the k8s-workloads/gitlab-com repo are currently sluggish; a corrective action issue has been opened to address this
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - No
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - No
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - Yes: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1437 (merged)
What went well?
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)