Incident Review: GitLab login fails with 429 error
Incident Review
- The DRI for the incident review is the issue assignee.
- If applicable, ensure that the exec summary is completed at the top of the associated incident issue, the timeline tab is updated, and relevant graphs are included.
- If there are any corrective actions or infradev issues, ensure they are added as related issues to the original incident.
- Fill out the relevant sections below or link to the meeting review notes that cover these topics.
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers) External customers
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...) It prevented GitLab users who sign in through the OmniAuth providers GitHub, Salesforce, and Bitbucket from logging in.
- How many customers were affected? Hard to assess, as some of the errors occurred on the external providers' servers.
- What were the root causes?
  - October 14th/19th: Production secrets were accidentally copied to Vault from the staging GKE cluster instead of the production GKE cluster for gitlab-com/gl-infra/k8s-workloads/gitlab-com!2171 (merged), because an engineer used the wrong `kubectl` context during a manual local operation (see the guard sketch after this list)
  - November 30th: Config change updating the GitLab.com deployment to use the new secrets in CR 8094 (issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16357)
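A minimal guard sketch for the first root cause: before any manual copy of production secrets, abort if the active `kubectl` context is not the expected production context. This is an illustrative assumption, not part of the documented procedure; the context name below is a placeholder, not the real gitlab-com production context.

```python
#!/usr/bin/env python3
"""Abort a manual secrets operation if the active kubectl context is wrong."""
import subprocess
import sys

# Hypothetical context name; replace with the real production context.
EXPECTED_CONTEXT = "gke-production"


def current_context() -> str:
    # `kubectl config current-context` prints the name of the active context.
    result = subprocess.run(
        ["kubectl", "config", "current-context"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    ctx = current_context()
    if ctx != EXPECTED_CONTEXT:
        sys.exit(f"Refusing to continue: active context is {ctx!r}, expected {EXPECTED_CONTEXT!r}")
    print(f"Context check passed: {ctx}")
```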
Incident Response Analysis
- How was the incident detected? It was detected by the Support Engineers.
- How could detection time be improved?
  - I would ask the QA team about the tests for this (cc @sliaquat)
  - Corrective action: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16914+ (to detect the discrepancy before using the new secret; a comparison sketch follows at the end of this section)
  - Corrective action: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16915+
- How was the root cause diagnosed? We determined early on that it was a configuration issue, thanks to the presence of @ifarkas from the ~"group::authentication and authorization" group on the call. The exact cause was identified by the SRE.
- How could time to diagnosis be improved?
  - Communicating about the production change in the on-call handover issue and to the wider team
- How did we reach the point where we knew how to mitigate the impact? The person who knew the details of the change that caused the incident joined the call, explained what had happened, and created the MR with the fix.
- How could time to mitigation be improved?
  - Familiarize infrastructure team members with the new secrets rotation procedure: added a note about it in the Reliability discussions agenda
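The corrective actions above aim to detect a staging/production discrepancy before a new secret is used. A minimal sketch of such a pre-cutover check, comparing the value stored in Vault against the value currently deployed in the production cluster, could look like the following. All Vault paths, Kubernetes secret names, keys, and namespaces here are hypothetical placeholders, not the actual gitlab-com configuration.

```python
#!/usr/bin/env python3
"""Compare a secret value in Vault with the value deployed in the cluster."""
import base64
import subprocess
import sys

# Hypothetical identifiers; adjust to the real Vault path and Kubernetes objects.
VAULT_PATH = "k8s/gitlab-com/omniauth"
VAULT_FIELD = "provider_app_secret"
K8S_SECRET = "gitlab-omniauth"
K8S_KEY = "provider_app_secret"
NAMESPACE = "gitlab"


def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()


def vault_value() -> str:
    # `vault kv get -field=<field> <path>` prints only the requested field.
    return run(["vault", "kv", "get", f"-field={VAULT_FIELD}", VAULT_PATH])


def cluster_value() -> str:
    # Secret data is base64-encoded in the Kubernetes API, so decode it before comparing.
    encoded = run([
        "kubectl", "get", "secret", K8S_SECRET, "-n", NAMESPACE,
        "-o", f"jsonpath={{.data.{K8S_KEY}}}",
    ])
    return base64.b64decode(encoded).decode()


if __name__ == "__main__":
    if vault_value() != cluster_value():
        sys.exit("Mismatch: the Vault value does not match the secret currently deployed in the cluster")
    print("Vault and cluster values match")
```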
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - Sort of related, #8097 (closed) was also caused by a wrong kube context during a local operation
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - No
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
What went well?
- The secret rotation procedure with External Secrets and Vault worked well