consul: SSL certs expired on Aug 3
Incident issue: production#1037 (closed)
Summary
A brief summary of what happened. Try to make it as executive-friendly as possible.
Impact & Metrics
Start with the following:
- What was the impact of the incident? (e.g. service outage, sub-service brown-out, exposure of sensitive data, ...)
- Who was impacted by this incident? (e.g. external customers, internal customers, specific teams, ...)
- How did the incident impact customers? (e.g. preventing them from doing X, incorrect display of Y, ...)
- How many attempts were made to access the impacted service/feature?
- How many customers were affected?
- How many customers tried to access the impacted service/feature?
Include any additional metrics that are of relevance.
Provide any relevant graphs that could help understand the impact of the incident and its dynamics.
Detection & Response
Start with the following:
- How was the incident detected?
- Did alerting work as expected?
- How long did it take from the start of the incident to its detection?
- How long did it take from detection to remediation?
- Were there any issues with the response to the incident? (e.g. the bastion host used to access the service was not available, the relevant team member wasn't pageable, ...)
Timeline (UTC)
- 2019-08-07 2pm - While working on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7066, we observe that a patroni host does not automatically re-join the consul cluster after a restart.
- 2019-08-07 2:37pm - Escalation on Slack https://gitlab.slack.com/archives/CB3LSMEJV/p1565188642498400?thread_ts=1565186678.494500&cid=CB3LSMEJV, paging EOC and declaring an incident in production#1037 (closed)
- 2019-08-07 3:14pm - Paging OnGres - no response; the PagerDuty alert escalates to @ansdval after 20 minutes.
- 2019-08-07 3:? - We give up on looking for the CA keys and instead decide to go with a new CA
- 2019-08-07 4:30pm - We're about to generate a new pair of keys
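The remediation path in the timeline amounts to creating a fresh CA and issuing new server certificates signed by it. Below is a minimal sketch of those steps using Python's `cryptography` library; the common names, lifetimes, and file names are illustrative placeholders rather than the values used during the incident, and Consul's own `consul tls` tooling (where available) would be the more idiomatic route.

```python
# Illustrative only: generate a new CA and a consul server certificate
# signed by it. Names, lifetimes, and paths are placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def make_ca():
    """Create a self-signed CA key and certificate."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "consul-agent-ca")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=5 * 365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert


def make_server_cert(ca_key, ca_cert, hostname="server.dc1.consul"):
    """Create a server key pair and a certificate signed by the new CA."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert


def write_pem(path, obj, private=False):
    """Write a key or certificate out in PEM format."""
    if private:
        data = obj.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        )
    else:
        data = obj.public_bytes(serialization.Encoding.PEM)
    with open(path, "wb") as f:
        f.write(data)


if __name__ == "__main__":
    ca_key, ca_cert = make_ca()
    server_key, server_cert = make_server_cert(ca_key, ca_cert)
    write_pem("consul-ca.pem", ca_cert)
    write_pem("consul-server.pem", server_cert)
    write_pem("consul-server-key.pem", server_key, private=True)
```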
Root Cause Analysis
The purpose of this document is to understand the reasons that caused an incident and to create mechanisms to prevent it from recurring in the future. A root cause can never be a person; the write-up must refer to the system and the context rather than to specific actors.
Follow the "5 whys" in a blameless manner as the core of the post mortem.
Start with the incident and ask why it happened, then keep iterating, asking "why?" up to 5 times. Five is not a hard rule, but it helps the questions dig deep enough to reach the actual root cause.
Keep in mind that a single "why?" may produce more than one answer; consider following each of the resulting branches.
Example of using the "5 whys"
The vehicle will not start. (the problem)
- Why? - The battery is dead.
- Why? - The alternator is not functioning.
- Why? - The alternator belt has broken.
- Why? - The alternator belt was well beyond its useful service life and not replaced.
- Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause)
What went well
Start with the following:
- Identify the things that worked well or as expected.
- Any additional call-outs for what went particularly well.
What can be improved
Start with the following:
- Using the root cause analysis, explain what can be improved to prevent this from happening again.
- Is there anything that could have been done to improve the detection or time to detection?
- Is there anything that could have been done to improve the response or time to response?
- Is there an existing issue that would have either prevented this incident or reduced the impact?
- Did we have any indication or prior knowledge that this incident might take place?
Corrective actions
- A newer version of consul would have allowed us to simply reload the server to pick up the new certificate (without restarting it). https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6774
- There is no monitoring in place to check the validity of the consul certificate.
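As an illustration of the missing check, the sketch below connects to a consul endpoint over TLS and reports the number of days until its certificate expires. The hostname, port, and warning threshold are placeholders (not taken from the incident), and in practice this would feed the existing alerting pipeline, for example via a Prometheus probe that exposes certificate expiry, rather than print to stdout.

```python
# Minimal sketch of a certificate-expiry check for a consul endpoint.
# Hostname, port, and threshold below are placeholders.
import datetime
import socket
import ssl

from cryptography import x509

CONSUL_HOST = "consul.example.internal"  # placeholder hostname
CONSUL_PORT = 8501                       # placeholder TLS port
WARN_DAYS = 30                           # placeholder alert threshold


def days_until_expiry(host: str, port: int) -> int:
    """Fetch the server certificate and return days until it expires."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Skip verification so the check still returns a value once the
    # certificate has already expired; we only want the notAfter date.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der_cert)
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    return remaining.days


if __name__ == "__main__":
    days = days_until_expiry(CONSUL_HOST, CONSUL_PORT)
    if days < WARN_DAYS:
        print(f"WARNING: consul certificate expires in {days} days")
    else:
        print(f"OK: consul certificate valid for another {days} days")
```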