about.gitlab.com was down briefly on March 19th
Context
about.gitlab.com went down at 14:13 UTC; the service recovered shortly afterwards.
Timeline
Date: 2018-03-19
- 14:13 UTC - We got a page that about.gitlab.com is down
- 14:14 UTC - The Azure dashboard does not report anything wrong with the machine
- 14:15 UTC - Azure metrics for the machine appear to be degrading
- 14:16 UTC - Attempts to SSH into the machine hang
- 14:20 UTC - The service recovered on its own
Incident Analysis
- How was the incident detected?
- Is there anything that could have been done to improve the time to detection?
- How was the root cause discovered?
- Was this incident triggered by a change?
- Was there an existing issue that would have either prevented this incident or reduced the impact?
Root Cause Analysis
Follow the 5 whys in a blameless manner as the core of the post mortem.
Start with the production incident and ask why it happened; once there is an explanation, keep asking why until we reach 5 whys.
It's not a hard rule that it has to be 5 times, but it helps to keep questioning in order to dig deeper toward the actual root cause. Additionally, one why may yield more than one answer; consider following the different branches.
A root cause can never be a person; the write-up has to refer to the system and the context rather than the specific actors.
For example:
At 00:00 UTC something happened that led to downtime
- Why did X cause downtime?
...
What went well
- Identify the things that worked well
What can be improved
- Using the root cause analysis, explain what can be improved.
Corrective actions
- Issue labeled as corrective action