2021-05-18: Long-running CREATE INDEX blocking vacuum
Current Status
This incident's status is IncidentMitigated.
~~It was a Near Miss because we came within 12 hours of a full database shutdown due to a long-running migration. The EOC was alerted to dead tuples accumulating in some tables and investigated, finding a CREATE INDEX statement that had been running for 5h30m.~~
It was unclear how much longer the transaction would take, and the EOC was concerned that transaction ID wraparound would occur. The database protects itself from a wraparound state by first warning and then halting. At the observed transaction rate, there would have been at most 15 minutes of warning before the database shut down.
The situation was monitored, and the decision was taken to stop the long-running migration. At that time the estimated time until database shutdown was 12 hours.
Later, we found that transaction ID wraparound was still 27 days out, and we removed the Near Miss designation.
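For context on how this headroom is measured, the following is a minimal sketch (not part of the actual incident response; the DSN is a placeholder) of querying wraparound headroom with Python and psycopg2:

```python
# Sketch: check transaction ID wraparound headroom per database.
# Assumes read access to the Postgres instance; the DSN is hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=gitlabhq_production")  # hypothetical DSN
with conn.cursor() as cur:
    # age(datfrozenxid) counts how many transaction IDs have been consumed
    # since the database's oldest unfrozen row. Postgres warns and then
    # refuses new transactions well before this reaches ~2^31 (~2.1 billion).
    cur.execute("""
        SELECT datname,
               age(datfrozenxid) AS xid_age,
               2147483647 - age(datfrozenxid) AS headroom
        FROM pg_database
        ORDER BY xid_age DESC;
    """)
    for datname, xid_age, headroom in cur.fetchall():
        print(f"{datname}: xid_age={xid_age}, headroom={headroom}")
conn.close()
```

Dividing the remaining headroom by the observed transaction rate gives the kind of time-to-shutdown estimate cited above (12 hours at the time, later revised to 27 days).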
Timeline
View recent production deployment and configuration events (internal only)
All times UTC.
2021-05-15

- 04:00 - Database testing pipeline on the MR was successful: gitlab-org/gitlab!61430 (comment 576084730)

2021-05-18

- 13:30 - Transaction begins.
- 18:33 - @msmiley declares incident in Slack.
- 18:35 - @garyh identifies revert procedure.
- 19:05 - @kerrizor creates the revert MR.
- 19:36 - @rspeicher notes that we can't just delete the new index on staging; we need to roll back the migration.
- 19:38 - @kerrizor notes that we can't run the down migration on prod because it will have the same performance issue.
- 20:30 - @dawsmith says we are going to let the index creation finish on prod, let the revert go through, and mark this as mitigated.
- 21:42 - @msmiley leaves a summary:
  - Summary of findings and next steps: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4633#note_578818080
  - Explanation of why the transaction ID wraparound problem is a big deal now (in contrast to a year ago, when it wouldn't have been): https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4633#note_578827870

2021-05-19

- 00:36 - @msmiley halts the transaction.
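The exact commands used to halt the transaction are not recorded in this issue; the sketch below shows one standard way to locate and cancel a long-running statement (the DSN is a placeholder, and the cancel step is left commented out):

```python
# Sketch: locate a long-running statement and cancel it.
# pg_cancel_backend(pid) sends SIGINT, cancelling the current query;
# pg_terminate_backend(pid) would kill the whole backend instead.
import psycopg2

conn = psycopg2.connect("dbname=gitlabhq_production")  # hypothetical DSN
conn.autocommit = True
with conn.cursor() as cur:
    # List the oldest active transactions, e.g. a CREATE INDEX running 5h30m.
    cur.execute("""
        SELECT pid, now() - xact_start AS xact_age, left(query, 80) AS query
        FROM pg_stat_activity
        WHERE state <> 'idle' AND xact_start IS NOT NULL
        ORDER BY xact_start
        LIMIT 5;
    """)
    for pid, xact_age, query in cur.fetchall():
        print(pid, xact_age, query)

    # After confirming the pid belongs to the offending statement:
    # cur.execute("SELECT pg_cancel_backend(%s);", (offending_pid,))
conn.close()
```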
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Improve reliability of db migration testing by restoring the Database Lab instance: gitlab-org/database-team/gitlab-com-database-testing#3 (closed)
- Wraparound transaction monitoring and alerting: gitlab-cookbooks/gitlab-exporters!224 (merged). A minimal sketch of such a check appears below.
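As an illustration of what this corrective action adds, here is a sketch of a threshold check; the threshold value and output format are assumptions for illustration, not taken from gitlab-exporters!224:

```python
# Sketch: alert when wraparound headroom drops below a threshold.
# WARN_AT is an assumed example value, not the exporter's real setting.
import psycopg2

WARN_AT = 500_000_000  # hypothetical threshold, in remaining XIDs

conn = psycopg2.connect("dbname=gitlabhq_production")  # hypothetical DSN
with conn.cursor() as cur:
    # The database with the oldest unfrozen XID bounds the whole cluster.
    cur.execute("SELECT max(age(datfrozenxid)) FROM pg_database;")
    (max_age,) = cur.fetchone()
conn.close()

headroom = 2**31 - 1 - max_age
if headroom < WARN_AT:
    print(f"ALERT: only {headroom} XIDs of wraparound headroom left")
else:
    print(f"OK: {headroom} XIDs of wraparound headroom")
```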
Other Follow-up issues that are not corrective actions
- Review notes table for partitioning: gitlab-org/gitlab#331527 (closed)
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, as laid out in our handbook page. This might include the summary, timeline, or any other bits of information. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our Transparency value.
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Time to detection:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
- ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Lessons Learned
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)