2022-06-21: Long-lived transaction in primary db
Incident Roles
The DRI for this incident is the incident issue assignee; see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @jeromezng
- Engineer on-call (EOC): @msmiley
Current Status
This incident is not related to an SLO violation. Rather, it was opened to investigate the cause of a non-SLO alert for a condition that is not currently causing user-facing degradation but could do so under certain conditions.
Specifically, this alert indicates that a transaction is holding onto a relatively old transaction id (over 500K transactions behind the current one), which can degrade the efficiency of cleaning up old row-versions from the database. This can potentially lead to performance regressions under some (but not all) conditions.
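As an illustrative check (not necessarily the exact expression this alert uses), the age of the oldest snapshot still held by any backend can be inspected on the primary:

```sql
-- Approximate the alerted metric: how many transactions behind the current
-- transaction id is the oldest snapshot still held by any backend?
-- Illustrative only; the real alert expression may differ.
SELECT max(age(backend_xmin)) AS oldest_snapshot_age_in_xids
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL;
-- Values above ~500000 correspond to the threshold described above.
```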
Details
When autovacuum runs ANALYZE, it needs a read-consistent view of the table's rows as of a single point in time. In support of that, just like other read-consistent queries (SELECT, etc.), it gets an xid_snapshot. As long as that ANALYZE statement is running, row-versions deleted by transactions younger than the ones referenced by its xid_snapshot cannot be vacuumed away. This implicitly prevents VACUUM (whether run manually or by autovacuum) from removing recently dead row-versions from any table in the db for as long as the ANALYZE command continues to run. And unlike most SQL statements, autovacuum's work is not constrained by statement_timeout, so it can hold its xid_snapshot much longer than other statements.
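To see which backend is holding the oldest snapshot (in this case, an autovacuum worker running ANALYZE), a query along these lines can be run on the primary; this is a sketch, not the exact query used during the incident:

```sql
-- List the backends holding back vacuum, oldest snapshot first.
SELECT pid,
       backend_type,
       age(backend_xmin)  AS snapshot_age_in_xids,
       now() - xact_start AS transaction_duration,
       left(query, 80)    AS query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC
LIMIT 10;
-- An autovacuum worker appears with backend_type = 'autovacuum worker' and a
-- query of the form 'autovacuum: VACUUM ANALYZE <schema>.<table>'.
```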
For large tables with a high rate of churn, this effect can cause more dead row-versions to accumulate. Usually the side-effects are mild, but in special circumstances they can become severe. (For example, when a frequently scanned range of index keys accumulates a large number of dead tuples, the primary db can set hints to mitigate that overhead until the next vacuum, but the replica dbs cannot. This extra overhead is incurred repeatedly each time an index scan traverses that clump of dead items, and this can lead to significant latency and even CPU starvation in extreme cases. None of that happened this time, but it has in the past and can again.)
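The resulting dead-tuple accumulation can be observed per table; as a sketch:

```sql
-- Tables with the most accumulated dead row-versions, plus when they were
-- last vacuumed. A large n_dead_tup on hot tables is the symptom to watch for.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```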
As a mitigation, tuning autovacuum to throttle itself less will allow its ANALYZE command to finish more quickly, reducing the window of time during which other concurrent VACUUM tasks are prevented from cleaning up recently dead tuples. There are other options as well, such as partitioning, but the shortest path to improvement is autovacuum tuning.
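As a sketch of what that tuning could look like (the parameter values below are illustrative assumptions, not the settings applied to this database):

```sql
-- Loosen autovacuum's cost-based self-throttling so its VACUUM/ANALYZE work
-- finishes sooner. Values are illustrative, not taken from this incident.

-- Globally (takes effect after a configuration reload):
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 3000;
SELECT pg_reload_conf();

-- Or per table, targeting only the large high-churn tables
-- (some_large_hot_table is a placeholder name):
ALTER TABLE some_large_hot_table
  SET (autovacuum_vacuum_cost_delay = 1,
       autovacuum_vacuum_cost_limit = 3000);
```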
Summary for CMOC notice / Exec summary:
- Customer Impact: None. This could potentially lead to increased db query durations, but in this case the effect was negligible.
- Service Impact: ServicePostgres
- Impact Duration: 2022-06-21 16:38 to 17:17 UTC (39 minutes)
- Root cause: RootCauseSaturation - Autovacuum's very limited cost budget (a self-throttling mechanism) caused its ANALYZE to run for 39 minutes. During that timespan, many other transactions completed, and none of the dead tuples they generated could be vacuumed until this slow ANALYZE command finished.
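The "very limited cost budget" above refers to PostgreSQL's cost-based vacuum throttling; the relevant settings can be inspected with a query along these lines:

```sql
-- Show the cost-based throttling budget that governs how fast autovacuum
-- (including its ANALYZE phase) is allowed to work.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum_vacuum_cost_delay',
               'autovacuum_vacuum_cost_limit',
               'vacuum_cost_delay',
               'vacuum_cost_limit',
               'vacuum_cost_page_hit',
               'vacuum_cost_page_miss',
               'vacuum_cost_page_dirty');
```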
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-06-21
- 17:27 - @msmiley declares incident in Slack.
Create related issues
Use the following links to create issues related to this incident if additional work needs to be completed after it is resolved:
Takeaways
- ...
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- ...
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary.
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section.
- Fill out the relevant sections below or link to the meeting review notes that cover these topics.
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
- What went well?
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)