2022-06-14: Replication lag on server patroni-v12-10-db-gprd.c.gitlab-production.internal:9187 is currently 5m 29s
Incident Roles
The DRI for this incident is the incident issue assignee, see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @dcroft
- Engineer on-call (EOC): @cmcfarland
Current Status
Patroni-v12-10 has re-synced and seems to be in good working order. Other downstream services (patroni-ci and patroni-ci1) are being re-synced/re-built.
Summary for CMOC notice / Exec summary:
- Customer Impact: None
- Service Impact: Service::Patroni
- Impact Duration: 2022-06-14 17:00 - 2022-06-15 11:19 (1099 minutes)
- Root cause: RootCause::Config-Change
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- Gitlab.com Latest Updates
All times UTC.
2022-06-14
- 17:08 - The v12-10 lag starts to build and passes the threshold for notification
- 17:16 - @cmcfarland declares incident in Slack.
- 18:03 - The lag alert clears
- 18:38 - The replication slot is inactive on the leader node
- 22:04 - Stopped and re-started the patroni/postgres services on patroni-v12-10
2022-06-15
- 01:38 - Decided to shut down the patroni CI clusters, as they were slowing recovery and are not being used since we are already running #7167 (closed)
- 01:41 - patroni-ci1 is shut down
- 03:37 - patroni-ci is shut down
- 05:37 - Finished snapshot of v12-09
- 06:30 - v12-11 is being created as backup-replica so that it can replace v12-10 if it is built faster than v12-10 can be recovered
- 06:40 - Deleted the replication slot patroni_v12_10_db_gprd_c_gitlab_production_internal, allowing the primary to free WAL files (a sketch of this step is included after the timeline)
- 11:19 - The patroni-v12-10 node replication is no longer lagged
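The 06:40 step matters because an inactive physical replication slot forces the primary to retain WAL indefinitely. Below is a minimal sketch of how that kind of check and cleanup can be done; the connection string and user are illustrative, and only the slot name is taken from the timeline above.

```python
# Minimal sketch: confirm a replication slot is inactive and see how much WAL it
# retains before dropping it, so the primary can recycle WAL files.
import psycopg2

SLOT = "patroni_v12_10_db_gprd_c_gitlab_production_internal"

# Assumed connection string for the current leader; adjust for the real environment.
conn = psycopg2.connect("host=patroni-leader.example.internal dbname=postgres user=postgres")
conn.autocommit = True

with conn.cursor() as cur:
    # Is the slot active, and how far behind the current WAL position is it?
    cur.execute(
        """
        SELECT active,
               pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
        FROM pg_replication_slots
        WHERE slot_name = %s
        """,
        (SLOT,),
    )
    row = cur.fetchone()
    if row is None:
        print(f"slot {SLOT} does not exist")
    else:
        active, retained_wal = row
        print(f"slot {SLOT}: active={active}, retained WAL={retained_wal}")
        # Only drop the slot once nothing is consuming it; after this the
        # primary is free to remove the retained WAL files.
        if not active:
            cur.execute("SELECT pg_drop_replication_slot(%s)", (SLOT,))
            print(f"dropped slot {SLOT}")

conn.close()
```

Dropping the slot is what let the primary start recycling the retained WAL; the trade-off is that any replica that still depended on the slot has to be re-initialized afterwards, which matches the re-sync described in the current status.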
Create related issues
Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:
Takeaways
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Turbo mode needs to be used on the restore command so the replica does not fall further behind
- The GCS Snapshot script should not overrun itself when it runs long (a locking sketch follows this list)
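For the second corrective action, one common pattern is to guard the job with a non-blocking file lock so that a new run exits immediately if the previous run is still going. The sketch below is illustrative, not the actual GCS snapshot script; the lock path and snapshot command are assumptions.

```python
# Minimal sketch: prevent a long-running job from overlapping itself by taking
# a non-blocking exclusive lock on a well-known lock file.
import fcntl
import subprocess
import sys

LOCK_PATH = "/var/run/gcs-snapshot.lock"  # hypothetical lock file location


def main() -> int:
    with open(LOCK_PATH, "w") as lock_file:
        try:
            # Raises BlockingIOError if another run already holds the lock.
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("previous snapshot run still in progress; skipping this run", file=sys.stderr)
            return 0
        # Placeholder for the real snapshot work; the lock is held until this returns.
        return subprocess.call(["/usr/local/bin/gcs-snapshot.sh"])  # hypothetical command


if __name__ == "__main__":
    sys.exit(main())
```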
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section
- Fill out the relevant sections below or link to the meeting review notes that cover these topics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - No impact
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - ...
Incident Response Analysis
- How was the incident detected?
  - PagerDuty notification that the replica lag was high, 8 minutes after the lag started to grow (a sketch of the underlying lag check follows this section)
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - We looked at anything on the replica that could have caused a slowdown, mostly from the terminal.
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
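For reference, the kind of lag measurement behind that page can be reproduced directly on the replica by comparing the current time with the timestamp of the last replayed transaction. This is a minimal sketch, not the exact query used by the exporter on port 9187; the monitoring user is an assumption.

```python
# Minimal sketch: measure replication lag in seconds on a streaming replica.
import psycopg2

# Assumed connection string for the replica being monitored.
conn = psycopg2.connect(
    "host=patroni-v12-10-db-gprd.c.gitlab-production.internal dbname=postgres user=monitoring"
)
with conn.cursor() as cur:
    # Seconds since the last replayed transaction; 0 if nothing has replayed yet.
    cur.execute(
        "SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)"
    )
    lag_seconds = float(cur.fetchone()[0])
conn.close()

# An alerting layer would page once this stays above a threshold for some time,
# e.g. the ~5m29s reported in the incident title.
print(f"replication lag: {lag_seconds:.0f}s")
```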
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
What went well?
- It was stated multiple times that we were lucky to not be actively using the CI Patroni cluster for CI read data.
- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)