2022-03-18: Some notification emails are delayed
Incident DRI
Current Status
We're continuing to monitor the delays for outbound email notifications from gitlab.com. We've taken measures to mitigate the incident; however, the change we implemented can take hours to propagate across all upstream servers. More details can be found here: #6633 (comment 884199146).
Customers who believe they are affected by this incident should subscribe to this issue or monitor our status page for further updates.
Quick links
- 📍 Mailgun support ticket: https://app.mailgun.com/app/support/view/2113870
- 📍 Mitigation action: #6633 (comment 884199146)
Summary for CMOC notice / Exec summary:
- Customer Impact: Emails from gitlab.com are delayed
- Service Impact: ~"Service::GitLab Rails"
- Impact Duration: Mar 18, 01:30 UTC - ongoing
- Root cause: TBD
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- GitLab.com Latest Updates
All times UTC.
2022-03-18
- 01:28 - @engwan declares incident in Slack.
2022-03-22
- 13:28 - Incident escalated from severity3 to severity2.
- 13:41 - First Status page post.
- 14:07 - ☎ Escalated issue with Mailgun via phone.
- 14:31 - Second Status page post.
- 14:55 - Mailgun's IP added to the allowlist of Google's mail servers.
- 15:08 - Third Status page post.
- 15:20 - 🎧 Mailgun Support joins the team via Zoom.
- 16:56 - Filtered out emails showing large retry counts or failures (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6677); see the Events API sketch after the timeline.
- 17:01 - Incident downgraded from severity2 to severity3.
- 17:07 - Status page update - status changed from "Investigating" to "Monitoring".
- 20:00 - After waiting for backoff and retry periods to continue to cycle, we noticed a decline in the number of temporary and permanent email failures.
- 21:27 - Second "Monitoring" Status page update.
2022-03-23
- 02:48 - Status page update - status changed from "Monitoring" to "Resolved". Customers still experiencing delivery delays should contact GitLab Support.
- 20:45 - Two new Mailgun IPs have been added for our email sending, which should reduce the number of errors due to "too many connections". They are currently warming up (https://www.mailgun.com/blog/ip-warm-up/); see the IP listing sketch after the timeline.
2022-03-24
- 10:00 - We are continuing to monitor; some users may still see delays. See #6633 (comment 886860643).
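The 16:56 and 20:00 entries above involved identifying messages with large retry counts and watching temporary/permanent failures decline. A minimal sketch of how such a check could be run against Mailgun's Events API is below; the sending domain is a placeholder, not the one used in this incident, and the exact response fields can vary by account.

```python
# Sketch: inspect temporarily-failing Mailgun deliveries via the
# Events API (GET /v3/<domain>/events). Domain and key are placeholders.
import os

import requests

MAILGUN_DOMAIN = "mg.example.com"  # placeholder, not our real sending domain
MAILGUN_API_KEY = os.environ["MAILGUN_API_KEY"]

resp = requests.get(
    f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/events",
    auth=("api", MAILGUN_API_KEY),
    params={"event": "failed", "severity": "temporary", "limit": 100},
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    delivery = item.get("delivery-status", {})
    # "attempt-no" grows with each retry; large values indicate a message
    # stuck in the backoff/retry cycle described in the timeline.
    print(
        item.get("recipient"),
        delivery.get("attempt-no"),
        delivery.get("message") or delivery.get("description"),
    )
```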
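Similarly, for the 20:45 entry on 2022-03-23, the IPs assigned to a sending domain can be listed to confirm the new dedicated IPs are in place while they warm up. A hedged sketch, again with a placeholder domain:

```python
# Sketch: list dedicated IPs assigned to a sending domain via Mailgun's
# IP API (GET /v3/domains/<domain>/ips). Placeholders as above.
import os

import requests

MAILGUN_DOMAIN = "mg.example.com"  # placeholder sending domain
MAILGUN_API_KEY = os.environ["MAILGUN_API_KEY"]

resp = requests.get(
    f"https://api.mailgun.net/v3/domains/{MAILGUN_DOMAIN}/ips",
    auth=("api", MAILGUN_API_KEY),
)
resp.raise_for_status()

# Newly assigned IPs should appear here; traffic ramps up gradually
# while they warm up, per the Mailgun warm-up guidance linked above.
print(resp.json().get("items", []))
```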
Create related issues
Use the following links to create issues related to this incident if additional work needs to be completed after it is resolved:
- Support contact request
- Corrective action
- Investigation followup
- Confidential issue
- QA investigation
- Infradev
Takeaways
- ...
Corrective Actions
Corrective actions should be added here as soon as an incident is mitigated. Ensure that all corrective actions mentioned in the notes below are included.
- ...
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases, as laid out in our handbook page. This might include the summary, timeline, or other details. Any confidential data will be in a linked issue that is only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary.
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section.
- Fill out the relevant sections below or link to the meeting review notes that cover these topics.
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
- ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
What went well?
- ...
Guidelines
Resources
- If the Situation Zoom room was used, the recording will be automatically uploaded to the Incident room Google Drive folder (private).