2022-07-27: HTTP 502 from API requests with pagination on https://gitlab.com/api/v4/projects
Incident Roles
The DRI for this incident is the incident issue assignee; see roles and responsibilities.
Roles when the incident was declared:
- Incident Manager (IMOC): @splattael, @mkaeppler, @m_gill
- Engineer on-call (EOC): @gsgl
Current Status
At around 2022-07-26 17:30 UTC we began seeing an increase in 502 errors logged in Cloudflare. This coincided with a deploy, but the errors were not visible in the application itself. They were caused by the Content-Security-Policy (CSP) headers growing so large that response headers exceeded 4k (Nginx's default `proxy-buffer-size` setting is 4k). We have resolved this incident by reducing the CSP headers and increasing the `proxy-buffer-size` setting to 8k.
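For context on the failure mode: Nginx reads the upstream response headers into a single buffer of `proxy_buffer_size` bytes (4k by default), and a header block larger than that buffer fails the request, surfacing as a 502. The sketch below is a hypothetical diagnostic, not tooling used during the incident; it approximates the header size of an endpoint against that default:

```python
import requests  # third-party: pip install requests

# Hypothetical diagnostic: approximate the size of a response's header block
# and compare it against Nginx's default proxy_buffer_size of 4k.
URL = "https://gitlab.com/api/v4/projects"
NGINX_DEFAULT_BUFFER = 4 * 1024  # bytes; the 4k default mentioned above

resp = requests.get(URL, timeout=10)

# Approximate wire size: status line, "Name: value\r\n" per header, blank line.
size = len(f"HTTP/1.1 {resp.status_code} {resp.reason}\r\n") + 2
size += sum(len(f"{k}: {v}\r\n") for k, v in resp.headers.items())

verdict = "over" if size > NGINX_DEFAULT_BUFFER else "under"
print(f"~{size} header bytes, {verdict} the 4k default proxy_buffer_size")
```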
Summary for CMOC notice / Exec summary:
- Customer Impact: Some requests to `/api/v4/projects` are timing out.
- Service Impact: `Service::API`
- Impact Duration: July 26, 17:30 UTC to July 27, 14:13 UTC.
- Root cause: Rails upgrade
Timeline
Recent Events (available internally only):
- Deployments
- Feature Flag Changes
- Infrastructure Configurations
- GCP Events (e.g. host failure)
- GitLab.com Latest Updates
All times UTC.
2022-07-26
- 17:30 - increase in 502 errors logged in Cloudflare.
2022-07-27
- 05:46 - @pguinoiseau declares incident in Slack.
- 10:49 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1989 merged.
- 11:25 - MR applied.
- 11:30 - 502s drop considerably.
- 12:46 - receiving a report of some 502s for specific paths.
- 13:35 - gitlab-com/gl-infra/k8s-workloads/gitlab-com!1990 merged.
- 14:13 - Incident confirmed as resolved.
Create related issues
Use the following links to create related issues to this incident if additional work needs to be completed after it is resolved:
Takeaways
- ...
Corrective Actions
- Increase the `proxy_buffer_size` (done)
- Prevent generating links with all of the params present. For example, the `Link` header can grow quite large; this happens for all endpoints, but the Project and Group/Project endpoints have a lot of possible params (see the sketch after this list):
  `<https://gitlab.com/api/v4/projects?imported=false&membership=false&order_by=created_at&owned=false&page=1&per_page=20&repository_checksum_failed=false&simple=false&sort=desc&starred=false&statistics=false&wiki_checksum_failed=false&with_custom_attributes=false&with_issues_enabled=false&with_merge_requests_enabled=false>; rel="prev", <https://gitlab.com/api/v4/projects?imported=false&membership=false&order_by=created_at&owned=false&page=3&per_page=20&repository_checksum_failed=false…`
- Use a different CSP for API endpoints (in progress)
- Create tests/validation for CSP headers across environments (see the test sketch after this list)
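The actual fix for link generation would live in the Rails API code; the Python sketch below only illustrates the idea, and `DEFAULTS` and `pagination_link` are hypothetical names. Dropping parameters that still hold their default values keeps each `Link` entry short:

```python
from urllib.parse import urlencode

# Hypothetical defaults for /api/v4/projects pagination params; the real list
# lives in the GitLab Rails API layer.
DEFAULTS = {
    "membership": "false",
    "owned": "false",
    "simple": "false",
    "starred": "false",
    "statistics": "false",
    "order_by": "created_at",
    "sort": "desc",
}

def pagination_link(base_url: str, params: dict, page: int, rel: str) -> str:
    """Build one entry of a Link header, omitting params left at defaults."""
    pruned = {k: v for k, v in params.items() if DEFAULTS.get(k) != v}
    pruned["page"] = page
    return f'<{base_url}?{urlencode(sorted(pruned.items()))}>; rel="{rel}"'

# With every param at its default, the generated link stays short instead of
# echoing back a dozen redundant key/value pairs:
print(pagination_link("https://gitlab.com/api/v4/projects",
                      {**DEFAULTS, "per_page": "20"}, page=2, rel="next"))
# <https://gitlab.com/api/v4/projects?page=2&per_page=20>; rel="next"
```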
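For the last item, a minimal smoke test could assert that the CSP header stays well below the proxy buffer in every environment. This is a sketch assuming `pytest` and `requests`; the environment list and the byte budget are illustrative, not the real values:

```python
import pytest
import requests

# Illustrative environment list and budget; adjust to the real deployment.
ENVIRONMENTS = [
    "https://gitlab.com",
    "https://staging.gitlab.com",
]
CSP_BUDGET_BYTES = 2 * 1024  # leave headroom under the 8k proxy_buffer_size

@pytest.mark.parametrize("base_url", ENVIRONMENTS)
def test_csp_header_fits_budget(base_url):
    resp = requests.get(base_url, timeout=10)
    csp = resp.headers.get("Content-Security-Policy", "")
    assert csp, f"no CSP header returned by {base_url}"
    assert len(csp.encode()) <= CSP_BUDGET_BYTES, (
        f"CSP header is {len(csp.encode())} bytes on {base_url}, "
        f"over the {CSP_BUDGET_BYTES}-byte budget"
    )
```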
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or other details, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
- Ensure that the exec summary is completed at the top of the incident issue, the timeline is updated, and relevant graphs are included in the summary.
- If there are any corrective action items mentioned in the notes on the incident, ensure they are listed in the "Corrective Actions" section.
- Fill out the relevant sections below or link to the meeting review notes that cover these topics.
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
What were the root causes?
- ...
Incident Response Analysis
- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...
Post Incident Analysis
- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)