2021-05-06 Gitaly hooks often receive 404 in /api/v4/post_receive endpoint

Current Status

teamDelivery started serving a small percentage of Canary traffic to the API Service on Kubernetes as part of the Kubernetes migration.

Around 18:00 (https://log.gprd.gitlab.net/goto/6eace831119d7e6170902fdafc399446), we saw an uptick of 404 errors from the /api/v4/post_receive endpoint called by the Gitaly hooks:

(screenshot: spike in 404 responses from /api/v4/post_receive)

This causes pushes and merges to fail. A retry usually succeeds, as the next request is often served by the VMs that are still working.
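The retry behaviour can be illustrated with a small simulation: only the canary fraction of requests hits the broken backend, so a second attempt very likely lands on a working VM. The 5% canary share below is an assumed figure for illustration, not the actual traffic split:

```python
import random

CANARY_SHARE = 0.05  # assumed canary traffic fraction, for illustration only

def post_receive_ok(rng: random.Random) -> bool:
    """A request succeeds unless it lands on the broken canary backend (404)."""
    return rng.random() >= CANARY_SHARE

def push_with_retry(rng: random.Random, retries: int = 1) -> bool:
    """A push succeeds if any attempt is served by a working VM."""
    return any(post_receive_ok(rng) for _ in range(retries + 1))
```

With these numbers a single attempt fails 5% of the time, but one retry drops the failure rate to 0.25%, matching the observation that a retry usually works.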

Draining canary and reverting the Kubernetes setup mitigated the issue.

More information will be added as we investigate the issue.

Timeline

View recent production deployment and configuration events (internal only)

All times UTC.

2021-05-06

  • 18:00 - Canary API traffic is routed to GKE as per https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4406
  • 23:35 - @stanhu declares incident in Slack.
  • 23:49 - Cindy drains canary

2021-05-07

Corrective Actions

Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.

  • ...

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases; this might include the summary, timeline, or other information, as laid out in our handbook page. Any such confidential data will be kept in a linked issue, visible only internally. By default, all information we can share will be public, in accordance with our transparency value.

Incident Review

Summary

  1. Service(s) affected: ServiceAPI
  2. Team attribution: teamDelivery
  3. Time to detection: 335 minutes
  4. Minutes downtime or degradation: 349 minutes

Metrics

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. Internal and external customers; determined by whether a request was routed to canary, either via the canary cookie set through next.gitlab.com or via project-path-specific matching as defined by https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/blob/9ea8d785b0854fd27fab1161460494184705a130/roles/gprd-base-lb-fe-config.json#L66
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. Requests will have failed with an HTTP 404; a Git client would have seen a pre-receive hook failure; merging via the UI will have failed intermittently
  3. How many customers were affected?
    1. Unknown
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. 5,426 unique projects saw at least some impact
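The canary selection described in (1) above can be sketched as follows; the cookie name and the example project path are assumptions for illustration and do not reproduce the actual gprd-base-lb-fe-config.json contents:

```python
# Illustrative sketch of canary request selection (not the real LB config).
# A request goes to canary either because the user opted in via a cookie
# (set through next.gitlab.com) or because the project path matches a
# configured prefix list.

CANARY_PATH_PREFIXES = ("/gitlab-org/gitlab",)  # hypothetical example entry

def is_canary(cookies: dict, path: str) -> bool:
    """Decide whether a request should be routed to the canary fleet."""
    if cookies.get("gitlab_canary") == "true":  # cookie name assumed
        return True
    return path.startswith(CANARY_PATH_PREFIXES)
```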

What were the root causes?

  • ConfigurationChange via the introduction of our Kubernetes infrastructure for the API service's Canary stage
  • We were missing a configuration entry that would allow the nginx ingress to route traffic to the appropriate backend
  • The nginx ingress relies on the request's Host header to route traffic to a specific service; we only had a rule for gitlab.com and none for int.gprd.gitlab.net
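The missing host rule can be sketched as follows. The host names are taken from this incident, but the rule table, backend name, and matching logic are illustrative assumptions, not the actual ingress configuration:

```python
# Minimal sketch of nginx-ingress-style host matching (illustrative only).
# The ingress only carried a rule for "gitlab.com"; requests arriving for the
# internal host int.gprd.gitlab.net therefore fell through to the default
# backend, which answers 404 -- the error the Gitaly hooks observed.

INGRESS_RULES = {
    "gitlab.com": "gitlab-webservice",  # hypothetical backend name
}

def route(host: str) -> tuple:
    """Return (backend, status) for a request with the given Host header."""
    backend = INGRESS_RULES.get(host)
    if backend is None:
        return ("default-backend", 404)  # no host rule matched
    return (backend, 200)
```

Under this sketch, adding a second host rule for int.gprd.gitlab.net pointing at the same backend would restore routing for the internal endpoint.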

Incident Response Analysis

  1. How was the incident detected?
    1. User report
  2. How could detection time be improved?
    1. Appropriate log observation
  3. How was the root cause diagnosed?
    1. Log observation
  4. How could time to diagnosis be improved?
    1. Earlier, more thorough log observation
  5. How did we reach the point where we knew how to mitigate the impact?
    1. When the issue was first reported, the rollback procedure was performed immediately
  6. How could time to mitigation be improved?
    1. We followed procedure here; n/a
  7. What went well?
    1. Root cause analysis, mitigation, and resolution were all completed within a 24-hour period, limiting the impact on the desired change request and allowing us to quickly push forward with enabling Kubernetes traffic for this workload. We also discovered the issue in canary, where only a limited amount of traffic arrives, so the impact on production was very small.

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. Not as an incident; however, this was a repeat of an event discovered when the service was rolled out in our staging environment. Details are here: delivery#1464 (comment 514692797)
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. No
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. Yes - #4406 (closed)

Lessons Learned

  • A more guided and thorough log analysis should have occurred when the service started receiving traffic; while one was completed, it did not go into enough detail to surface this incident: #4406 (comment 569148195)
  • QA should have been run following the enablement; this would not have been a fully valid test, since some traffic was still served by our existing VM fleet
  • An issue should have been created to track the failure we experienced in staging to ensure that the configuration change was carried over into production

Guidelines

Resources

  1. If the Situation Zoom room was used, the recording will be automatically uploaded to the Incident room Google Drive folder (private)
Edited by John Skarbek