2021-07-27 Increased rates of Rails connection pool saturation

Current Status

GitLab.com had been under a hard PCL (Production Change Lock) for the previous week due to a number of high severity incidents. The PCL included code deploys. On 2021-07-27 the PCL was lifted and a single production deploy containing a week's worth of changes was rolled out successfully. @stanhu noticed that, following the deploy, we were seeing a large number of errors in Sentry related to database connection pool exhaustion. Later, EOC @ggillies was paged about an elevated error rate on the Sidekiq quarantine shard, which was determined to be the same problem.

It was identified that change gitlab-org/gitlab!65262 (merged) had a bug where the total connection pool size for the process was being calculated incorrectly, resulting in a pool that was too small.

@tkuah authored the bugfix at gitlab-org/gitlab!66988 (merged).
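
For readers unfamiliar with this class of bug, the snippet below is a minimal illustrative sketch in plain Ruby, not GitLab's actual code (the class and method names are hypothetical): a database config that is memoized before the dynamically computed pool size is applied silently falls back to the default of 5.

```ruby
# Illustrative sketch only -- not the actual GitLab code; names are hypothetical.
# It shows the general pattern: a memoized database config captured before the
# dynamic pool size is applied silently reverts the pool to the default of 5.
class DatabaseConfig
  DEFAULT_POOL_SIZE = 5

  # The refactor added memoization here: the first caller caches the computed
  # hash, so a pool size applied later is never picked up.
  def config
    @config ||= { adapter: "postgresql", pool: @dynamic_pool_size || DEFAULT_POOL_SIZE }
  end

  # Load balancing setup computes a per-process pool size (for example,
  # Sidekiq concurrency plus headroom) and records it here.
  def dynamic_pool_size=(size)
    @dynamic_pool_size = size
  end
end

db = DatabaseConfig.new
db.config                  # unrelated code fetches (and memoizes) the config early
db.dynamic_pool_size = 23  # load balancing then computes the real pool size
puts db.config[:pool]      # => 5 -- the dynamic value has been clobbered
```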

Timeline

View recent production deployment and configuration events / gcp events (internal only)

All times UTC.

2021-07-27

  • 23:04 - @stanhu declares incident in Slack.
  • 23:36 - @ggillies receives page https://gitlab.pagerduty.com/incidents/PDFE4Y0?utm_source=slack&utm_campaign=channel

2021-07-28

  • 00:12 - @ggillies looks through the commits in the latest deploy (since the last production deploy) and notices that change gitlab-org/gitlab!65262 (merged) could be the culprit https://gitlab.slack.com/archives/C0296ABM8EA/p1627431167002100
  • 00:31 - @ggillies engages the dev escalation process; @lmejia2 responds and starts assisting
  • 01:50 - @tkuah identifies the issue and confirms it was introduced by change gitlab-org/gitlab!65262 (merged)
  • 02:19 - @tkuah opens MR gitlab-org/gitlab!66988 (merged) to fix the issue
  • 03:12 - @ashmckenzie merges MR gitlab-org/gitlab!66988 (merged)
  • 06:20 - Auto-deploy branch https://ops.gitlab.net/gitlab-org/release/tools/-/pipelines/714769 containing the fix is started
  • 10:24 - The fix reaches the Canary environment
  • 12:05 - The production deployment completes and @hphilipps confirms we're no longer seeing the issue - #5238 (comment 637532637)
  • 12:23 - Incident marked as IncidentMitigated

Corrective Actions

Corrective actions should be added here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.

  • ...

Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline, or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.



Incident Review

Summary

  1. Service(s) affected: All services requesting a database connection
  2. Team attribution: ~"group::sharding"
  3. Minutes downtime or degradation: approximately 13 hours from detection (23:04 UTC, 2021-07-27) to being marked as IncidentMitigated (12:23 UTC, 2021-07-28)

Metrics

Metrics listed in the comments of this issue

  • Sidekiq quarantine error rate #5238 (comment 636965253)
  • Sidekiq connection errors #5238 (comment 636970465)
  • View of the general Rails services during July 26 - July 28 (see attached image)

Customer Impact

  1. Who was impacted by this incident? (i.e. external customers, internal customers)
    1. Sidekiq workers that requested a database connection were hitting connection saturation (see the sketch after this list) - #5238 (comment 636989314)
    • https://gitlab.com/gitlab-org/release/retrospectives/-/issues/39#note_638243988
    1. Web and API nodes were unaffected; the pool size for those is always 5. See also https://dashboards.gitlab.net/d/alerts-sat_rails_db_connection_pool/alerts-rails_db_connection_pool-saturation-detail?orgId=1&from=1627392600000&panelId=391047339&to=1627415100000&tz=UTC&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=web&var-stage=main&viewPanel=391047339
  2. What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
    1. Some Sidekiq jobs might have been delayed because they failed to obtain a database connection in time.
  3. How many customers were affected?
    1. It could have potentially affected all customers.
  4. If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
    1. ...
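
To make the Sidekiq impact concrete, the following is a minimal reproduction sketch using plain ActiveRecord (assumptions: the activerecord and sqlite3 gems are installed; this is not Sidekiq or GitLab code): with only 5 connections in the pool, worker threads beyond the pool size raise ActiveRecord::ConnectionTimeoutError instead of completing their work.

```ruby
# Minimal reproduction sketch -- not Sidekiq or GitLab code.
# Assumes the activerecord and sqlite3 gems are installed.
require "active_record"

ActiveRecord::Base.establish_connection(
  adapter: "sqlite3",
  database: ":memory:",
  pool: 5,               # the default the pool reverted to
  checkout_timeout: 0.5  # fail fast for the demo
)

failures = Queue.new # thread-safe counter

threads = Array.new(20) do
  Thread.new do
    # Each "worker" checks out a connection and holds it briefly.
    ActiveRecord::Base.connection_pool.with_connection { sleep 1 }
  rescue ActiveRecord::ConnectionTimeoutError
    failures << 1
  end
end

threads.each(&:join)
puts "#{failures.size} of 20 workers could not obtain a connection in time"
```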

What were the root causes?

Refactor Gitlab::Database to support multiple DBs

  • Caching was added in a refactor, leading to the dynamic pool size being clobbered. This caused the pool size to revert to the smaller default of 5 (the sketch in Current Status above illustrates the pattern).
    • Why was this issue not noticed? Two unrelated things interacted (fetching the DB config, and setting up GitLab DB load balancing)
    • There was no test checking the expected pool size after setting up DB load balancing (corrective action done in gitlab-org/gitlab!66988 (merged)); a minimal example of such a test is sketched after this list
    • Our test suite does not set up DB load balancing by default (gitlab-org/gitlab#333184 (closed) is a potential corrective action)
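
As a rough illustration of the kind of regression test that was missing, here is a minimal RSpec sketch against plain ActiveRecord (the setup and the expected pool size are assumptions for the example, not the actual GitLab spec): it asserts that the pool uses the dynamically computed size rather than the default of 5.

```ruby
# Minimal RSpec sketch -- not the actual GitLab spec; setup and numbers are
# assumptions. Assumes the activerecord, sqlite3 and rspec gems are installed.
require "active_record"
require "rspec/autorun"

RSpec.describe "database connection pool size" do
  # Hypothetical dynamically computed value, e.g. Sidekiq concurrency + headroom.
  let(:expected_pool_size) { 23 }

  before do
    ActiveRecord::Base.establish_connection(
      adapter: "sqlite3",
      database: ":memory:",
      pool: expected_pool_size
    )
  end

  it "uses the dynamically computed pool size, not the default of 5" do
    expect(ActiveRecord::Base.connection_pool.size).to eq(expected_pool_size)
  end
end
```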

Incident Response Analysis

  1. How was the incident detected?
    1. Connection pool saturation alerts on production (see the sketch after this list)
  2. How could detection time be improved?
    1. Suggestions here https://gitlab.com/gitlab-org/release/retrospectives/-/issues/39#note_637958349
    2. We could have responded to a similar alert in Staging - https://gitlab.com/gitlab-org/release/retrospectives/-/issues/39#note_638564504, but there's currently too much alert noise
  3. How was the root cause diagnosed?
    1. Sentry Alerts
  4. How could time to diagnosis be improved?
    1. ...
  5. How did we reach the point where we knew how to mitigate the impact?
    1. Once @ggillies found a potential MR, it was straightforward to test which exact code changes could have led to the pool size reduction.
  6. How could time to mitigation be improved?
    1. ...
  7. What went well?
    1. @ggillies and @tkuah did a good job of identifying the root cause and authoring an MR to mitigate it.
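
For reference on what a connection pool saturation signal measures, the sketch below reads the pool's busy/size ratio from a running process via ActiveRecord's ConnectionPool#stat (plain ActiveRecord with sqlite3; this is not GitLab's exporter or alerting code, and the actual alert definition may differ).

```ruby
# Sketch of reading pool saturation from a Rails process -- not GitLab's
# exporter/alerting code. Assumes the activerecord and sqlite3 gems.
require "active_record"

ActiveRecord::Base.establish_connection(
  adapter: "sqlite3", database: ":memory:", pool: 5
)

# Hold one connection so the numbers are non-zero.
ActiveRecord::Base.connection_pool.with_connection do
  stat = ActiveRecord::Base.connection_pool.stat
  saturation = stat[:busy].to_f / stat[:size]
  puts "busy=#{stat[:busy]} size=#{stat[:size]} saturation=#{(saturation * 100).round}%"
end
```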

Post Incident Analysis

  1. Did we have other events in the past with the same root cause?
    1. ...
  2. Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
    1. ...
  3. Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
    1. ...

Lessons Learned

  • ...

Guidelines

  • Blameless RCA Guideline

Resources

  1. If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)