
Set default DB pool size to 1

Sean McGivern requested to merge set-default-db-pool-size-to-1 into master

The application already resizes this upwards when running in a multi-threaded environment like Sidekiq or Puma. A default of 10 means we will often reserve more connections than we need.
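As a minimal sketch of the resizing behaviour described above (method name hypothetical, not the actual GitLab code): the pool is only ever grown to match the thread count, never shrunk, so a small default is safe in every environment.

```ruby
# Hypothetical sketch: the effective pool size is the larger of the
# configured default and the process's thread count.
def effective_pool_size(configured_pool_size, thread_count)
  [configured_pool_size, thread_count].max
end

effective_pool_size(1, 17)  # => 17; grown to match a busy Sidekiq pod
effective_pool_size(10, 4)  # => 10; the old default wastes 6 connections
```

This is why the catchall pod ends up at the same size on both branches, while a default of 1 lets low-concurrency pods stay small.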

See gitlab-org/omnibus-gitlab!3696 (merged) / gitlab-org/omnibus-gitlab#4631 (closed) for where this was done in Omnibus previously.

To test this, I used a simple chart config:

```yaml
gitlab:
  sidekiq:
    pods:
      - name: elasticsearch
        cluster: true
        experimentalQueueSelector: true
        queues: 'feature_category=global_search&urgency=throttled'
        concurrency: 2
      - name: catchall
        cluster: true
        experimentalQueueSelector: true
        negateQueues: 'feature_category=global_search&urgency=throttled'
        concurrency: 15
```

And ran:

```shell
kubectl get pods | grep -o 'gitlab-sidekiq[^ ]*' | xargs -I_ sh -c 'echo _; kubectl exec _ -- tail -n 1 /var/log/gitlab/application.log'
```

On master:

```plaintext
gitlab-sidekiq-catchall-v1-7fd84b756-b2b8k
2020-07-02T13:37:59.948Z: DB connection pool size: 17 (increased from 10 to match thread count)
gitlab-sidekiq-elasticsearch-v1-7d857cf57d-ljtrs
2020-07-02T13:37:53.511Z: DB connection pool size: 10
```

On this branch:

```plaintext
gitlab-sidekiq-catchall-v1-bb996bd68-szpz6
2020-07-02T13:50:04.149Z: DB connection pool size: 17 (increased from 1 to match thread count)
gitlab-sidekiq-elasticsearch-v1-8c9f48bb6-4bwcd
2020-07-02T13:47:20.638Z: DB connection pool size: 4 (increased from 1 to match thread count)
```

So we can see that we got the right size for the catchall pod regardless, but we avoided wasting connections in the elasticsearch pod.
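The observed numbers also line up with the configured concurrencies if we assume (an assumption, not confirmed in this MR) that the thread count is the Sidekiq concurrency plus two internal threads:

```ruby
# Hypothetical reconstruction of the observed thread counts, assuming two
# extra internal threads on top of the configured Sidekiq concurrency.
EXTRA_THREADS = 2

def expected_pool_size(concurrency)
  concurrency + EXTRA_THREADS
end

expected_pool_size(15) # => 17, matches the catchall pod's log line
expected_pool_size(2)  # => 4, matches the elasticsearch pod's log line
```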

For gitlab-com/gl-infra/scalability#447 (closed).

