Sidekiq deployments all use the same selector
This results in only one set of pods, with the different Deployments' ReplicaSets fighting over ownership of those pods and repeatedly scaling the set up and down if they are configured with different replica counts.
According to the Kubernetes spec, Deployments use selectors to determine which pods belong to them, and make scaling and rollout decisions based on that.
Currently, the chart creates multiple Deployments for Sidekiq pods based on the gitlab.sidekiq.pods
array. There is a bug in the chart, though: it doesn't use a unique selector for each Deployment, so if deployed, they would all select the same pods! According to the official documentation, that is dangerous and unsupported behaviour (see specifically the note under Writing a Deployment Spec > Selector).
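A minimal sketch of the overlap, with hypothetical names (not taken from the chart): two Deployments generated from the pods array whose selectors match the exact same label set, so both claim the same pods.

```yaml
# Hypothetical example, not the chart's actual output:
# both Deployments select app=sidekiq, so their ReplicaSets
# adopt the same pods and fight over scaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidekiq-mailers        # entry 1 of gitlab.sidekiq.pods
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sidekiq             # identical selector in both Deployments
  template:
    metadata:
      labels:
        app: sidekiq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidekiq-imports        # entry 2 of gitlab.sidekiq.pods
spec:
  replicas: 5                  # different replica count -> constant up/downscaling
  selector:
    matchLabels:
      app: sidekiq             # same selector again
  template:
    metadata:
      labels:
        app: sidekiq
```

The fix would be to add a per-entry label (for example something like `queue-pod-name: mailers`) to both the selector and the pod template of each generated Deployment, so every selector is unique.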
Although it might not break with the default configuration (a single all-serving Sidekiq deployment), I'd rather it not break in the other, more customized cases either.