2022-12-01: Gradually increase maxReplicas for 2 Sidekiq shards
Production Change
Change Summary
Gradually increase maxReplicas for 2 shards that currently, at times, use all of the replicas available to them. This was brought up in https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/309#note_1116023566.
- `catchall` from 300 to 330 in steps of 10 (https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/309#note_1192955270)
- `urgent_cpu_bound` from 157 to 172
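The increases themselves land through the k8s-workloads merge requests listed in the steps below, not by patching anything directly. Purely as an illustration of what each step amounts to, here is a sketch using the official Kubernetes Python client; the HPA names and the namespace are hypothetical placeholders, not values taken from the MRs.

```python
# Illustrative only: the actual change goes through the k8s-workloads MRs.
# HPA names and namespace below are hypothetical placeholders.
from kubernetes import client, config

# Each target list is one observed step of the rollout.
STEPS = {
    "gitlab-sidekiq-catchall-v2": [310, 320, 330],    # catchall: 300 -> 330
    "gitlab-sidekiq-urgent-cpu-bound-v2": [172],      # urgent_cpu_bound: 157 -> 172
}


def bump_max_replicas(api: client.AutoscalingV1Api, name: str,
                      max_replicas: int, namespace: str = "gitlab") -> None:
    """Patch spec.maxReplicas on one HorizontalPodAutoscaler."""
    api.patch_namespaced_horizontal_pod_autoscaler(
        name=name,
        namespace=namespace,
        body={"spec": {"maxReplicas": max_replicas}},
    )


if __name__ == "__main__":
    config.load_kube_config()  # uses the current kubectl context
    api = client.AutoscalingV1Api()
    for hpa_name, targets in STEPS.items():
        # Apply only the first pending step; later steps follow once the
        # monitoring below looks healthy.
        bump_max_replicas(api, hpa_name, targets[0])
        print(f"{hpa_name}: maxReplicas -> {targets[0]}")
```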
Change Details
- Services Impacted - ~"Service::Sidekiq"
- Change Technician - @reprazent
- Change Reviewer - @reprazent
- Time tracking - N/A
- Downtime Component - N/A
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
- Set label ~"change::in-progress": /label ~change::in-progress
- Merge gitlab-com/gl-infra/k8s-workloads/gitlab-com!2164 (merged)
- Merge gitlab-com/gl-infra/k8s-workloads/gitlab-com!2166 (merged) (see the read-back sketch after these steps)
- Set label ~"change::complete": /label ~change::complete
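After each MR merges, a quick read-back confirms the HPA spec was actually updated. A minimal sketch, assuming the same hypothetical HPA names and namespace as above:

```python
# Read back spec.maxReplicas and current/desired replica counts.
# HPA names and namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.AutoscalingV1Api()

for name in ("gitlab-sidekiq-catchall-v2", "gitlab-sidekiq-urgent-cpu-bound-v2"):
    hpa = api.read_namespaced_horizontal_pod_autoscaler(name=name, namespace="gitlab")
    print(f"{name}: maxReplicas={hpa.spec.max_replicas} "
          f"desired={hpa.status.desired_replicas} "
          f"current={hpa.status.current_replicas}")
```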
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
- Revert gitlab-com/gl-infra/k8s-workloads/gitlab-com!2166 (merged)
- Revert gitlab-com/gl-infra/k8s-workloads/gitlab-com!2164 (merged)
- Set label ~"change::aborted": /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: HPA Saturation dashboard
  - Location: https://dashboards.gitlab.net/d/alerts-sat_kube_horizontalpodautoscaler/alerts-kube_horizontalpodautoscaler_desired_replicas-saturation-detail?var-environment=gprd&var-type=sidekiq&var-stage=main&var-component=kube_horizontalpodautoscaler_desired_replicas&orgId=1&from=now-2d&to=now
  - What changes to this metric should prompt a rollback: This is the metric we're trying to improve; saturation for `catchall` and `urgent_cpu_bound` should go down. (See the query sketch directly below.)
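For reference, the ratio behind that dashboard can also be queried directly. A sketch assuming the standard kube-state-metrics series and a placeholder Prometheus endpoint:

```python
# Query the HPA saturation ratio (desired replicas / maxReplicas).
# The Prometheus URL and namespace are placeholders, not the real values.
import requests

PROM = "https://prometheus.example.internal"
QUERY = (
    "max by (horizontalpodautoscaler) ("
    '  kube_horizontalpodautoscaler_status_desired_replicas{namespace="gitlab"}'
    '  / kube_horizontalpodautoscaler_spec_max_replicas{namespace="gitlab"}'
    ")"
)

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    hpa = result["metric"]["horizontalpodautoscaler"]
    ratio = float(result["value"][1])
    print(f"{hpa}: {ratio:.0%} of maxReplicas")  # ~100% means the HPA is capped
```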
- Metric: pgbouncer backend connections
  - Location:
    - Primary: https://dashboards.gitlab.net/d/alerts-sat_pgbouncer_async_pool_primary/alerts-pgbouncer_async_primary_pool-saturation-detail?orgId=1
    - Secondary: https://dashboards.gitlab.net/d/alerts-sat_pgbouncer_async_pool_replica/alerts-pgbouncer_async_replica_pool-saturation-detail?orgId=1 (this one is unlikely to saturate)
  - What: If no backend connections are available, we might have to increase the connection pools; otherwise jobs will wait to obtain a connection, and the newly added pods won't actually be used.
- Metric: pgbouncer client connections
  - Location:
  - What: Running out of client connections could cause errors when executing jobs, since no connection would be available. The pgbouncers for Sidekiq seem to have plenty of headroom. (See the `SHOW POOLS` sketch directly below.)
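To eyeball both sides of a pgbouncer pool at once (client connections waiting, backend connections in use), pgbouncer's admin console supports `SHOW POOLS`. A sketch with placeholder connection details:

```python
# Inspect pgbouncer pools via the admin console. Host, port, and user are
# placeholders; autocommit is required because the admin console does not
# support transactions.
import psycopg2

conn = psycopg2.connect(
    host="pgbouncer.example.internal",  # placeholder
    port=6432,
    dbname="pgbouncer",                 # the admin pseudo-database
    user="pgbouncer-admin",             # placeholder
)
conn.autocommit = True
cur = conn.cursor()
cur.execute("SHOW POOLS")
cols = [c.name for c in cur.description]
for row in cur.fetchall():
    pool = dict(zip(cols, row))
    # cl_waiting > 0 or a growing maxwait means jobs are queueing for a connection.
    print(pool["database"],
          "cl_active:", pool["cl_active"],
          "cl_waiting:", pool["cl_waiting"],
          "maxwait:", pool["maxwait"])
```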
- Metric: Redis primary CPU
  - Location: https://dashboards.gitlab.net/d/alerts-sat_redis_primary_cpu/alerts-redis_primary_cpu-saturation-detail?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=redis&var-stage=main
  - What: Check both persistent and cache, which are close to saturation. The increased workload could push them over the edge, though that seems unlikely.
- Metric: Kube pool max nodes
  - Location: https://dashboards.gitlab.net/d/alerts-sat_kube_pool_max_nodes/alerts-kube_pool_max_nodes-saturation-detail?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=sidekiq&var-stage=main
  - What: Make sure there are enough nodes available to schedule the new pods on. (See the node-pool sketch directly below.)
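A quick way to compare the current node count against the pool's maximum, assuming a hypothetical GKE node-pool label value and a placeholder maximum:

```python
# Count nodes in the Sidekiq node pool and compare against the pool maximum.
# The node-pool name and POOL_MAX are assumptions, not taken from config.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

POOL_LABEL = "cloud.google.com/gke-nodepool=sidekiq-catchall"  # hypothetical
POOL_MAX = 100  # placeholder for the pool autoscaler's maximum

nodes = v1.list_node(label_selector=POOL_LABEL).items
print(f"{len(nodes)}/{POOL_MAX} nodes in use "
      f"({len(nodes) / POOL_MAX:.0%} of the pool maximum)")
```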
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.