Enable external metrics server for Puma
Production Change
Change Summary
As per gitlab-org/gitlab#350548 (closed), we are extracting the in-process metrics server from Puma into its own process. This has already happened on staging-ref.

The change is currently behind an environment variable: `PUMA_EXTERNAL_METRICS_SERVER`.
Change Details
- Services Impacted - webservice
- Change Technician - DRI for the execution of this change
- Change Reviewer - DRI for the review of this change
- Time tracking - Time, in minutes, needed to execute all change steps, including rollback
- Downtime Component - If there is a need for downtime, include downtime estimate here
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Change Steps - steps to take to execute the change
Set the `PUMA_EXTERNAL_METRICS_SERVER` env var to something truthy in all environments (see the sketch below the MR list):
- Staging: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1578 (merged)
- Prod cny: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1609 (merged)
- All envs: gitlab-com/gl-infra/k8s-workloads/gitlab-com!1610 (merged)
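The merged MRs above carry the actual change; the following is only a minimal sketch of the equivalent edit, assuming a Helm values layout like the GitLab chart's `extraEnv` map (file path, key path, and deployment name are assumptions, not necessarily the ones used in k8s-workloads):

```sh
# Sketch only: set the flag in the webservice chart values (hypothetical path/keys).
yq -i '.gitlab.webservice.extraEnv.PUMA_EXTERNAL_METRICS_SERVER = "true"' values.yaml

# After the deployment rolls, confirm the variable is present in a pod
# (deployment name is an assumption):
kubectl -n gitlab exec deploy/gitlab-webservice-default -- \
  env | grep PUMA_EXTERNAL_METRICS_SERVER
```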
Post-Change Steps - steps to take to verify the change
- Verify metrics are still served for Puma.
- Verify metrics are served from the external process (`web_exporter_ruby_*` metrics should exist; these come from the new Ruby process). See the verification sketch below.
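A sketch of how these checks could be run. The deployment name and exporter port are assumptions, so substitute the real values for the environment; the Thanos query assumes `jq` is installed:

```sh
# Spot-check one pod's metrics endpoint for the new exporter's self-metrics:
kubectl -n gitlab port-forward deploy/gitlab-webservice-default 8083:8083 &
curl -s localhost:8083/metrics | grep '^web_exporter_ruby_'

# Confirm fleet-wide through Thanos:
curl -sG 'https://thanos-query.ops.gitlab.net/api/v1/query' \
  --data-urlencode 'query=count by (env) (web_exporter_ruby_process_start_time_seconds{app="webservice"})' \
  | jq '.data.result'
```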
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Set the `PUMA_EXTERNAL_METRICS_SERVER` env var to something falsey.
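This mirrors the change step; a sketch under the same assumptions as above (hypothetical values path):

```sh
# Rollback sketch: flip the flag to a falsey value and let the deployment roll.
yq -i '.gitlab.webservice.extraEnv.PUMA_EXTERNAL_METRICS_SERVER = "false"' values.yaml
```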
Monitoring
Key metrics to observe
- Metric: `web_exporter_ruby_process_start_time_seconds`
- Location: https://thanos-query.ops.gitlab.net/graph?g0.expr=avg%20by%20(env)%20(web_exporter_ruby_process_start_time_seconds%7Bapp%3D%22webservice%22%7D)&g0.tab=0&g0.stacked=0&g0.range_input=1h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
- What changes to this metric should prompt a rollback: the metric does not exist. If other metrics are still served, it means we merely failed to start the new process and are still serving from the in-app server; this might not necessitate a rollback, just investigation and a follow-up fix (see the query sketch below).
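For reference, the Location link above encodes the query `avg by (env) (web_exporter_ruby_process_start_time_seconds{app="webservice"})`. The rollback trigger ("the metric does not exist") can also be checked from the CLI via the Prometheus-compatible query API (assumes `jq`):

```sh
# Non-empty result means the metric is absent, i.e. the rollback trigger fired:
curl -sG 'https://thanos-query.ops.gitlab.net/api/v1/query' \
  --data-urlencode 'query=absent(web_exporter_ruby_process_start_time_seconds{app="webservice"})' \
  | jq '.data.result'
```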
Summary of infrastructure changes
We are starting one extra Ruby process per Puma pod. In past measurements I found the extra memory use to be acceptable (around 50 MB), but it may vary over time and with metrics volume.
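To watch that overhead in practice, one option is a before/after comparison of container memory using the standard cAdvisor metric; the `container` label value here is an assumption:

```sh
# Average working-set memory of webservice containers, per environment:
curl -sG 'https://thanos-query.ops.gitlab.net/api/v1/query' \
  --data-urlencode 'query=avg by (env) (container_memory_working_set_bytes{container="webservice"})' \
  | jq '.data.result'
```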
Change Reviewer checklist
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change. If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
Change Technician checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.