# 2023-10-30: Roll out experimental rule changes to thanos-staging

Production Change

## Change Summary
We are rolling out experimental changes to the thanos-staging environment to begin testing the performance of the new Prometheus agent remote write. We have now created an entirely separate staging environment, which avoids the performance issues we had to address last time.
*Diagram (simplified): the components of staging, and what is shared with production.*
- The Prometheus agents sending remote-write data run out of band from our existing production instances; they are not touched by this change.
- The receivers are currently utilised only by staging.
- The only shared components are the storegateways and their respective memcached instances.
The ruler evaluation flow:

- Experimental rules are deployed as `PrometheusRule` CRD objects in Kubernetes.
- The staging rulers load in these CRDs using the `prometheus-reloader` sidecar.
- The rules then run, querying against the staging query endpoints.
- From here, requests fan out to the Thanos receivers and their respective storegateways.

The receivers are not currently used in production. We share the storegateways, but query overhead should be limited as these should be caching most data.
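The rule objects themselves are standard prometheus-operator CRDs. As a hypothetical illustration only (the names, group, and recording rule below are placeholders, not taken from the actual MR), an experimental rule deployed this way might look like:

```yaml
# Hypothetical example; names and the recording rule are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: experimental-example-rules
  namespace: thanos-staging
spec:
  groups:
    - name: experimental-example
      rules:
        - record: instance:up:count
          expr: count(up) by (instance)
```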
## Change Details
- Services Impacted - Thanos-staging
- Change Technician - @nduff
- Change Reviewer - @stejacks-gitlab
- Time tracking - 24 hours
- Downtime Component - none
## Set Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
## Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes

- [ ] Set label ~change::in-progress with `/label ~change::in-progress`
- [ ] Merge gitlab-com/runbooks!6497 (merged)
- [ ] Monitor for 24 hours, following the sun (@nduff @hmerscher @stejacks-gitlab)
- [ ] Set label ~change::complete with `/label ~change::complete`
## Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes

- [ ] Revert gitlab-com/runbooks!6497 (merged) with gitlab-com/runbooks!6519 (merged)
- [ ] The runbooks MR can take some time to process; if necessary, run the following for immediate removal of the rules:
```shell
# connect to ops cluster
glsh kube use-cluster ops

# delete experimental rules
for i in $(kubectl -n thanos-staging get prometheusrule --no-headers \
    -o custom-columns=":metadata.name" | grep experimental); do
  kubectl -n thanos-staging delete prometheusrule "$i"
done
```
- [ ] Set label ~change::aborted with `/label ~change::aborted`
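After the deletion loop, it may be worth confirming that no experimental rules remain before closing out the rollback. A minimal sketch, with a stubbed lister standing in for the real kubectl call (shown in the comment) so the check itself is visible:

```shell
#!/bin/bash
# Sketch: verify no experimental PrometheusRule objects remain after rollback.
list_rule_names() {
  # Real command (requires ops cluster access):
  #   kubectl -n thanos-staging get prometheusrule --no-headers \
  #     -o custom-columns=":metadata.name"
  printf '%s\n' gitlab-base-rules thanos-receive-rules  # stub output
}

remaining=$(list_rule_names | grep -c experimental)
if [ "$remaining" -eq 0 ]; then
  echo "rollback clean: no experimental PrometheusRule objects remain"
else
  echo "WARNING: $remaining experimental PrometheusRule objects still present"
fi
```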
## Monitoring

### Key metrics to observe

For all metrics, it's advisable to compare the current state (last 1 hour) with a picture over a few days. Thanos workloads can change drastically based on ad-hoc queries; what may appear as a problem initially could be a single bad query.
- Metric: Storegateway Saturation
  - Location: https://dashboards.gitlab.net/goto/YN92kG4Sg?orgId=1
  - What changes to this metric should prompt a rollback:
    - Memory saturation hitting the limits.
    - A large increase in P99 `get_range` latency on bucket operations, but only if it persists for a long period; short-lived increases can be expected.
- Metric: Storegateway Memcached Saturation
  - Location: https://dashboards.gitlab.net/goto/lkILzG4IR?orgId=1
  - What changes to this metric should prompt a rollback:
    - A high rate of evictions, as memcached cannot cache the new data it needs.
- Metric: Unexpected Prometheus Load
  - Location: thanos
  - What changes to this metric should prompt a rollback:
    - The important sidecars that would be queried by this change have been removed from the staging Thanos, in favour of querying the receivers directly. However, we should check for any unexpected change in CPU usage on our Prometheus instances.
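That CPU spot check can be sketched as a simple baseline comparison. The values below are stubs (in practice they would come from the dashboards or the Thanos query API), and the 5% threshold is an assumption for illustration, not policy:

```shell
#!/bin/bash
# Sketch: compare a pre-change prometheus CPU baseline against the current
# reading and flag regressions. Both values are stubbed for illustration.
cpu_baseline=0.42   # avg prometheus CPU cores before the change (stub)
cpu_current=0.43    # same query, after the change (stub)

# Flag anything more than ~5% above the baseline for investigation.
verdict=$(awk -v a="$cpu_baseline" -v b="$cpu_current" \
  'BEGIN { if (b > a * 1.05) print "investigate"; else print "within baseline" }')
echo "$verdict"
```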
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.