2021-12-24: Add deployment labels to fluentd-archiver

Production Change

Change Summary

Part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/13991

The StatefulSet for fluentd-archiver is missing the deployment labels required by the Thanos rules that provide the metrics used in the logging dashboard and Prometheus alerts, meaning the service is currently effectively unmonitored.

Unfortunately adding labels to the pods in a StatefulSet requires deleting and recreating it, as this is not allowed in an update operation (see https://github.com/kubernetes/kubernetes/issues/90519):

❯ tk diff environments/fluentd-archiver --name fluentd-archiver/pre
The StatefulSet "fluentd-archiver" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
Error diffing: exit status 2

This is acceptable for this particular service, as it will not affect any other services or users. A small backlog of undelivered messages will accumulate in the PubSub subscriptions for fluentd-archiver until the service comes back up and picks them up, and the log archives will temporarily be late by a few minutes.

Change Details

  1. Services Impacted - ServiceLogging
  2. Change Technician - @pguinoiseau
  3. Change Reviewer - @skarbek
  4. Time tracking - 45 minutes
  5. Downtime Component - fluentd-archiver pods

Detailed steps for the change

Pre-Change Steps - steps to be completed before execution of the change

Estimated Time to Complete (mins) - 5 minutes

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 10 minutes

  • Delete the StatefulSet in all environments (the PVCs are not deleted with it, so no buffer data is lost):
    kubectl --context pre-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context ops-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context gstg-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context gprd-gitlab-gke --namespace logging delete sts fluentd-archiver
  • Merge gitlab-com/gl-infra/k8s-workloads/tanka-deployments!269 (merged)
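The four per-environment deletions above can be scripted as a loop. A minimal sketch, using the context and namespace names from the steps above; by default it only prints the commands (set DRY_RUN=false to actually execute them):

```shell
#!/bin/sh
# Delete the fluentd-archiver StatefulSet in each environment.
# DRY_RUN=true (the default) prints each command instead of running it.
DRY_RUN="${DRY_RUN:-true}"
for ctx in pre-gitlab-gke ops-gitlab-gke gstg-gitlab-gke gprd-gitlab-gke; do
  cmd="kubectl --context ${ctx} --namespace logging delete sts fluentd-archiver"
  if [ "${DRY_RUN}" = "true" ]; then
    echo "${cmd}"
  else
    ${cmd}
  fi
done
```

Running the dry-run first makes it easy to eyeball the exact commands before pointing them at production.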

Post-Change Steps - steps to take to verify the change

Estimated Time to Complete (mins) - 30 minutes

  • Check that the pods are starting properly (it will take a while on gprd...):
    kubectl --context pre-gitlab-gke --namespace logging get pods --selector app.kubernetes.io/name=fluentd-archiver --watch
    kubectl --context ops-gitlab-gke --namespace logging get pods --selector app.kubernetes.io/name=fluentd-archiver --watch
    kubectl --context gstg-gitlab-gke --namespace logging get pods --selector app.kubernetes.io/name=fluentd-archiver --watch
    kubectl --context gprd-gitlab-gke --namespace logging get pods --selector app.kubernetes.io/name=fluentd-archiver --watch
  • Verify that the new metrics are showing up in Thanos for all environments:
    count(container_cpu_usage_seconds_total:labeled{deployment="fluentd-archiver"}) by (env)
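If the Thanos query comes back empty for an environment, it can help to first confirm that the new labels actually landed on the pod template of the recreated StatefulSet. A sketch, assuming the MR adds the label to `.spec.template.metadata.labels` (the exact label keys come from the MR, not this plan); by default it only prints the commands:

```shell
#!/bin/sh
# Print the pod-template labels of the recreated StatefulSet per environment.
# DRY_RUN=true (the default) prints the kubectl commands instead of running them.
DRY_RUN="${DRY_RUN:-true}"
jsonpath='{.spec.template.metadata.labels}'
for ctx in pre-gitlab-gke ops-gitlab-gke gstg-gitlab-gke gprd-gitlab-gke; do
  if [ "${DRY_RUN}" = "true" ]; then
    echo "kubectl --context ${ctx} --namespace logging get sts fluentd-archiver -o jsonpath=${jsonpath}"
  else
    kubectl --context "${ctx}" --namespace logging get sts fluentd-archiver -o "jsonpath=${jsonpath}"
  fi
done
```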

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 10 minutes

  • Open an MR reverting gitlab-com/gl-infra/k8s-workloads/tanka-deployments!269 (merged) and get it approved
  • Delete the StatefulSet in all environments:
    kubectl --context pre-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context ops-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context gstg-gitlab-gke --namespace logging delete sts fluentd-archiver
    kubectl --context gprd-gitlab-gke --namespace logging delete sts fluentd-archiver
  • Merge the revert MR

Monitoring

Key metrics to observe

There are no metrics for this service yet (that is precisely what this change addresses), so there is no dashboard to monitor. Pods failing to start will prompt a rollback.

Summary of infrastructure changes

  • Does this change introduce new compute instances?
  • Does this change re-size any existing compute instances?
  • Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?

Summary of the above

Change Reviewer checklist

C4 C3 C2 C1:

  • The scheduled day and time of execution of the change is appropriate.
  • The change plan is technically accurate.
  • The change plan includes estimated timing values based on previous testing.
  • The change plan includes a viable rollback plan.
  • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
  • The change plan includes success measures for all steps/milestones during the execution.
  • The change adequately minimizes risk within the environment/service.
  • The performance implications of executing the change are well-understood and documented.
  • The specified metrics/monitoring dashboards provide sufficient visibility for the change. - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  • The change has a primary and secondary SRE with knowledge of the details available during the change window.

Change Technician checklist

  • This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
  • This issue has the change technician as the assignee.
  • Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
  • This Change Issue is linked to the appropriate Issue and/or Epic
  • Necessary approvals have been completed based on the Change Management Workflow.
  • Change has been tested in staging and results noted in a comment on this issue.
  • A dry-run has been conducted and results noted in a comment on this issue.
  • SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
  • There are currently no active incidents.