2024-05-21: [CR] [gstg] Rotate Certificate Authority for GKE/Kubernetes Cluster
Production Change
Change Summary
The cluster Certificate Authority is going to expire on 2024-06-24. You must rotate the cluster credentials before their expiry to prevent cluster outage. If no action is taken, Google will attempt to rotate the cluster credentials within 30 days of expiry as a last resort to keep the cluster operational. If you have already started the rotation process, please complete the rotation.
The existing Kubernetes CA in `gstg-gitlab-gke` is going to expire on Jun 24, 2024.
This requires manual intervention to ensure we rotate the CA without any impact or outage to the workloads.
Timing is important here: GKE automatically starts a CA rotation 30 days before the CA expires.
You can read more about the CA rotation process in the official GKE documentation.
Change Details
- Services Impacted - ~Service::GCP
- Change Technician - @miladx
- Change Reviewer - @jcstephenson @pguinoiseau
- Time tracking - 120 minutes
- Downtime Component - GKE Cluster API Server
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
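For reference, a minimal sketch of toggling maintenance mode through the GitLab application settings API; the host and admin token are placeholders, and the runbooks procedure is authoritative:

```shell
# Hypothetical GITLAB_HOST/ADMIN_TOKEN values; see the runbooks for the supported procedure.
curl --request PUT \
  --header "PRIVATE-TOKEN: ${ADMIN_TOKEN}" \
  "https://${GITLAB_HOST}/api/v4/application/settings?maintenance_mode=true&maintenance_mode_message=Scheduled%20maintenance"

# Unset it again once the change window closes:
curl --request PUT \
  --header "PRIVATE-TOKEN: ${ADMIN_TOKEN}" \
  "https://${GITLAB_HOST}/api/v4/application/settings?maintenance_mode=false"
```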
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 120
- Set label ~change::in-progress: `/label ~change::in-progress`
- Check the CA lifetime:

  ```shell
  gcloud container clusters describe gstg-gitlab-gke --project=gitlab-staging-1 --region=us-east1 --format="value(masterAuth.clusterCaCertificate)" | base64 --decode | openssl x509 -noout -dates
  ```
- Start the rotation. Note: this command causes brief downtime for the cluster API server.

  ```shell
  gcloud container clusters update gstg-gitlab-gke --project=gitlab-staging-1 --region=us-east1 --start-credential-rotation
  ```
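  A sketch for watching progress, assuming the standard `status` field in the `gcloud` describe output (the 30-second interval is arbitrary):

  ```shell
  # The cluster reports RECONCILING while the control plane is recreated;
  # poll until it returns to RUNNING before moving on.
  watch -n 30 'gcloud container clusters describe gstg-gitlab-gke \
    --project=gitlab-staging-1 --region=us-east1 --format="value(status)"'
  ```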
- Verify deployment pipelines are working.
- Recreate the nodes. Make sure the version is the same GKE version the cluster already uses (a lookup sketch follows the commands):

  ```shell
  gcloud container clusters upgrade gstg-gitlab-gke --project=gitlab-staging-1 --location=us-east1 --cluster-version="1.27.11-gke.1062001" --node-pool="generic-1"
  gcloud container clusters upgrade gstg-gitlab-gke --project=gitlab-staging-1 --location=us-east1 --cluster-version="1.27.11-gke.1062001" --node-pool="generic-2"
  gcloud container clusters upgrade gstg-gitlab-gke --project=gitlab-staging-1 --location=us-east1 --cluster-version="1.27.11-gke.1062001" --node-pool="generic-mem-1"
  gcloud container clusters upgrade gstg-gitlab-gke --project=gitlab-staging-1 --location=us-east1 --cluster-version="1.27.11-gke.1062001" --node-pool="generic-spot-1"
  gcloud container clusters upgrade gstg-gitlab-gke --project=gitlab-staging-1 --location=us-east1 --cluster-version="1.27.11-gke.1062001" --node-pool="redis-registry-cache-0"
  ```
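  To avoid hard-coding the version, a sketch that reads the cluster's current version and loops over the pools listed above (pool names are taken from this step; verify them with `gcloud container node-pools list`):

  ```shell
  # Look up the version the control plane is already running...
  VERSION="$(gcloud container clusters describe gstg-gitlab-gke \
    --project=gitlab-staging-1 --location=us-east1 \
    --format="value(currentMasterVersion)")"

  # ...and recreate each node pool with it. Each upgrade prompts for confirmation.
  for POOL in generic-1 generic-2 generic-mem-1 generic-spot-1 redis-registry-cache-0; do
    gcloud container clusters upgrade gstg-gitlab-gke \
      --project=gitlab-staging-1 --location=us-east1 \
      --cluster-version="${VERSION}" --node-pool="${POOL}"
  done
  ```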
- Run a new pipeline with `ENV=gstg`: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/pipelines/3288267
  - Ensure the Terraform changes are applied.
  - Verify the Vault secret is updated with the new API server endpoint and credentials here.
- Recreate the `vault-k8s-secrets-token` secret (dropping `.data` and replacing the object makes Kubernetes re-issue the ServiceAccount token against the new CA):
  - Run `glsh kube use-cluster gstg`
  - Run `kubectl get secret vault-k8s-secrets-token --namespace=vault-k8s-secrets --output=json | jq 'del(.data)' | kubectl replace --namespace=vault-k8s-secrets --filename -`
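  A quick sketch to confirm the token controller repopulated the secret (prints only the first characters of the new token):

  ```shell
  kubectl get secret vault-k8s-secrets-token --namespace=vault-k8s-secrets \
    --output jsonpath='{.data.token}' | base64 --decode | cut -c1-20; echo
  ```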
- Update the ServiceAccount JWT token in Vault:
  - Run `glsh kube use-cluster gstg`
  - Run `JWT_TOKEN="$(kubectl get secret vault-k8s-secrets-token --namespace vault-k8s-secrets --output jsonpath='{.data.token}' | base64 --decode)"`
  - Run `glsh vault proxy`
  - Run `vault login -method oidc`
  - Run `vault kv put ci/ops-gitlab-net/gitlab-com/gl-infra/config-mgmt/vault-production/kubernetes/clusters/gstg/gstg-gitlab-gke service_account_jwt="$JWT_TOKEN"`
  - Verify the Vault secret is updated with the new service token here.
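  If you prefer the CLI to the UI for that check, a sketch using the same path as the `vault kv put` above (prints only the first characters):

  ```shell
  vault kv get -field=service_account_jwt \
    ci/ops-gitlab-net/gitlab-com/gl-infra/config-mgmt/vault-production/kubernetes/clusters/gstg/gstg-gitlab-gke \
    | cut -c1-20; echo
  ```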
- Run a new pipeline with `ENV=vault-production`: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/pipelines/3288498
  - Ensure the Terraform changes are applied.
- Once again, verify deployment pipelines are working.
- Complete the rotation. Note: this command might cause a brief downtime for the cluster's API server.

  ```shell
  gcloud container clusters update gstg-gitlab-gke --project=gitlab-staging-1 --region=us-east1 --complete-credential-rotation
  ```
- Check the CA lifetime once again and verify it has been renewed:

  ```shell
  gcloud container clusters describe gstg-gitlab-gke --project=gitlab-staging-1 --region=us-east1 --format="value(masterAuth.clusterCaCertificate)" | base64 --decode | openssl x509 -noout -dates
  ```
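  A sketch that asserts the new expiry instead of eyeballing the dates (`openssl x509 -checkend` exits non-zero if the certificate expires within the given window; the 90-day window is an arbitrary assumption):

  ```shell
  gcloud container clusters describe gstg-gitlab-gke --project=gitlab-staging-1 --region=us-east1 \
    --format="value(masterAuth.clusterCaCertificate)" | base64 --decode \
    | openssl x509 -noout -checkend $((90 * 24 * 3600)) \
    && echo "CA valid for at least 90 more days"
  ```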
- Update client credentials:
  - Run `glsh kube setup` from the `runbooks` repo.
  - Run `glsh kube use-cluster gstg`.
  - Run `kubectl get nodes`.
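  A sketch to confirm the refreshed kubeconfig points at the new control plane (the two outputs should show the same address; the kubeconfig form carries an `https://` prefix):

  ```shell
  gcloud container clusters describe gstg-gitlab-gke \
    --project=gitlab-staging-1 --region=us-east1 --format="value(endpoint)"
  kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}{"\n"}'
  ```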
- Finally, verify deployment pipelines are working.
- Post a message in the #infrastructure-lounge channel and ask people to do the same:

  > Hello infra teammates, we have successfully rotated the CA certificate for our regional Kubernetes cluster (`gstg-gitlab-gke`). This operation creates a new control plane with the new address and credentials. Now, it is time to update your Kubernetes API clients by running the `glsh kube setup` command from the `runbooks` repo.
- Set label ~change::complete: `/label ~change::complete`
Rollback
Once we start to recreate the control plane, there is a chance of failure, and if that happens, we CANNOT abort or roll back.
- If the control plane creation fails, escalate to Google Cloud Support, since the cluster cannot be reverted to the previous control plane.
Monitoring
- Check the Google Cloud GKE console and ensure there is no red banner saying "Rotate cluster credentials ...".
- Alternatively, run the following command and ensure the previously active insight is no longer active:

  ```shell
  gcloud recommender insights describe e64c2cc8-9a5b-456f-bdc3-33cd1084eb5d \
    --insight-type=google.container.DiagnosisInsight \
    --project=gitlab-staging-1 \
    --location=us-east1 \
    --format=json | jq
  ```
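  To check just the state rather than reading the full JSON, a sketch (field name per the Recommender API output; expect a value other than `ACTIVE` once the rotation is complete):

  ```shell
  gcloud recommender insights describe e64c2cc8-9a5b-456f-bdc3-33cd1084eb5d \
    --insight-type=google.container.DiagnosisInsight \
    --project=gitlab-staging-1 \
    --location=us-east1 \
    --format="value(stateInfo.state)"
  ```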
Change Reviewer checklist
Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
- The labels `blocks deployments` and/or `blocks feature-flags` are applied as necessary.
Change Technician checklist
Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the `eoc_approved` label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the `manager_approved` label on the issue.
- Release managers have been informed prior to any C1, C2, or `blocks deployments` change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity::1 or severity::2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.