2024-08-01: [CR] [gstg] Update Postgres CA certificate to the production Teleport CA
Production Change
Change Summary
Since the Postgres databases in gstg were moved to the production Teleport instance, we have not been able to access them through Teleport because of a TLS handshake failure.
Each Postgres box has the following three files:
- /var/opt/gitlab/postgresql/cacert
- /var/opt/gitlab/postgresql/server.crt
- /var/opt/gitlab/postgresql/server.key
These three files are written by the gitlab-patroni::postgres recipe. These secrets are NOT read from Vault; instead, they are retrieved from our old way of managing secrets here. The gitlab-patroni::postgres recipe is imported by the gitlab-patroni::default recipe, which is used by Chef roles.
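As a quick sanity check on what Chef last wrote, the CA currently deployed on a node can be inspected directly. This is an illustrative sketch (the path is from the list above, and the openssl invocation is just one way to fingerprint the file):

```shell
# Illustrative check, run on a gstg Postgres/patroni node: print the subject,
# expiry, and fingerprint of the CA file that the Chef recipes last wrote.
sudo openssl x509 -in /var/opt/gitlab/postgresql/cacert -noout -subject -enddate -fingerprint
```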
There is also a gitlab-server::postgresql-ca recipe that writes /var/opt/gitlab/postgresql/cacert. This recipe reads the CA from Vault and is added to the following roles:
The certificates in Vault and GCS can differ, and depending on the order in which the gitlab-server::postgresql-ca and gitlab-patroni::default recipes run on the final roles, we may end up writing the wrong CA to a patroni node.
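To see whether the two sources currently disagree, the CA from each can be fingerprinted and compared. A minimal sketch, assuming the gstg-postgres-ca and gprd-postgres-ca files produced by the steps further down:

```shell
# Hedged sketch: compare the CA extracted from the GCS-backed secrets (gstg-postgres-ca)
# with the CA stored in Vault (gprd-postgres-ca); differing fingerprints confirm the drift.
diff <(openssl x509 -in gstg-postgres-ca -noout -fingerprint) \
     <(openssl x509 -in gprd-postgres-ca -noout -fingerprint)
```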
Two things are required for Teleport to be able to access a Postgres database.
First, the certificates (gprd and gstg) used by the teleport-agent running on the Kubernetes clusters should match /var/opt/gitlab/postgresql/server.crt.
Second, the /var/opt/gitlab/postgresql/cacert certificate on each Postgres node should match the CA certificate of the Teleport instance through which we want to access the database.
For staging, this certificate should match the production Teleport certificate. Currently, this certificate can be read from two different sources and there is a discrepancy between the two. The certificate in Vault is the production one, but the certificate in GCS is the old one.
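For reference, the CA of a Teleport cluster can also be exported from Teleport itself and compared with what is in Vault. A hedged sketch, assuming tctl is pointed at the production cluster and that the installed version supports exporting the db CA type:

```shell
# Hedged sketch: export the database CA from the production Teleport cluster and
# fingerprint it; it should match the CA stored in Vault under env/gprd/shared/teleport/cert.
tctl auth export --type=db | openssl x509 -noout -fingerprint
```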
In this change request, I am updating the CA in GCS to match the one in Vault, which is the production Teleport CA. I created this CR for visibility.
Change Details
- Services Impacted - Service::Postgres, Environment::gstg
- Change Technician - @miladx
- Change Reviewer - TBD
- Time tracking - 15 minutes
- Downtime Component - none
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 15
- Set label change::in-progress /label ~change::in-progress
- Download and decrypt the old certificate:
  - Run gcloud storage cp gs://gitlab-gstg-secrets/gitlab-patroni/gstg.enc gstg-old.enc
  - Run gcloud kms decrypt --project=gitlab-staging-1 --location=global --keyring=gitlab-secrets --key=gstg --ciphertext-file="gstg-old.enc" --plaintext-file="gstg-old.json.gz"
  - Run gzip -d gstg-old.json.gz
  - Run cat gstg-old.json | jq -r '."gitlab-patroni".postgresql.ssl_ca' > gstg-postgres-ca
- Download the new CA from Vault:
  - Run glsh vault proxy
  - Run vault login -method oidc
  - Run vault kv get -mount="chef" -field="ca" "env/gprd/shared/teleport/cert" > gprd-postgres-ca
- Replace the old CA with the new one:
  - Run jq --arg ca "$(cat gprd-postgres-ca)" '."gitlab-patroni".postgresql.ssl_ca = $ca' gstg-old.json > gstg-new.json
  - Run gzip gstg-new.json
  - Run gcloud kms encrypt --project=gitlab-staging-1 --location=global --keyring=gitlab-secrets --key=gstg --plaintext-file="gstg-new.json.gz" --ciphertext-file="gstg-new.enc"
  - Run gcloud storage cp gstg-new.enc gs://gitlab-gstg-secrets/gitlab-patroni/gstg.enc
- Test connectivity to staging Postgres databases (a verification sketch follows these steps):
  - Run knife ssh -C 10 'roles:gstg-base-db-postgres' 'sudo chef-client'
  - Run tsh login --proxy=production.teleport.gitlab.net --request-roles=non-prod-database-rw --request-reason="testing"
  - Run tctl request approve "tbd"
  - Run tsh login --proxy=production.teleport.gitlab.net --request-id="..."
  - Run tsh db login --db-user=console-rw --db-name=gitlabhq_production db-main-replica-gstg
  - Run tsh db connect db-main-replica-gstg
- Set label change::complete /label ~change::complete
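After the chef-client run in the connectivity test above, a quick fleet-wide spot check can confirm that the new CA actually landed on the nodes. An illustrative sketch reusing the role and files from the steps (every node's fingerprint should match the one from Vault):

```shell
# Illustrative post-change check: fingerprint the CA taken from Vault (gprd-postgres-ca)
# and the CA deployed on every gstg Postgres node; they should all match.
openssl x509 -in gprd-postgres-ca -noout -fingerprint
knife ssh -C 10 'roles:gstg-base-db-postgres' \
  'sudo openssl x509 -in /var/opt/gitlab/postgresql/cacert -noout -fingerprint'
```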
Rollback
The two sources the CA is read from need to hold the same certificate, and the staging Postgres databases need to use the CA of the production Teleport instance, so there is really no rollback needed here.
Monitoring
N/A
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.