[Minor Version] Upgrade Global Search Elasticsearch cluster `gprd-indexing-20220523` to `8.5.3`
Production Change
Change Summary
Related to gitlab-org/search-team/team-tasks#109 (closed)
The Global Search Elasticsearch cluster `gprd-indexing-20220523` will be upgraded to Elasticsearch version `8.5.3`. The staging cluster `gstg-indexing-20220519` will be upgraded first to verify the process.
Note: this is a minor version upgrade, so we will follow the minor version (rolling upgrade) steps outlined below.
Change Details
- Services Impacted - ~"Service::Search"
- Change Technician - @terrichu @dgruzd
- Change Reviewer - @john-mason
- Time tracking - 120 minutes (changes) + 360 minutes (rollback)
- Downtime Component - No downtime required for minor (rolling upgrade) or major (blue/green deployment) version upgrades. However, indexing will be paused during major upgrades to prevent data loss. Pausing indexing means that updates made to database data will not be immediately available to the search service. Once the upgrade is complete, indexing will be unpaused.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 4 hours
Phase 0: Pre-flight
- [ ] Add label ~C2 for major version upgrades or ~C3 for minor version upgrades
- [ ] Ensure the monitoring cluster is compatible with the new version, per the Elastic upgrade instructions: https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html
- [ ] Verify that there are no errors in the Staging or Production cluster and that both are healthy (a console sketch for the health checks follows this list)
- [ ] Verify that there are no alerts firing for the Advanced Search feature, Elasticsearch, Sidekiq workers, or Redis
- [ ] Set label ~"change::in-progress" with /label ~change::in-progress
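Both health checks can be run from a Rails console on the respective environment. A minimal sketch, assuming the `Gitlab::Elastic::Helper` console helper and the elasticsearch-ruby client it wraps:

```ruby
# Pre-flight sketch from a Rails console (gstg first, then gprd).
# Assumption: Gitlab::Elastic::Helper.default.client returns the
# elasticsearch-ruby client for the Advanced Search cluster.
client = Gitlab::Elastic::Helper.default.client

health = client.cluster.health
raise "cluster is #{health['status']}, expected green" unless health['status'] == 'green'

# Record the version we are upgrading from, for the change notes.
puts client.info.dig('version', 'number')
```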
Phase 1: Staging Upgrade
- [ ] In the Elastic Cloud UI, create a snapshot - `cloud-snapshot-2023.01.05-xz-egtu6rbwyhvtmhvccqw`
Major Version Upgrades
- [ ] In the Elastic Cloud UI, create a snapshot
- [ ] Create a new version deployment called `staging-gitlab-com indexing-<CURRENT_DATE>`
  - [ ] Tick the box `Restore snapshot data`
  - [ ] Select the staging deployment in the `Restore from` dropdown
  - [ ] Select the new version in the `Version` dropdown
  - [ ] Ensure there is enough capacity for staging, at least `120 GB storage | 4 GB RAM | Up to 2.5 vCPU` across two zones
- [ ] Store the username and password of the new cluster in the 1Password vault `Infra Service - Elasticsearch`
- [ ] Make a local copy of the current Advanced Search settings
  - [ ] Endpoint, username, password
- [ ] Ensure the new cluster is added to the Monitoring cluster
- [ ] Change Advanced Search settings (see the console sketch after this list)
  - [ ] `elasticsearch_url` endpoint to the new staging endpoint
  - [ ] `elasticsearch_user` to the new staging user
  - [ ] `elasticsearch_password` to the new staging password
  - [ ] Save changes
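If the staging settings are changed from the console rather than the Admin UI, the same `ApplicationSetting` call used in the production steps below applies. A hedged sketch, where `STAGING_URL`, `STAGING_USER`, and `STAGING_PASSWORD` are placeholders for the values stored in the 1Password vault:

```ruby
settings = ApplicationSetting.current

# Keep a local copy of the current values before changing anything.
puts settings.elasticsearch_url.inspect
puts settings.elasticsearch_username

# STAGING_* are placeholders, not real settings.
settings.update!(
  elasticsearch_url: STAGING_URL,
  elasticsearch_username: STAGING_USER,
  elasticsearch_password: STAGING_PASSWORD
)
```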
Minor version upgrades
- [ ] In the Elastic Cloud UI, click Upgrade for the deployment
- [ ] Select the version, then click the Upgrade button
- [ ] In the Elastic Cloud UI, confirm the upgrade is complete
- [ ] Test read code paths (a console sketch for both checks follows this list)
  - Code
  - Notes
- [ ] Test write code paths
  - Code
  - Notes
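No scripted test is attached to this issue, so the Code/Notes fields are filled in during execution. A hedged sketch of what the read and write checks might look like from the console (the user, query, and issue picked here are purely illustrative):

```ruby
# Read path: run an Advanced Search query and confirm results come back.
user = User.find_by!(username: 'an-admin-account') # illustrative
results = SearchService.new(user, search: 'ssl', scope: 'issues').search_objects
puts "read OK: #{results.count} results"

# Write path: touch a record, then confirm it becomes searchable once the
# indexing queues drain.
issue = Issue.last # illustrative
issue.update!(description: "#{issue.description}\nsearch-upgrade-check #{Time.now.to_i}")
```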
Major version upgrades
- [ ] Test read code paths
  - [ ] Code
  - [ ] Notes
- [ ] Resume indexing and wait for the queue to drain
- [ ] Wait until the Sidekiq Queues (Global Search) have caught up
- [ ] Test write code paths
  - [ ] Code
  - [ ] Notes
Major version upgrades
- [ ] Schedule a future change to delete unused deployments after a week
Phase 2: Production Upgrade
- [ ] In the Elastic Cloud UI, create a snapshot - `cloud-snapshot-2023.01.05-rjfwavz0tv-vbxu_net0da`
Major Version Upgrades
- [ ] Add a silence via https://alerts.gitlab.net/#/silences/new with a matcher on each of the following alert names (link the comment field in each silence back to this Change Request issue URL)
  - [ ] `alertname="SearchServiceElasticsearchIndexingTrafficAbsent"` ->
  - [ ] `alertname="gitlab_search_indexing_queue_backing_up"` ->
  - [ ] `alertname="SidekiqServiceGlobalSearchIndexingApdexSLOViolation"` ->
  - [ ] `alertname="SidekiqServiceGlobalSearchIndexingTrafficCessation"` ->
- [ ] Pause indexing in the Advanced Search Admin UI or through the console: `::Gitlab::CurrentSettings.update!(elasticsearch_pause_indexing: true)`
- [ ] Wait 2 minutes for queues in Redis to drain and for in-flight jobs to finish (see the queue-check sketch after this list)
- [ ] Verify that the Elasticsearch queue increases in the graph
- [ ] Create a new version deployment called `prod-gitlab-com indexing-<CURRENT_DATE>`
  - [ ] Tick the box `Restore snapshot data`
  - [ ] Select the production deployment in the `Restore from` dropdown
  - [ ] Select `{{ .version }}` in the `Version` dropdown
  - [ ] Ensure there is enough capacity for production, at least `13.13 TB storage | 448 GB RAM | 69 vCPU` across two zones
- [ ] Store the username and password of the new cluster in the 1Password vault `Infra Service - Elasticsearch`
- [ ] Change the Elastic password offline, via Zoom
- [ ] Make a local copy of the current Advanced Search settings
  - [ ] Endpoint, username, password
- [ ] Change Advanced Search settings: `ApplicationSetting.current.update(elasticsearch_url: ELASTIC_URL, elasticsearch_username: ELASTIC_USER, elasticsearch_password: ELASTIC_PASSWORD)`
  - [ ] `elasticsearch_url` endpoint to the production endpoint
  - [ ] `elasticsearch_user` to the production user
  - [ ] `elasticsearch_password` to the production password
  - [ ] Save changes
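The two wait steps above can be checked from the console as well as the dashboards. A sketch using the standard Sidekiq API; the queue name is illustrative, since the actual Global Search queue names depend on the routing rules in place:

```ruby
require 'sidekiq/api'

# Illustrative queue name; see the Sidekiq dashboards for the actual
# Global Search queues on the elasticsearch shard.
queue = Sidekiq::Queue.new('elastic_commit_indexer')
puts "enqueued: #{queue.size}, oldest job latency: #{queue.latency.round}s"

# In-flight jobs across the deployment.
puts "busy workers: #{Sidekiq::Workers.new.size}"
```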
Minor version upgrades
- [ ] In the Elastic Cloud UI, click Upgrade for the deployment
- [ ] Select the version, then click the Upgrade button
- [ ] In the Elastic Cloud UI, confirm the upgrade is complete (a console version check follows this list)
- [ ] Test read code paths
  - Code
  - Notes
- [ ] Test write code paths
  - Code
  - Notes
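The Elastic Cloud UI check can be complemented from the application side. A sketch using the same console client as in the pre-flight phase; after the rolling upgrade, every node should report the target version:

```ruby
client = Gitlab::Elastic::Helper.default.client
versions = client.nodes.info['nodes'].map { |_id, node| node['version'] }.uniq
raise "mixed versions: #{versions.inspect}" unless versions == ['8.5.3']
```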
Major version upgrades
- [ ] Test read code paths
  - [ ] Code
  - [ ] Notes
- [ ] Resume indexing in the Advanced Search Admin UI or the Rails console: `Gitlab::CurrentSettings.update!(elasticsearch_pause_indexing: false)`
- [ ] Wait until the Sidekiq Queues (Global Search) have caught up
- [ ] Test write code paths
  - [ ] Code
  - [ ] Notes
- [ ] Schedule a future change to delete unused deployments after a week
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 30-120 minutes (depending on how long indexing was paused)
- [ ] Change the Advanced Search settings back to the original cluster (see the console sketch after this list)
- [ ] Resume indexing
- [ ] Set label ~"change::aborted" with /label ~change::aborted
- [ ] Delete unused deployments (or schedule a future change to delete them if needed for analysis)
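The first two rollback steps reduce to the same console calls used above. A sketch, where `ORIGINAL_URL`, `ORIGINAL_USER`, and `ORIGINAL_PASSWORD` are placeholders for the values copied during the "make a local copy of the current settings" step:

```ruby
# ORIGINAL_* are placeholders for the settings saved before the change.
ApplicationSetting.current.update!(
  elasticsearch_url: ORIGINAL_URL,
  elasticsearch_username: ORIGINAL_USER,
  elasticsearch_password: ORIGINAL_PASSWORD
)

# Resume indexing (only needed if it was paused for a major upgrade).
::Gitlab::CurrentSettings.update!(elasticsearch_pause_indexing: false)
```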
Monitoring
Key metrics to observe
Sidekiq
- Metric: Search sidekiq indexing queues (Sidekiq Queues (Global Search))
- Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1
- What changes to this metric should prompt a rollback: Queues not draining
- Metric: Search sidekiq in flight jobs
- Location: https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail?orgId=1&from=now-30m&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-shard=elasticsearch
- What changes to this metric should prompt a rollback: No jobs in flight
- Metric: sidekiq shard (elasticsearch) detail
- Location: https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-shard=catchall&var-shard=elasticsearch
- What changes to this metric should prompt a rollback: After unpausing indexing: Elevated queue length that does not resolve
Gitaly
- Metric: gitaly overview dashboard
- Location: https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1
- What changes to this metric should prompt a rollback: After unpausing indexing: degradation in the Gitaly service. We may want to pause indexing again to let Gitaly catch up, rather than fully roll back the change.
PostgreSQL
- Metric: PostgreSQL overview dashboard
- Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1
- What changes to this metric should prompt a rollback: After unpausing indexing: degradation in the PostgreSQL service. We may want to pause indexing again to let PostgreSQL catch up, rather than fully roll back the change.
Performance
- Metric: Search overview metrics
- Location: https://dashboards.gitlab.net/d/search-main/search-overview?orgId=1
- What changes to this metric should prompt a rollback: Flatline of RPS
- Metric: Search controller performance
- Location: https://dashboards.gitlab.net/d/web-rails-controller/web-rails-controller?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-controller=SearchController&var-action=show
- What changes to this metric should prompt a rollback: Massive spike in latency
Elastic Cloud
- Metric: Elastic Cloud outages
- Location: https://status.elastic.co/#past-incidents
- What changes to this metric should prompt a rollback: Incidents which prevent upgrade of the cluster
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.