2024-01-22: Reindex the wikis index to resize the shard count
Production Change
Change Summary
We want to change the number of shards for the wikis index. Related GitLab issue: Update shard size and reindex (gitlab-org/gitlab#414638 - closed)
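The new shard count follows the usual Elasticsearch guidance of keeping individual shards in the tens-of-gigabytes range. A minimal sketch of that sizing arithmetic; the helper name, the 50 GB target, and the example index size are illustrative assumptions, not figures from this change:

```ruby
# Illustrative sizing helper (hypothetical; not part of the change itself).
# Elasticsearch guidance commonly targets shard sizes in the tens of
# gigabytes, so the shard count follows from the index size.
def shards_for(index_size_gb, target_shard_size_gb: 50)
  # Round up so no shard exceeds the target, and keep at least one shard.
  [(index_size_gb.to_f / target_shard_size_gb).ceil, 1].max
end

# For example, an index of roughly 230 GB lands on 5 shards at a 50 GB target.
puts shards_for(230)
```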
Change Details
- Services Impacted - Service::Elasticsearch
- Change Technician - @dgruzd
- Change Reviewer - @rkumar555
- Time tracking - 120 minutes
- Downtime Component - No downtime
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 120
- Set label change::in-progress: /label ~change::in-progress
- Add a silence via https://alerts.gitlab.net/#/silences/new with a matcher on env="gprd" and on the alert names alertname="SearchServiceElasticsearchIndexingTrafficAbsent", alertname="gitlab_search_indexing_queue_backing_up", and alertname="SidekiqServiceGlobalSearchIndexingApdexSLOViolation" -> https://alerts.gitlab.net/#/silences/ab663fa1-73bd-4409-8447-2d3d55f1d252
- Wikis: Take a screenshot of the index advanced metrics for the last 7 days and attach it to an internal comment on this issue
- Update the number of shards for the affected index: `Elastic::IndexSetting.find_by(alias_name: 'gitlab-production-wikis').update!(number_of_shards: 5)`
- Trigger the re-index and note the timestamp when it was triggered -> 2024-01-22 15:27 UTC: `Elastic::ReindexingTask.create!(targets: %w[Wiki], max_slices_running: 30, slice_multiplier: 1)`
- Monitor the status of the reindexing through the rails console: `Elastic::ReindexingTask.current`
- Ensure that it has finished successfully
- Note the time when the task finishes -> YYYY-MM-DD HH:MM UTC
- Wait until the backlog of incremental updates gets below 10,000 - Chart: Global search incremental indexing queue depth (https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1)
- Remove the alert silences
- Set label change::complete: /label ~change::complete
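As a cross-check on `Elastic::ReindexingTask.current`, the Elasticsearch `_tasks` API reports `total` plus `created`/`updated`/`deleted` counters for a running reindex, from which a progress percentage can be derived. A hedged sketch; the sample payload below is fabricated for illustration, only its field names follow the real API:

```ruby
require 'json'

# Fabricated example of the status section of an Elasticsearch _tasks
# response for a running reindex; field names follow the real API, but the
# numbers are made up.
sample = '{"task":{"status":{"total":1000,"created":380,"updated":20,"deleted":0}}}'

status = JSON.parse(sample).dig('task', 'status')

# Progress is the sum of processed documents over the total to copy.
done = status.values_at('created', 'updated', 'deleted').sum
percent = (100.0 * done / status['total']).round(1)
puts "reindex progress: #{percent}%"
```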
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 60
- If the ongoing reindex is consuming too many resources, it is possible to throttle the running reindex:
  - Check the index write throughput in Elasticsearch monitoring to determine a sensible throttle. Since reindexing defaults to no throttling at all, it is safe to set a throttle and observe the impact: `curl -XPOST "$CLUSTER_URL/_reindex/$TASK_ID/_rethrottle?requests_per_second=500"`
- If the reindexing task fails, it will automatically revert to the original index
- If the reindexing task completes but you need to roll back:
  - Pause indexing (if it's not paused already)
  - Switch the alias back to the original index
  - Ensure any updates that only went to the destination index are replayed against the source cluster by searching the logs for the updates (https://gitlab.com/gitlab-org/gitlab/-/blob/e8e2c02a6dbd486fa4214cb8183d428102dc1156/ee/app/services/elastic/process_bookkeeping_service.rb#L23) and triggering those updates again using ProcessBookkeepingService#track, as well as any updates that went through the sidekiq workers ElasticWikiIndexWorker, ElasticDeleteProjectWorker, and Search::Wiki::ElasticDeleteGroupWorker
  - Delete the incomplete indices by running:
    - `curl -XDELETE "$CLUSTER_URL/gitlab-production-wikis-20221020-2340"` (the suffix will be different)
    - `curl -XDELETE "$CLUSTER_URL/gitlab-production-users-20221020-2340"` (the suffix will be different)
    - `curl -XDELETE "$CLUSTER_URL/gitlab-production-projects-20221020-2340"` (the suffix will be different)
- Set label change::aborted: /label ~change::aborted
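The log-replay step in the rollback plan can be sketched as follows. The log lines and field names below are fabricated assumptions, not the production log schema, and the commented-out call at the end is only a rough shape based on the linked ProcessBookkeepingService:

```ruby
require 'json'

# Fabricated structured log lines standing in for bookkeeping entries; the
# field names here are assumptions, not the production log schema.
log_lines = [
  '{"message":"track_items","items":[{"type":"Wiki","id":1}]}',
  '{"message":"track_items","items":[{"type":"Wiki","id":2}]}',
  '{"message":"unrelated","items":[]}'
]

# Collect the ids of wiki updates that need to be replayed.
wiki_ids = log_lines
  .flat_map { |line| JSON.parse(line).fetch('items', []) }
  .select { |item| item['type'] == 'Wiki' }
  .map { |item| item['id'] }

puts wiki_ids.inspect

# The replay itself would then happen in a rails console, roughly
# (hypothetical call shape based on the linked service):
# Elastic::ProcessBookkeepingService.track!(*refs_for(wiki_ids))
```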
Monitoring
Key metrics to observe
- Metric: Elasticsearch cluster health
- Location: https://00a4ef3362214c44a044feaa539b4686.us-central1.gcp.cloud.es.io:9243/app/monitoring#/overview?_g=(cluster_uuid:HdF5sKvcT5WQHHyYR_EDcw)
- What changes to this metric should prompt a rollback: Unhealthy nodes/indices that do not recover
- Metric: Elasticsearch monitoring in Grafana
- Metric: Indexing queues
- Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1
- What changes to this metric should prompt a rollback: After unpausing the indexing is failing and the queues are constantly growing
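The "unhealthy nodes/indices that do not recover" rollback criterion can be sketched against an Elasticsearch `_cluster/health` response. The sample payload below is fabricated; only the field names follow the real API:

```ruby
require 'json'

# Fabricated example of an Elasticsearch _cluster/health response.
sample = '{"status":"yellow","unassigned_shards":3,"relocating_shards":2}'
health = JSON.parse(sample)

# "green" means all shards are assigned. Sustained "red" (unassigned
# primaries) or a "yellow" state that never recovers during the reindex is
# the condition that should prompt a rollback.
needs_attention = health['status'] != 'green'
puts "status=#{health['status']} needs_attention=#{needs_attention}"
```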
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.