Increase thread pool for searches in production Global Search Elasticsearch cluster
Production Change
Change Summary
Related to performance issues we've been seeing, I think increasing the search thread pool size might help: gitlab-org/gitlab#292439 (comment 499918783). Searches are slow, searches queue up even with very little load, and cluster utilization is quite low, so it seems reasonable to increase the thread pool size to get better utilization and faster searches.
Increase the `search` thread pool setting of our production Global Search Elasticsearch cluster `prod-gitlab-com indexing-20200330`.
Change Details
- Services Impacted - Elasticsearch
- Change Technician - DRI for the execution of this change
- Change Criticality - C3
- Change Type - changeunscheduled, changescheduled
- Change Reviewer - @dgruzd
- Due Date - Date and time (in UTC) for the execution of the change
- Time tracking - Time, in minutes, needed to execute all change steps, including rollback
- Downtime Component - If there is a need for downtime, include downtime estimate here
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 2
- Run all below steps on staging
- Set the `CLUSTER_URL` environment variable based on the URL in GitLab.com > Admin > Settings > Advanced Search (see the sketch after this list)
- Confirm the current thread pool size is 20:

  ```shell
  curl "$CLUSTER_URL/_cluster/settings?include_defaults=true" | jq .defaults.thread_pool.search.size
  ```
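For reference, a minimal sketch of the pre-change check; the hostname below is purely illustrative (substitute the real URL from the admin page), and the expected output assumes the current default of 20:

```shell
# Illustrative only: substitute the real URL shown in
# GitLab.com > Admin > Settings > Advanced Search.
export CLUSTER_URL="https://staging-search-cluster.example.es.io:9243"

# Read the effective search thread pool size; expected output: 20
curl -s "$CLUSTER_URL/_cluster/settings?include_defaults=true" \
  | jq .defaults.thread_pool.search.size
```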
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 5
- Log in to Elastic Cloud
- Edit the deployment for the `prod-gitlab-com indexing-20200330` cluster
- Add the following configuration (an optional health check follows this list):

  ```yaml
  thread_pool:
    search:
      size: 50
  ```
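Because this is applied through the deployment's Elasticsearch settings, nodes may restart while the change rolls out. An optional health check (standard Elasticsearch API, not part of the original plan) can confirm the cluster settles back to green before proceeding:

```shell
# Optional: confirm cluster status returns to "green" and no shards
# are left relocating/initializing/unassigned after the change applies.
curl -s "$CLUSTER_URL/_cluster/health" \
  | jq '{status, relocating_shards, initializing_shards, unassigned_shards}'
```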
Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 1
- Confirm the current thread pool size is 50 (a per-node check follows this list):

  ```shell
  curl "$CLUSTER_URL/_cluster/settings?include_defaults=true" | jq .defaults.thread_pool.search.size
  ```
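As a complementary per-node check (my addition, not in the original plan), `_cat/thread_pool` shows the size each node is actually running with, plus live queue and rejection counts:

```shell
# One row per node for the search pool; size should read 50 everywhere.
curl -s "$CLUSTER_URL/_cat/thread_pool/search?v&h=node_name,name,size,active,queue,rejected"
```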
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 5
- Log in to Elastic Cloud
- Edit the deployment for the `prod-gitlab-com indexing-20200330` cluster
- Remove the following configuration:

  ```yaml
  thread_pool:
    search:
      size: 50
  ```

- Confirm the current thread pool size is 20 (a per-node verification sketch follows this list):

  ```shell
  curl "$CLUSTER_URL/_cluster/settings?include_defaults=true" | jq .defaults.thread_pool.search.size
  ```
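If the rollback is executed, a quick per-node check (a sketch, assuming the node info API exposes the configured pool size) can confirm every node has reverted to the default:

```shell
# After removing the override, each node should report the default size (20).
curl -s "$CLUSTER_URL/_nodes/thread_pool?filter_path=**.thread_pool.search.size" \
  | jq '.nodes | map_values(.thread_pool.search.size)'
```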
Monitoring
Key metrics to observe
- Metric: CPU/Memory metrics for the nodes
  - Location: https://00a4ef3362214c44a044feaa539b4686.us-central1.gcp.cloud.es.io:9243/app/monitoring#/elasticsearch/nodes?_g=(cluster_uuid:HdF5sKvcT5WQHHyYR_EDcw)
  - What changes to this metric should prompt a rollback: Consistently high CPU across nodes, indicating that the thread pool count is too high. Alternatively, garbage collection rates or memory utilization climbing too high.
- Metric: Grafana Elasticsearch overview
  - Location: https://dashboards.gitlab.net/d/search-main/search-overview?orgId=1
  - What changes to this metric should prompt a rollback: Saturation metrics getting too close to 100% (see also the rejection-count sketch below)
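In addition to the dashboards, search thread pool rejections are a direct signal that the pool is saturated or misconfigured. A rough sketch of pulling the cumulative rejection counts (standard node stats API; the jq shaping is mine):

```shell
# Cumulative search thread pool rejections per node; a count that keeps
# climbing after the change would support a rollback.
curl -s "$CLUSTER_URL/_nodes/stats/thread_pool" \
  | jq '[.nodes[] | {node: .name, search_rejected: .thread_pool.search.rejected}]'
```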
Summary of infrastructure changes
- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?

Summary of the above
Changes checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- There are currently no active incidents.