Grow Elasticsearch cluster gitlab-logs-prod from 9 hot nodes to 13
Production Change
Change Summary
In the last few weeks we have seen an increase in logging-related alerts and pages. There appear to be multiple problems with different components of the logging pipeline: &684 (closed). To reduce the number of paging alerts so that we can start addressing these problems, we want to resize several components. Beats have been moved to an HPA and will be investigated in a separate issue; this change is only concerned with resizing the Elasticsearch production logging cluster.
We currently run 9 nodes per zone in the hot tier, which gives us 18 nodes total. Even though we are not seeing the usual indicators of saturation on ES, such as high indexing latency or write rejections, there is still some evidence of resource constraint on selected ES cluster nodes.
We want to reach a point where CPU utilization is around 60-70%, leaving headroom for traffic bursts and processing backlog. To achieve that, we want to increase the number of Elasticsearch hot nodes from 9 to 13 per zone.
Example related incident: #6601 (closed)
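As a rough sanity check of the sizing, the expected per-node CPU after the resize can be estimated by assuming indexing load spreads evenly across hot nodes. This is a sketch only; the 85% starting figure below is a hypothetical placeholder, not a measured value from gitlab-logs-prod.

```python
# Rough capacity estimate: assuming load spreads evenly across hot nodes,
# per-node CPU utilization scales with the inverse of the node count.
# current_cpu_pct below is a hypothetical placeholder, not a measured
# number from the gitlab-logs-prod cluster.

def projected_cpu(current_cpu_pct: float, current_nodes: int, new_nodes: int) -> float:
    """Estimate per-node CPU utilization (percent) after resizing the hot tier."""
    return current_cpu_pct * current_nodes / new_nodes

# Going from 9 to 13 hot nodes per zone:
print(round(projected_cpu(85.0, 9, 13), 1))  # 58.8
```

Under this (simplistic) model, a node running hot at 85% today would land near 59% after the resize, inside the 60-70% target band with headroom to spare.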
Change Details
- Services Impacted - Service::Logging
- Change Technician - @mwasilewski-gitlab
- Change Reviewer - @rehab
- Scheduled Time - Mar 23 13:30 UTC
- Time tracking - 36h
- Downtime Component - none
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 1
- Set label change::in-progress on this issue
Change Steps - steps to take to execute the change
Estimated Time to Complete - 36h
- Go to https://cloud.elastic.co, log in, and edit the gitlab-logs-prod deployment
- Increase the number of nodes per zone in the hot tier from 9 to 11
- Wait for the cluster resize to finish and shards to rebalance (this will likely take a few hours)
- The next working day, go to https://cloud.elastic.co, log in, and edit the gitlab-logs-prod deployment again
- Increase the number of nodes per zone in the hot tier from 11 to 13
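The "resize finishes and shards rebalance" condition can be checked against the standard fields returned by Elasticsearch's `GET _cluster/health` API. A minimal sketch of the decision logic, assuming the health JSON has already been fetched (via Kibana console, curl with an API key, etc.); the node counts in the sample are illustrative:

```python
# Decide whether a hot-tier resize has settled, based on the standard
# fields of Elasticsearch's GET _cluster/health response. Fetching the
# JSON from the deployment is out of scope here; this shows only the
# decision logic.

def rebalance_complete(health: dict, expected_nodes: int) -> bool:
    """True when all expected data nodes have joined and no shards are in flight."""
    return (
        health.get("status") == "green"
        and health.get("number_of_data_nodes", 0) >= expected_nodes
        and health.get("relocating_shards", 1) == 0
        and health.get("initializing_shards", 1) == 0
        and health.get("unassigned_shards", 1) == 0
    )

sample = {
    "status": "green",
    "number_of_data_nodes": 22,  # e.g. 11 hot nodes x 2 zones, ignoring other tiers
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 0,
}
print(rebalance_complete(sample, expected_nodes=22))  # True
```

Note that `number_of_data_nodes` counts data nodes across all tiers, so the `>=` comparison is deliberately loose.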
Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 10
- Check that CPU utilization during peak traffic hours does not exceed 80%
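The verification step above amounts to scanning peak-hour CPU samples per node against the 80% ceiling. A minimal sketch, with illustrative sample data rather than real readings from the logging dashboards:

```python
# Post-change verification sketch: given per-node CPU utilization samples
# (in percent) collected during peak traffic hours, flag any node whose
# peak exceeded the 80% ceiling. The sample values are illustrative only.

def nodes_over_threshold(samples: dict, limit: float = 80.0) -> list:
    """Return the names of nodes whose peak CPU exceeded `limit` percent."""
    return sorted(node for node, cpus in samples.items() if max(cpus) > limit)

peak_cpu = {
    "hot-node-01": [55.0, 62.5, 71.0],
    "hot-node-02": [48.0, 83.0, 66.0],  # breaches the 80% ceiling
}
print(nodes_over_threshold(peak_cpu))  # ['hot-node-02']
```

An empty result means the resize achieved its goal; any listed node is a signal to re-evaluate the sizing before closing the change.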
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
- Edit the gitlab-logs-prod deployment and set the hot tier back to 9 nodes per zone
Monitoring
Key metrics to observe
- Logging dashboard 1 : https://dashboards.gitlab.net/d/USVj3qHmk/logging?orgId=1&from=now-24h&to=now
- Logging dashboard 2 : https://dashboards.gitlab.net/d/logging-main/logging-overview?orgId=1
Summary of infrastructure changes
- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?

Summary of the above: new compute instances in the Elasticsearch production logging cluster.
Change Reviewer checklist
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change. If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
Change Technician checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.
