2024-05-08 to 2024-05-10: Add OSQuery to postgres, patroni boxes
Production Change
Change Summary
This change is marked C2 because OSQuery has impacted performance in earlier rollouts.
The change is planned to deploy in stages from 2024-05-08 to 2024-05-10.
Below are the high-level details; more information can be found in the issue.
History
- Initially, when OSQuery was deployed, it restarted constantly and context switching was high
- OSQuery got re-installed/re-configured/restarted every time Chef ran on the node
- Services were getting watchdog-killed multiple times
Changes made to overcome these issues
- The gitlab-uptycs::remove recipe caused the services to be uninstalled and reinstalled on every Chef run, so the recipe was removed
- The OSQuery cookbook was refactored to add flags for testing
- The OSQuery version was updated to 5.1.0
- Snapshot queries that were not required have been removed
- Events collection has been disabled to reduce CPU usage
- The watchdog memory limit will be raised to 650 MB to reduce service restarts
With these options disabled, OSQuery is now in shape to be rolled out to the production boxes. Disabling events collection should also reduce OSQuery's memory usage; a sketch of the corresponding flags follows.
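For illustration, a minimal sketch of what these settings could look like in an osquery flagfile. The flag names are standard osquery flags, but the file path and the exact values Chef renders are assumptions, not copied from the cookbook:

```
# /etc/osquery/osquery.flags -- illustrative path; Chef may manage this file elsewhere
# Raise the watchdog memory limit (in MB) to reduce watchdog-triggered restarts
--watchdog_memory_limit=650
# Disable the events subsystem to cut CPU and memory usage
--disable_events=true
```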
The changes have already been tested and deployed in staging; the staging issue tracks all the changes and outputs.
The change has already been deployed to the following production fleets, each via its own Change Management Issue, with no significant performance impact observed:
- Monitoring boxes
- HAProxy boxes
- Redis boxes
Below is the list of services and the proposed plan to cover the prod instances:
Services | Hosts | Batch | Planned Date |
---|---|---|---|
patroni-registry-archive | postgres-registry-v14-dr-archive-01-db | 5th batch | 8th May |
patroni-registry-delayed | postgres-registry-v14-dr-delayed-01-db | 5th batch | 8th May |
patroni-ci-archive | postgres-ci-dr-archive-v14-01-db | 5th batch | 8th May |
patroni-ci-delayed | postgres-ci-dr-delayed-v14-01-db | 5th batch | 8th May |
postgres-archive | postgres-dr-main-v14-archive-01-db | 5th batch | 8th May |
postgres-delayed | postgres-dr-main-v14-delayed-01-db | 5th batch | 8th May |
patroni-embedding | patroni-embedding-* | 5th batch | 9th May |
patroni-registry | patroni-registry-v14-* | 5th batch | 9th May |
patroni-ci | patroni-ci-v14-* | 5th batch | 10th May |
patroni | patroni-main-v14-* | 5th batch | 10th May |
Change Details
- Services Impacted - postgres, patroni, praefect
- Change Technician - @ugovindia
- Change Reviewer - @nduff
- Time tracking - 60 minutes
- Downtime Component - none
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
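If maintenance mode were needed (this change lists no downtime component, so it likely is not), a hedged sketch of toggling it via the GitLab application settings API; the hostname and token are placeholders, and the runbooks remain the authoritative procedure:

```
# Enable maintenance mode; set maintenance_mode=false afterwards to unset it.
# $ADMIN_TOKEN and the hostname are placeholders for illustration only.
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=true"
```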
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60 minutes
- Set label change::in-progress: /label ~change::in-progress
- Merge the MR (8th May) -
- Merge the MR (9th May) -
- Merge the MR (10th May) -
- Set label change::complete: /label ~change::complete
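After each merge, once Chef has converged on the target hosts, the following hedged spot checks can confirm osqueryd is healthy. The commands assume the standard osquery packaging (systemd unit osqueryd, osqueryi shell); adjust to however the cookbook installs it:

```
# Confirm osqueryd is running after the Chef run
sudo systemctl status osqueryd
# Spot-check osqueryd's resident memory (bytes) from osquery itself
sudo osqueryi --line "SELECT pid, resident_size FROM processes WHERE name = 'osqueryd';"
```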
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 60 minutes
- Revert the MR (8th May) -
- Revert the MR (9th May) -
- Revert the MR (10th May) -
- Set label change::aborted: /label ~change::aborted
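Once a revert has merged and Chef has converged, a hedged sanity check that the previous configuration is back in effect; the flagfile path is an assumption, as above:

```
# Inspect the flags Chef rendered after the revert (path is illustrative)
sudo cat /etc/osquery/osquery.flags
# Confirm osqueryd restarted cleanly with the reverted configuration
sudo systemctl status osqueryd
```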
Monitoring
Key metrics to observe
- Metric: Node Schedule Waiting Time
- Location: Node Schedule Waiting Time
- What changes to this metric should prompt a rollback: Drastic increase in schedule waiting time.
- Metric: patroni/postgres/praefect osqueryd CPU usage
- Location: patroni osqueryd CPU usage
- What changes to this metric should prompt a rollback: A sharp increase in CPU usage.
- Metric: patroni/postgres/praefect osquery memory usage
- Location: patroni/postgres/praefect osquery memory usage
- What changes to this metric should prompt a rollback: Memory usage above 2 GB.
- Metric: patroni/postgres/praefect load average
- Location: patroni/postgres/praefect load average
- What changes to this metric should prompt a rollback: A sustained increase in load average.
- Metric: patroni/postgres/praefect memory usage
- Location: patroni/postgres/praefect memory usage
- What changes to this metric should prompt a rollback: A sharp increase in memory usage.
- Metric: OSQuery dashboard
- Location: https://dashboards.gitlab.net/d/fjSLYzRWz/osquery?orgId=1&refresh=5m&var-environment=gprd
Note: a host only appears in the dashboard above after osquery is enabled on it.
- Metric: OOMKill logs
- Location: https://log.gprd.gitlab.net/app/discover#/?_g=()&_a=(columns:!(),filters:!(),index:AWM6inm11NBBQZg_EOxi,interval:auto,query:(language:kuery,query:'%22Memory%20limits%20exceeded%22'),sort:!(!('@timestamp',desc)))
- What changes to this metric should prompt a rollback: A high number of OOM kills. OOM kills would most likely result in further actions, such as optimizing the queries or the memory limits, rather than rolling back the change.
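As a supplement to the dashboards above, hedged PromQL sketches for watching osqueryd per host. The metric names assume process-exporter-style metrics and may not match what the linked dashboards actually query:

```
# CPU consumed by osqueryd per host (assumed process-exporter metric names)
rate(namedprocess_namegroup_cpu_seconds_total{groupname="osqueryd"}[5m])

# Resident memory of osqueryd; sustained readings near the 650 MB watchdog
# limit or the 2 GB rollback threshold warrant attention
namedprocess_namegroup_memory_bytes{groupname="osqueryd", memtype="resident"}
```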
Change Reviewer checklist
Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
- The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.