Enable the Nakayoshi fork for Puma in production

Production Change

Change Summary

This change updates the environment variables that control Puma's startup procedure in order to enable the forking mechanism called Nakayoshi. Enabling this improves Puma's memory usage. This capability was introduced and is further discussed in gitlab-org/gitlab#288042 (closed)
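For context, the Nakayoshi fork is a Puma 5.x configuration option: before forking workers, the master process runs GC (and, on supported Rubies, `GC.compact`) so that workers share more memory via copy-on-write. A minimal sketch of how this is typically wired up in `config/puma.rb` is below; the environment variable name is an illustrative assumption, not necessarily the one used in GitLab's production configuration.

```ruby
# config/puma.rb -- illustrative sketch only, not GitLab's exact config.
# DISABLE_PUMA_NAKAYOSHI_FORK is an assumed variable name for illustration.

workers ENV.fetch('PUMA_WORKER_PROCESSES', 2).to_i

# Nakayoshi fork (Puma 5.x): run GC in the master before forking workers
# so forked workers share more memory through copy-on-write pages.
nakayoshi_fork unless ENV['DISABLE_PUMA_NAKAYOSHI_FORK'] == 'true'
```

With a gated option like this, the change can be rolled out (and rolled back) purely by flipping the environment variable and restarting Puma, with no code deployment.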

Change Details

  1. Services Impacted - ServiceAPI, ServiceWeb
  2. Change Technician - @skarbek @alipniagov
  3. Change Criticality - C4
  4. Change Type - changescheduled
  5. Change Reviewer - @hphilipps
  6. Due Date - 2020-02-17
  7. Time tracking - 15 minutes
  8. Downtime Component - 0

Detailed steps for the change

Pre-Change Steps - steps to be completed before execution of the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 5

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 10

Monitoring

Logging

Historically, this change triggered a GC failure that caused severe misbehavior in Puma. Fortunately, this manifests clearly as a large volume of segmentation faults recorded in our Puma logs. Monitor for them here: https://log.gprd.gitlab.net/goto/b71d63d5bb9a436f6a2d5c4fb9cb72cb

Key metrics to observe

Note that this behavior surfaces at a random point in time: a GC operation must happen to evict the right set of values from memory before the faults appear. Because of this pattern, we would expect to see the behavior one node at a time. Refer to incident #3370 (closed) for additional details.

Summary of infrastructure changes

  • Does this change introduce new compute instances? no
  • Does this change re-size any existing compute instances? no
  • Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc? no

Changes checklist

  • This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
  • This issue has the change technician as the assignee.
  • Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
  • Necessary approvals have been completed based on the Change Management Workflow.
  • Change has been tested in staging and results noted in a comment on this issue.
  • A dry-run has been conducted and results noted in a comment on this issue.
  • SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  • There are currently no active incidents.
Edited by John Skarbek