Update memory request and limit for rke2-ingress-nginx-controller
What does this MR do and why?
This MR updates the memory request and limit for the rke2-ingress-nginx-controller pods to handle increased memory usage during traffic spikes.
Changes:
Memory Request: Increased from 90Mi to 300Mi to provide better headroom for observed memory usage while maintaining efficient scheduling.
Memory Limit: Set to 1Gi to accommodate potential spikes and prevent OOM (Out of Memory) issues.
This change improves pod stability and ensures the controller has the resources it needs to handle load without over-allocating memory.
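In Helm values terms, the change corresponds to something like the following sketch. The `controller.resources` key path is an assumption for illustration; the actual path depends on the rke2-ingress-nginx chart's values schema.

```yaml
# Hypothetical values override for the rke2-ingress-nginx chart.
# The exact key path may differ from the chart's actual schema.
controller:
  resources:
    requests:
      memory: 300Mi   # raised from 90Mi to match observed baseline usage
    limits:
      memory: 1Gi     # headroom for traffic spikes, prevents OOM kills
```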
Observations:
In Issue #2048 (closed), the initial memory request was 90Mi, and observed usage was 350% higher in an environment with an unloaded management cluster and a single workload cluster attached.
In my environment, which consists of an unloaded management cluster without any workload clusters attached, I observed the following memory usage:
```
$ kubectl top pod -n kube-system | grep rke2-ingress-nginx
rke2-ingress-nginx-controller-l89w6   2m   147Mi
rke2-ingress-nginx-controller-vddvk   2m   152Mi
rke2-ingress-nginx-controller-wzrw6   2m   151Mi
```
This shows that even in an idle state, memory usage is significantly higher than the previous request value.
Decision to Adjust Request to 300Mi:
Initially, 500Mi was considered, based on the 350% increase over 90Mi.
However, since requests do not cap actual usage, a lower request value improves scheduler efficiency without impacting performance.
300Mi is already 2x the observed baseline usage of ~150Mi, making it a more balanced choice that avoids unnecessary over-provisioning. The limit remains at 1Gi to handle potential traffic spikes.
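The sizing above is simple arithmetic; as a sketch, with the ~150Mi idle figure taken from the `kubectl top` output above:

```shell
#!/bin/sh
# Sanity-check the request sizing: 2x the ~150Mi observed idle usage.
observed_mi=150
request_mi=$((observed_mi * 2))
echo "request=${request_mi}Mi"   # prints "request=300Mi", well under the 1Gi limit
```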
Related issue: #2048 (closed)
Related reference(s)
Test coverage
CI configuration
Below you can choose test deployment variants to run in this MR's CI.
Legend:

| Icon | Meaning | Available values |
|---|---|---|
| ☁️ | Infra Provider | capd, capo, capm3 |
| 🚀 | Bootstrap Provider | kubeadm (alias kadm), rke2 |
| 🐧 | Node OS | ubuntu, suse |
| 🛠️ | Deployment Options | light-deploy, dev-sources, ha, misc |
| 🎬 | Pipeline Scenarios | Available scenario list and description |
- 🎬 preview ☁️ capd 🚀 kadm 🐧 ubuntu
- 🎬 preview ☁️ capo 🚀 rke2 🐧 suse
- 🎬 preview ☁️ capm3 🚀 rke2 🐧 ubuntu
- ☁️ capd 🚀 kadm 🛠️ light-deploy 🐧 ubuntu
- ☁️ capd 🚀 rke2 🛠️ light-deploy 🐧 suse
- ☁️ capo 🚀 rke2 🐧 suse
- ☁️ capo 🚀 kadm 🐧 ubuntu
- ☁️ capo 🚀 rke2 🎬 rolling-update 🛠️ ha 🐧 ubuntu
- ☁️ capo 🚀 kadm 🎬 wkld-k8s-upgrade 🐧 ubuntu
- ☁️ capo 🚀 rke2 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 suse
- ☁️ capo 🚀 rke2 🎬 sylva-upgrade-from-1.3.x 🛠️ ha,misc 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🐧 suse
- ☁️ capm3 🚀 kadm 🐧 ubuntu
- ☁️ capm3 🚀 kadm 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🎬 wkld-k8s-upgrade 🛠️ ha 🐧 suse
- ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🎬 sylva-upgrade-from-1.3.x 🛠️ misc,ha 🐧 suse
- ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 suse
Global config for deployment pipelines

- autorun pipelines
- allow failure on pipelines
- record sylvactl events
Notes:
- Enabling `autorun` will make deployment pipelines run automatically without human interaction.
- Disabling `allow failure` will make deployment pipelines mandatory for pipeline success.
- If both `autorun` and `allow failure` are disabled, deployment pipelines will need manual triggering but will block the pipeline.
Be aware: after a configuration change, the pipeline is not triggered automatically.
Please run it manually (by clicking the run pipeline button in the Pipelines tab) or push new code.