[CAPM3] - On bootstrap-cluster rke2-ingress-nginx can be subject to OOM

The following behavior was observed on a bootstrap cluster:

kube-system rke2-ingress-nginx-controller-trrhp 1/1 Running 95 (10m ago) 18h 

It seems that the rke2-ingress-nginx pod is being restarted repeatedly because its container gets OOM-killed, as shown by the container's last termination state and by the node's kernel log below (a sketch of how to retrieve both follows the logs):

    lastState:                                                                                                                                                                                                     
      terminated:                                                                                                                                                                                                  
        containerID: containerd://438b1e475c5b2555f2baeba615baffffcbb224e90de4fcdec02d4596c7f18a3e                                                                                                                 
        exitCode: 137                                                                                                                                                                                              
        finishedAt: "2025-05-06T12:14:41Z"                                                                                                                                                                         
        reason: OOMKilled           <<<<<<                                                                                                                                                                                
        startedAt: "2025-05-06T12:02:17Z" 
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722764 (nginx) total-vm:393776kB, anon-rss:11772kB, file-rss:6892kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722766 (nginx) total-vm:393776kB, anon-rss:11772kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722768 (nginx) total-vm:393776kB, anon-rss:11764kB, file-rss:6896kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722770 (nginx) total-vm:393776kB, anon-rss:11764kB, file-rss:6892kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722780 (nginx) total-vm:393776kB, anon-rss:11772kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 722897 (nginx) total-vm:393776kB, anon-rss:11772kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 723084 (nginx) total-vm:393776kB, anon-rss:11764kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 723152 (nginx) total-vm:393776kB, anon-rss:11768kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: OOM victim 724548 (nginx) is already exiting. Skip killing the task
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: OOM victim 725030 (nginx) is already exiting. Skip killing the task
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: OOM victim 725242 (nginx) is already exiting. Skip killing the task
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: Killed process 725410 (nginx) total-vm:393776kB, anon-rss:11760kB, file-rss:6900kB, shmem-rss:120kB, UID:101 pgtables:240kB oom_score_adj:999
May 06 08:47:04 sylva systemd[1]: docker-05ad00599c0fb28528ee0e6ef2702b409189e95eadd3dbc8ac4ebf6f4aaba4cd.scope: A process of this unit has been killed by the OOM killer.
May 06 08:47:04 sylva kernel: Memory cgroup out of memory: OOM victim 725624 (nginx) is already exiting. Skip killing the task
May 06 08:59:06 sylva kernel: nginx invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=999
May 06 08:59:06 sylva kernel: CPU: 80 PID: 730364 Comm: nginx Not tainted 5.15.0-138-generic #148-Ubuntu
May 06 08:59:06 sylva kernel: Hardware name: Dell Inc. PowerEdge R640/0H28RR, BIOS 2.18.1 02/22/2023
May 06 08:59:06 sylva kernel: Call Trace:
May 06 08:59:06 sylva kernel:  <TASK>
May 06 08:59:06 sylva kernel:  show_stack+0x52/0x5c
May 06 08:59:06 sylva kernel:  dump_stack_lvl+0x4a/0x63
May 06 08:59:06 sylva kernel:  dump_stack+0x10/0x16
May 06 08:59:06 sylva kernel:  dump_header+0x53/0x228
May 06 08:59:06 sylva kernel:  oom_kill_process.cold+0xb/0x10
May 06 08:59:06 sylva kernel:  out_of_memory+0x106/0x2e0
May 06 08:59:06 sylva kernel:  mem_cgroup_out_of_memory+0x13f/0x160
May 06 08:59:06 sylva kernel:  try_charge_memcg+0x687/0x740
May 06 08:59:06 sylva kernel:  charge_memcg+0x45/0xb0
May 06 08:59:06 sylva kernel:  __mem_cgroup_charge+0x2d/0x90
May 06 08:59:06 sylva kernel:  do_anonymous_page+0x114/0x3c0
May 06 08:59:06 sylva kernel:  handle_pte_fault+0x20a/0x240
May 06 08:59:06 sylva kernel:  __handle_mm_fault+0x405/0x6f0
May 06 08:59:06 sylva kernel:  handle_mm_fault+0xd8/0x2c0
May 06 08:59:06 sylva kernel:  do_user_addr_fault+0x1c9/0x640
May 06 08:59:06 sylva kernel:  ? exit_to_user_mode_prepare+0x37/0xb0
May 06 08:59:06 sylva kernel:  exc_page_fault+0x77/0x170
May 06 08:59:06 sylva kernel:  asm_exc_page_fault+0x27/0x30
May 06 08:59:06 sylva kernel: RIP: 0033:0x7f8724c9e54f
May 06 08:59:06 sylva kernel: Code: 47 18 48 29 da 48 83 fa 1f 4a 8d 0c 10 0f 86 10 07 00 00 48 8d 34 18 48 89 57 08 48 83 cb 03 48 89 77 18 48 89 d7 48 83 cf 01 <48> 89 7e 08 48 89 11 48 89 58 08 e9 ff fe ff f>
May 06 08:59:06 sylva kernel: RSP: 002b:00007fffaf1a8b10 EFLAGS: 00010202
May 06 08:59:06 sylva kernel: RAX: 00007f870d1a8f18 RBX: 000000000000018b RCX: 00007f870d1c4000
May 06 08:59:06 sylva kernel: RDX: 000000000001af60 RSI: 00007f870d1a90a0 RDI: 000000000001af61
May 06 08:59:06 sylva kernel: RBP: 00007f8723f2e3f0 R08: 00005651768c4010 R09: 000000000001b0e8
May 06 08:59:06 sylva kernel: R10: 000000000001b0e8 R11: 0000000000000000 R12: 0000000000000000
May 06 08:59:06 sylva kernel: R13: 00007f8723f2e380 R14: 00007f8723f2e380 R15: 0000565176903650
May 06 08:59:06 sylva kernel:  </TASK>
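
For reference, the two pieces of evidence above can be pulled roughly as follows (a sketch; the pod name is the one from this report, and journalctl is run on the bootstrap node):

# Last termination state of the controller container (shows reason: OOMKilled)
kubectl -n kube-system get pod rke2-ingress-nginx-controller-trrhp \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Kernel-side cgroup OOM events on the node
journalctl -k | grep -i "memory cgroup out of memory"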

As soon as the pod starts up, it seems to consume memory close to its limit of 1G (set in https://gitlab.com/sylva-projects/sylva-core/-/blob/main/charts/sylva-units/values.yaml?ref_type=heads#L4158):

kubectl top pod rke2-ingress-nginx-controller-trrhp -n kube-system                                                                                                              
NAME                                  CPU(cores)   MEMORY(bytes)                                                                                                                                                   
rke2-ingress-nginx-controller-trrhp   12m          994Mi
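
To relate this to the number of nginx workers, one can check the configured limit and the worker_processes value rendered into the nginx configuration (a sketch; it assumes grep is available in the controller image, which is usually the case):

# Memory limit configured on the controller container
kubectl -n kube-system get pod rke2-ingress-nginx-controller-trrhp \
  -o jsonpath='{.spec.containers[0].resources.limits.memory}{"\n"}'

# worker_processes value rendered by the controller
kubectl -n kube-system exec rke2-ingress-nginx-controller-trrhp -- \
  grep worker_processes /etc/nginx/nginx.conf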

This issue (https://github.com/kubernetes/ingress-nginx/issues/8166) is interesting: one of the comments suggests customizing the worker-processes parameter:

We explicitly configured config.worker-processes as 8, as some of our nodes would spawn an (for our purposes) overkill amount of processes due to their high CPU core counts.
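
On an RKE2 cluster, one way to express this override is a HelmChartConfig for the packaged rke2-ingress-nginx chart; the snippet below is only a sketch, and in the Sylva setup the same controller.config value would rather be carried through the sylva-units values:

    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: rke2-ingress-nginx
      namespace: kube-system
    spec:
      valuesContent: |-
        controller:
          config:
            # cap the number of nginx workers instead of "auto" (= one per CPU core)
            worker-processes: "8"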

After setting this parameter to 8 (instead of auto, which in my case resulted in 96 worker processes since the bootstrap cluster is on a bare-metal server), the pod consumes drastically less memory:

kubectl top pods rke2-ingress-nginx-controller-dhgwh -n kube-system
NAME                                  CPU(cores)   MEMORY(bytes)
rke2-ingress-nginx-controller-dhgwh   4m           104Mi
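
This is consistent with the per-worker footprint seen in the kernel log above (rough arithmetic, not a measured breakdown): each nginx worker shows about 11-12 MB of anonymous RSS, so 96 workers alone can approach the 1G limit, while 8 workers stay on the order of 100 MiB, which matches the drop from 994Mi to 104Mi in kubectl top.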

We still need to investigate this behavior a little further (does the same behavior occur on the management cluster? is there a memory leak or something else that would make the pod exceed its limit even with fewer workers? etc.).
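
As a first check, the same measurement can be taken on the management cluster (a sketch; <management-cluster-kubeconfig> is a placeholder):

kubectl --kubeconfig <management-cluster-kubeconfig> -n kube-system top pods | grep ingress-nginx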
