NeuVector enforcer pods' default CPU limit is not sufficient in Protect mode
As of Sylva 1.3.10, NeuVector pods are deployed with the following CPU and memory resource requests and limits:
limits:
  cpu: "400m"
  memory: 2792Mi
requests:
  cpu: "100m"
  memory: 2280Mi
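These values can be confirmed on a running cluster by inspecting the enforcer DaemonSet; in the sketch below, the neuvector namespace and the neuvector-enforcer-pod DaemonSet name are assumptions that may differ depending on how Sylva deploys the chart:

kubectl -n neuvector get daemonset neuvector-enforcer-pod \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'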
According to the NeuVector documentation (https://open-docs.neuvector.com/basics/requirements#performance-and-scaling), the enforcer pods need more CPU resources in Protect mode, and the documentation recommends allocating 1 CPU:
In Monitor mode (network filtering similar to a mirror/tap), there is no performance impact and the Enforcer handles traffic at line speed, generating alerts as needed. In Protect mode (inline firewall), the Enforcer requires CPU and memory to filter connections with deep packet inspection and hold them to determine whether they should be blocked/dropped. Generally, with 1GB of memory and a shared CPU, the Enforcer should be able to handle most environments while in Protect mode.
...
Latency is another performance metric which depends on the type of network connections. Similar to throughput, latency is not affected in Monitor mode, only for services in Protect (inline firewall) mode. Small packets or simple/fast services will generate a higher latency by NeuVector as a percentage, while larger packets or services requiring complex processing will show a lower percentage of added latency by the NeuVector enforcer.
According to our tests on a VM management cluster with 8 vCPUs per worker node:
- a CPU limit of 2 CPUs is a good compromise: 1 CPU is not enough, and 4 CPUs does not improve the results further
- increasing the CPU request does not improve the results (a way to observe enforcer CPU consumption is sketched after this list)
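To reproduce this kind of measurement, the live CPU consumption of the enforcer pods can be watched while traffic flows through services in Protect mode. A minimal sketch, assuming metrics-server is installed and that the enforcer pods carry the app=neuvector-enforcer-pod label in a neuvector namespace (both are assumptions that may differ in a given Sylva deployment):

kubectl -n neuvector top pods -l app=neuvector-enforcer-pod

If the reported CPU usage sits at the configured limit, the enforcer is most likely being throttled and Protect-mode latency will degrade.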
Changing the CPU limit can be done at deployment time with the following configuration in the management cluster's values.yaml file:
units:
  neuvector:
    enabled: true
    helmrelease_spec:
      values:
        resources:
          limits:
            cpu: "2"
            memory: 2792Mi
          requests:
            cpu: 100m
            memory: 2280Mi
But it would be good to have sensible default values.
Remark: this setting impacts all NeuVector pods: not only the enforcers, but also the controller and the manager.
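If the intent is to raise the limit only for the enforcers, the NeuVector Helm chart also exposes per-component resources (e.g. enforcer.resources); whether the chart version shipped by Sylva supports this is an assumption and should be checked against its values. A minimal sketch:

units:
  neuvector:
    enabled: true
    helmrelease_spec:
      values:
        enforcer:
          # Applies only to the enforcer DaemonSet; controller and manager keep the chart defaults
          resources:
            limits:
              cpu: "2"
              memory: 2792Mi
            requests:
              cpu: 100m
              memory: 2280Mi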