Inconsistent memory limits configuration in Kyverno admissionController Helm chart values

The Kyverno Helm chart behaves inconsistently when memory limits and requests are configured for the admissionController: memory limits set directly under admissionController.resources in values.yaml are not reflected in the deployed pods.

When configuring resources in values.yaml:

admissionController:
  resources:
    limits:
      memory: 1024Mi

The deployed pods show different values, which match the chart's defaults:

{
  "limits": {
    "memory": "384Mi"
  },
  "requests": {
    "cpu": "100m",
    "memory": "128Mi"
  }
}
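
For reference, the deployed resources above can be inspected with kubectl. The deployment name kyverno-admission-controller and the kyverno namespace used here are the chart defaults and may differ in other installs:

kubectl -n kyverno get deployment kyverno-admission-controller \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'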

According to the Kyverno chart's README.md, resources must be set under admissionController.initContainer.resources or admissionController.container.resources; a top-level admissionController.resources key is not honored. We need to correct the values.yaml accordingly.
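
A minimal sketch of the corrected values.yaml, assuming the 1024Mi limit is intended for the main container (use admissionController.initContainer.resources instead if the init container is the target):

admissionController:
  container:
    resources:
      limits:
        memory: 1024Mi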
