Exclude Neuvector from 'disallow-latest-and-main-tag' ValidatingAdmissionPolicy validation
What does this MR do and why?
This MR updates the `disallow-latest-and-main-tag` ValidatingAdmissionPolicy (VAP) to exclude specific Neuvector resources from validation. The original goal was to keep the VAP definition decoupled from object-specific exclusions, so that the policy stays modular and reusable (the `tag-validating-policy.sylva.io: excluded` label is kept for pod labels and for a future decoupling). However, Neuvector's Helm deployment templates do not expose a way to add custom labels via values.yaml, and relying on postRenderers alone is not sufficient either, since they only run after Helm has rendered the manifests.
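For context, a label-based opt-out on the VAP side could look roughly like the sketch below. This is an illustrative example only, not the actual sylva-units manifest: the binding name, matched resources and selector shape are assumptions. It shows how a ValidatingAdmissionPolicyBinding can skip any object carrying the `tag-validating-policy.sylva.io: excluded` label, while object-specific carve-outs (as done here for the Neuvector resources) would instead live in the policy's own matchConditions.

```yaml
# Illustrative sketch only (assumed names/fields), not the actual sylva-units manifest.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: disallow-latest-and-main-tag        # assumed binding name
spec:
  policyName: disallow-latest-and-main-tag  # the VAP this MR modifies
  validationActions: ["Deny"]
  matchResources:
    # Label-based opt-out: objects carrying the exclusion label are not validated.
    objectSelector:
      matchExpressions:
      - key: tag-validating-policy.sylva.io
        operator: NotIn
        values: ["excluded"]
```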
Test coverage
Initial deploy-management-cluster:
`kubectl get helmrelease neuvector -n sylva-system -o yaml`

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  annotations:
    reconcile.fluxcd.io/requestedAt: 2025-05-13T05:11:20
    sylvactl/reconcileCompletedAt.1.iazf89: 2025-05-13T05:11:36
    sylvactl/reconcileStartedAt.1.iazf89: 2025-05-13T05:11:20
  creationTimestamp: "2025-05-13T05:11:19Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 1
  labels:
    app.kubernetes.io/instance: sylva-units
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: sylva-units
    app.kubernetes.io/version: 0.0.0
    helm.sh/chart: sylva-units-0.0.0-git-18d47c6a_1
    kustomize.toolkit.fluxcd.io/name: neuvector
    kustomize.toolkit.fluxcd.io/namespace: sylva-system
    sylva-units.unit: neuvector
    sylva-units/root-dependency-wait: ""
  name: neuvector
  namespace: sylva-system
  resourceVersion: "83388"
  uid: 7d2f6795-6cf2-44c2-8b8a-5a230a9ad6b6
spec:
  chart:
    spec:
      chart: neuvector-core
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: unit-neuvector
      valuesFiles: []
      version: 2.8.3
  dependsOn:
  - name: root-dependency-2
  interval: 20m
  postRenderers:
  - kustomize:
      patches:
      - patch: |-
          kind: CronJob
          apiVersion: batch/v1
          metadata:
            name: neuvector-updater-pod
            namespace: neuvector
          spec:
            startingDeadlineSeconds: 21600
  releaseName: neuvector
  targetNamespace: neuvector
  values:
    cve:
      scanner:
        enabled: true
        image:
          env:
          - name: https_proxy
            value: ""
          - name: no_proxy
            value: ""
          repository: neuvector/scanner
        internal:
          certificate:
            secret: neuvector-internal
      updater:
        enabled: true
        image:
          repository: neuvector/updater
  valuesFrom: []
status:
  conditions:
  - lastTransitionTime: "2025-05-13T05:11:35Z"
    message: Helm install succeeded for release neuvector/neuvector.v1 with chart
      neuvector-core@2.8.3
    observedGeneration: 1
    reason: InstallSucceeded
    status: "True"
    type: Ready
  - lastTransitionTime: "2025-05-13T05:11:35Z"
    message: Helm install succeeded for release neuvector/neuvector.v1 with chart
      neuvector-core@2.8.3
    observedGeneration: 1
    reason: InstallSucceeded
    status: "True"
    type: Released
  helmChart: sylva-system/sylva-system-neuvector
  history:
  - appVersion: 5.4.1
    chartName: neuvector-core
    chartVersion: 2.8.3
    firstDeployed: "2025-05-13T05:11:21Z"
    lastDeployed: "2025-05-13T05:11:21Z"
    name: neuvector
    namespace: neuvector
    status: deployed
    version: 1
  lastAttemptedGeneration: 1
  lastAttemptedReleaseAction: install
  lastAttemptedRevision: 2.8.3
  lastHandledReconcileAt: 2025-05-13T05:11:20
  observedGeneration: 1
```
`kubectl -n neuvector get deployment neuvector-scanner-pod -o yaml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: neuvector
    meta.helm.sh/release-namespace: neuvector
  creationTimestamp: "2025-05-13T05:11:23Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    chart: neuvector-core-2.8.3
    helm.toolkit.fluxcd.io/name: neuvector
    helm.toolkit.fluxcd.io/namespace: sylva-system
    release: neuvector
  name: neuvector-scanner-pod
  namespace: neuvector
  resourceVersion: "83204"
  uid: 62389531-949b-4905-a27d-648a6cd2f5ae
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: neuvector-scanner-pod
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: neuvector-scanner-pod
    spec:
      containers:
      - env:
        - name: CLUSTER_JOIN_ADDR
          value: neuvector-svc-controller.neuvector
        image: docker.io/neuvector/scanner:latest
        imagePullPolicy: Always
        name: neuvector-scanner-pod
---
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2025-05-13T05:11:34Z"
    lastUpdateTime: "2025-05-13T05:11:34Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-05-13T05:11:23Z"
    lastUpdateTime: "2025-05-13T05:11:34Z"
    message: ReplicaSet "neuvector-scanner-pod-7f64c5776f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
```
`kubectl -n neuvector get pods neuvector-scanner-pod-7f64c5776f-khpj5 -o yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-05-13T05:11:23Z"
  generateName: neuvector-scanner-pod-7f64c5776f-
  labels:
    app: neuvector-scanner-pod
    pod-template-hash: 7f64c5776f
  name: neuvector-scanner-pod-7f64c5776f-khpj5
  namespace: neuvector
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: neuvector-scanner-pod-7f64c5776f
    uid: 3834bf6e-3751-470f-97c6-ab823f12d2d9
spec:
  containers:
  - env:
    - name: CLUSTER_JOIN_ADDR
      value: neuvector-svc-controller.neuvector
    image: docker.io/neuvector/scanner:latest
    imagePullPolicy: Always
    name: neuvector-scanner-pod
---
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T05:11:34Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T05:11:23Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T05:11:34Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T05:11:34Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T05:11:23Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://08bea45d84bbe5ca5bba965c20dd06228400d2208bbdbb5ca91508eea2c87d78
    image: docker.io/neuvector/scanner:latest
    imageID: docker.io/neuvector/scanner@sha256:7fb0e2f9febf17d33a153b3b371544c1ec32ac8dfabebc978d6e453a3e7c9747
    lastState: {}
    name: neuvector-scanner-pod
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-05-13T05:11:33Z"
  hostIP: 192.168.100.22
  hostIPs:
  - ip: 192.168.100.22
  phase: Running
  podIP: 100.72.106.52
  podIPs:
  - ip: 100.72.106.52
  qosClass: BestEffort
  startTime: "2025-05-13T05:11:23Z"
```
After update-management-cluster:
`kubectl get helmrelease neuvector -n sylva-system -o yaml`

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  annotations:
    reconcile.fluxcd.io/requestedAt: 2025-05-13T06:57:03
    sylva-units-helm-revision: "3"
    sylvactl/readyMessage: Neuvector UI can be reached at https://neuvector.172.18.0.2.nip.io
      (neuvector.172.18.0.2.nip.io must resolve to 192.168.100.2)
    sylvactl/reconcileCompletedAt.1.iazf89: 2025-05-13T05:11:36
    sylvactl/reconcileCompletedAt.2.cWzW69: 2025-05-13T06:57:25
    sylvactl/reconcileStartedAt.1.iazf89: 2025-05-13T05:11:20
    sylvactl/reconcileStartedAt.2.cWzW69: 2025-05-13T06:57:03
  creationTimestamp: "2025-05-13T05:11:19Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 2
  labels:
    app.kubernetes.io/instance: sylva-units
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: sylva-units
    app.kubernetes.io/version: 0.0.0
    helm.sh/chart: sylva-units-0.0.0-git-2c85cf16_4
    kustomize.toolkit.fluxcd.io/name: neuvector
    kustomize.toolkit.fluxcd.io/namespace: sylva-system
    sylva-units.unit: neuvector
    sylva-units/root-dependency-wait: ""
  name: neuvector
  namespace: sylva-system
  resourceVersion: "679674"
  uid: 7d2f6795-6cf2-44c2-8b8a-5a230a9ad6b6
spec:
  chart:
    spec:
      chart: neuvector-core
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: unit-neuvector
      valuesFiles: []
      version: 2.8.3
  dependsOn:
  - name: root-dependency-3
  driftDetection:
    mode: enabled
  interval: 20m
  postRenderers:
  - kustomize:
      patches:
      - patch: |-
          - op: replace
            path: /spec/startingDeadlineSeconds
            value: 21600
          - op: add
            path: /metadata/labels/tag-validating-policy.sylva.io
            value: excluded
          - op: add
            path: /spec/jobTemplate/spec/template/spec/containers/0/securityContext
            value:
              runAsNonRoot: true
              runAsGroup: 10000
              runAsUser: 10000
              seccompProfile:
                type: RuntimeDefault
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
        target:
          kind: CronJob
          name: neuvector-updater-pod
      - patch: |-
          - op: add
            path: /metadata/labels/tag-validating-policy.sylva.io
            value: excluded
        target:
          kind: Deployment
          name: neuvector-scanner-pod
  releaseName: neuvector
  targetNamespace: neuvector
  upgrade:
    crds: CreateReplace
  values:
---
    cve:
      scanner:
        enabled: true
        image:
          env:
          - name: https_proxy
            value: ""
          - name: no_proxy
            value: ""
          repository: neuvector/scanner
        internal:
          certificate:
            secret: neuvector-internal
        podAnnotations:
          kube-score/ignore: container-image-tag
        podLabels:
          tag-validating-policy.sylva.io: excluded
      updater:
        enabled: true
        image:
          repository: neuvector/updater
        podAnnotations:
          kube-score/ignore: container-image-tag
        podLabels:
          tag-validating-policy.sylva.io: excluded
---
status:
  conditions:
  - lastTransitionTime: "2025-05-13T06:57:24Z"
    message: Helm upgrade succeeded for release neuvector/neuvector.v2 with chart
      neuvector-core@2.8.3
    observedGeneration: 2
    reason: UpgradeSucceeded
    status: "True"
    type: Ready
  - lastTransitionTime: "2025-05-13T06:57:24Z"
    message: Helm upgrade succeeded for release neuvector/neuvector.v2 with chart
      neuvector-core@2.8.3
    observedGeneration: 2
    reason: UpgradeSucceeded
    status: "True"
    type: Released
  helmChart: sylva-system/sylva-system-neuvector
  history:
  - appVersion: 5.4.1
    chartName: neuvector-core
    chartVersion: 2.8.3
    firstDeployed: "2025-05-13T05:11:21Z"
    lastDeployed: "2025-05-13T06:57:04Z"
    name: neuvector
    namespace: neuvector
    status: deployed
    version: 2
  - appVersion: 5.4.1
    chartName: neuvector-core
    chartVersion: 2.8.3
    firstDeployed: "2025-05-13T05:11:21Z"
    lastDeployed: "2025-05-13T05:11:21Z"
    name: neuvector
    namespace: neuvector
    status: superseded
    version: 1
  lastAttemptedGeneration: 2
  lastAttemptedReleaseAction: upgrade
  lastAttemptedRevision: 2.8.3
  lastHandledReconcileAt: 2025-05-13T06:57:03
  observedGeneration: 2
  storageNamespace: sylva-system
```
`kubectl -n neuvector get deployment neuvector-scanner-pod -o yaml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    meta.helm.sh/release-name: neuvector
    meta.helm.sh/release-namespace: neuvector
  creationTimestamp: "2025-05-13T05:11:23Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
    chart: neuvector-core-2.8.3
    helm.toolkit.fluxcd.io/name: neuvector
    helm.toolkit.fluxcd.io/namespace: sylva-system
    release: neuvector
    tag-validating-policy.sylva.io: excluded
  name: neuvector-scanner-pod
  namespace: neuvector
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: neuvector-scanner-pod
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kube-score/ignore: container-image-tag
      creationTimestamp: null
      labels:
        app: neuvector-scanner-pod
        tag-validating-policy.sylva.io: excluded
    spec:
      containers:
      - env:
        - name: CLUSTER_JOIN_ADDR
          value: neuvector-svc-controller.neuvector
        image: docker.io/neuvector/scanner:latest
        imagePullPolicy: Always
        name: neuvector-scanner-pod
---
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2025-05-13T06:48:54Z"
    lastUpdateTime: "2025-05-13T06:48:54Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-05-13T05:11:23Z"
    lastUpdateTime: "2025-05-13T06:57:22Z"
    message: ReplicaSet "neuvector-scanner-pod-5dc98f778d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
```
`kubectl -n neuvector get pods neuvector-scanner-pod-5dc98f778d-frhz6 -o yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-05-13T06:57:07Z"
  generateName: neuvector-scanner-pod-5dc98f778d-
  labels:
    app: neuvector-scanner-pod
    pod-template-hash: 5dc98f778d
    tag-validating-policy.sylva.io: excluded
  name: neuvector-scanner-pod-5dc98f778d-frhz6
  namespace: neuvector
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: neuvector-scanner-pod-5dc98f778d
    uid: a53df82f-d161-409d-b0dc-04b86f7ced7e
spec:
  containers:
  - env:
    - name: CLUSTER_JOIN_ADDR
      value: neuvector-svc-controller.neuvector
    image: docker.io/neuvector/scanner:latest
    imagePullPolicy: Always
    name: neuvector-scanner-pod
---
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T06:57:10Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T06:57:07Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T06:57:10Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T06:57:10Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-05-13T06:57:07Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://b26e5bc82ebb19d894e1eccc14c1050982b337dd1cc147d18eb490a9e8fd176f
    image: docker.io/neuvector/scanner:latest
    imageID: docker.io/neuvector/scanner@sha256:a426cb0d9780b7d4cc06e44d1ef494b4842a8c447b6eff3ebe447e1153be51ab
    lastState: {}
    name: neuvector-scanner-pod
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-05-13T06:57:09Z"
  startTime: "2025-05-13T06:57:07Z"
```
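As a complementary manual check (not part of the captured output above), one can confirm that the policy still blocks non-excluded workloads with a throwaway manifest such as the one below. The pod name, namespace and image are arbitrary illustration choices (assuming the `default` namespace is covered by the policy); the expected outcome is an admission denial from `disallow-latest-and-main-tag`, since the pod uses a `latest` tag and does not carry the exclusion label.

```yaml
# Hypothetical probe manifest (arbitrary name/namespace/image) for a manual sanity check:
# it uses a ":latest" tag and has no tag-validating-policy.sylva.io=excluded label,
# so applying it should be rejected by disallow-latest-and-main-tag,
# while the Neuvector scanner pods shown above are still admitted.
apiVersion: v1
kind: Pod
metadata:
  name: latest-tag-probe
  namespace: default
spec:
  containers:
  - name: probe
    image: docker.io/library/busybox:latest
    command: ["sleep", "3600"]
```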
CI configuration
Below you can choose test deployment variants to run in this MR's CI.
Click to open the CI configuration
Legend:

| Icon | Meaning | Available values |
|---|---|---|
| ☁️ | Infra Provider | capd, capo, capm3 |
| 🚀 | Bootstrap Provider | kubeadm (alias kadm), rke2 |
| 🐧 | Node OS | ubuntu, suse |
| 🛠️ | Deployment Options | light-deploy, dev-sources, ha, misc, maxsurge-0, logging |
| 🎬 | Pipeline Scenarios | Available scenario list and description |
- 🎬 preview ☁️ capd 🚀 kadm 🐧 ubuntu
- 🎬 preview ☁️ capo 🚀 rke2 🐧 suse
- 🎬 preview ☁️ capm3 🚀 rke2 🐧 ubuntu
- ☁️ capd 🚀 kadm 🛠️ light-deploy 🐧 ubuntu
- ☁️ capd 🚀 rke2 🛠️ light-deploy 🐧 suse
- ☁️ capo 🚀 rke2 🐧 suse
- ☁️ capo 🚀 kadm 🐧 ubuntu
- ☁️ capo 🚀 rke2 🎬 rolling-update 🛠️ ha 🐧 ubuntu
- ☁️ capo 🚀 kadm 🎬 wkld-k8s-upgrade 🐧 ubuntu
- ☁️ capo 🚀 rke2 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 suse
- ☁️ capo 🚀 rke2 🎬 sylva-upgrade 🛠️ ha,misc 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🐧 suse
- ☁️ capm3 🚀 kadm 🐧 ubuntu
- ☁️ capm3 🚀 kadm 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🎬 wkld-k8s-upgrade 🛠️ ha 🐧 suse
- ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 ubuntu
- ☁️ capm3 🚀 rke2 🎬 sylva-upgrade 🛠️ misc,ha 🐧 suse
- ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 suse
Global config for deployment pipelines
- autorun pipelines
- allow failure on pipelines
- record sylvactl events
Notes:
- Enabling `autorun` will make deployment pipelines run automatically without human interaction.
- Disabling `allow failure` will make deployment pipelines mandatory for pipeline success.
- If both `autorun` and `allow failure` are disabled, deployment pipelines will need manual triggering but will block the pipeline.
Be aware: after a configuration change, the pipeline is not triggered automatically.
Please run it manually (by clicking the "Run pipeline" button in the Pipelines tab) or push new code.