Tigera operator does not apply IPPool changes made in installations.operator.tigera.io

Summary

The Tigera operator does not apply IPPool changes made in installations.operator.tigera.io when running apply.sh on a cluster (the same happens on a workload cluster). Reproduced on Sylva main and 1.4.11, both using the rke2-calico chart v3.30.300. Applying the configuration when initially deploying the clusters works.

Steps to reproduce

  1. Change an IPPool parameter in values.yaml:
calico_helm_values:
  installation:
    calicoNetwork:
      ipPools:
        - cidr: '100.72.0.0/16'
          encapsulation: VXLAN
          natOutgoing: Enabled
          disableBGPExport: true  # bogus parameter to test IPPool changes
  2. Run apply.sh

  3. apply.sh is successful and the Installation is updated (a status check is sketched after the output below):

$ k get installations.operator.tigera.io default -o yaml | yq .spec.calicoNetwork.ipPools
- allowedUses:
    - Workload
    - Tunnel
  assignmentMode: Automatic
  blockSize: 26
  cidr: 100.72.0.0/16
  disableBGPExport: true
  disableNewAllocations: false
  encapsulation: VXLAN
  name: default-ipv4-ippool
  natOutgoing: Enabled
  nodeSelector: all()
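
As an additional check (not part of the original reproduction; a hedged sketch assuming a standard Tigera operator install where TigeraStatus CRs exist), the operator's own status can be inspected right after apply.sh:

$ k get tigerastatus
# Each Calico component managed by the Tigera operator reports a TigeraStatus object;
# a degraded or missing apiserver entry here points to the problem described below.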

What is the current bug behavior?

The IPPool is not updated, and error logs are visible in the tigera-operator pod complaining that the Calico API server is unavailable. The Calico API server is in fact disabled; from my understanding it is needed for managing Calico CRs with kubectl, i.e. projectcalico.org/v3 resources as opposed to crd.projectcalico.org/v1 (https://github.com/projectcalico/calico/issues/6412):

$ k get ippool default-ipv4-ippool -o yaml | yq .spec
allowedUses:
  - Workload
  - Tunnel
assignmentMode: Automatic
blockSize: 26
cidr: 100.72.0.0/16
ipipMode: Never
natOutgoing: true
nodeSelector: all()
vxlanMode: Always
$ k -n tigera-operator logs tigera-operator-7f8cb44cf9-52cvc | grep error | tail -1 | jq .
{
  "level": "error",
  "ts": "2025-12-10T07:43:00Z",
  "logger": "controller_ippool",
  "msg": "Unable to modify IP pools while Calico API server is unavailable",
  "Request.Namespace": "",
  "Request.Name": "periodic-5m0s-reconcile-event",
  "reason": "ResourceNotReady",
  "stacktrace": "github.com/tigera/operator/pkg/controller/status.(*statusManager).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/status/status.go:356\ngithub.com/tigera/operator/pkg/controller/ippool.(*Reconciler).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/ippool/pool_controller.go:325\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:328\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:288\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:249"
}
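
To confirm that the "unavailable" message really means the Calico API server is simply not deployed (rather than crashing), the following checks can be used; the resource and APIService names assume the upstream Tigera operator / Calico API server conventions:

$ k get apiservers.operator.tigera.io
# Expected to return no resources while the API server is disabled in the chart values.
$ k get apiservice v3.projectcalico.org
# The aggregated projectcalico.org/v3 API is only registered when the Calico API server is running.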

What is the expected correct behavior?

The Tigera operator should be able to update IPPools.

From the Tigera operator ippool controller code, my understanding is that updating IPPools is only possible when the Calico API server is enabled. It is disabled by default in the RKE2 calico chart values (https://github.com/rancher/rke2-charts/blob/main/charts/rke2-calico/rke2-calico/v3.30.300/values.yaml#L36), having been disabled as part of a PR.
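
For reference, the relevant chart default looks roughly like this (paraphrased from the rke2-calico values.yaml linked above, not a verbatim copy):

apiServer:
  enabled: false  # Calico API server is off by default in the rke2-calico chart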

So the initial work for this issue would be to determine which settings are needed, besides the following, to enable the Calico API server on a cluster:

calico_helm_values:
  apiServer:
    enabled: true
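
If enabling it is enough, a rough verification sketch (not yet validated on Sylva) would be:

$ k get tigerastatus apiserver
# Should eventually report AVAILABLE=True once the API server pods are up.
$ k api-resources --api-group=projectcalico.org
# The v3 resources (ippools, bgppeers, ...) should now be served through API aggregation.
$ k get ippool default-ipv4-ippool -o yaml | yq .spec
# After the next ippool controller reconcile, the spec should reflect the Installation changes
# (e.g. disableBGPExport: true from the reproduction above).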