Use custom image for metallb controller

What does this MR do and why?

Use a custom image for the MetalLB controller so that it also serves Services that lack the loadBalancerClass field when MetalLB itself has one configured.

This removes the need to recreate all Services of workloads that we do not control when adopting loadBalancerClass.

Fixed code:

root@vbmh:metallb# git diff internal/k8s/controllers/service_controller.go
diff --git a/internal/k8s/controllers/service_controller.go b/internal/k8s/controllers/service_controller.go
index a804ee76..58fed03c 100644
--- a/internal/k8s/controllers/service_controller.go
+++ b/internal/k8s/controllers/service_controller.go
@@ -164,7 +164,7 @@ func filterByLoadBalancerClass(service *v1.Service, loadBalancerClass string) bo
                return false
        }
        if service.Spec.LoadBalancerClass == nil && loadBalancerClass != "" {
-               return true
+               return false
        }
        if service.Spec.LoadBalancerClass == nil && loadBalancerClass == "" {
                return false
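
For context, the decision logic after the patch can be sketched as follows. This is a minimal standalone sketch, not the upstream function: the real filterByLoadBalancerClass takes a *v1.Service, and here a *string stands in for service.Spec.LoadBalancerClass (nil meaning the field is unset). The function reports whether the controller should ignore the Service:

```go
package main

import "fmt"

// filterByLoadBalancerClass sketches the patched upstream logic:
// it reports whether a Service should be ignored by the controller.
// svcClass stands in for service.Spec.LoadBalancerClass (nil = unset);
// lbClass is the class MetalLB itself is configured with.
func filterByLoadBalancerClass(svcClass *string, lbClass string) bool {
	if svcClass == nil && lbClass != "" {
		// Patched branch: this previously returned true, so Services
		// without a loadBalancerClass were ignored whenever MetalLB
		// had one configured.
		return false
	}
	if svcClass == nil && lbClass == "" {
		return false
	}
	if svcClass != nil && *svcClass != lbClass {
		// A different load-balancer implementation owns this Service.
		return true
	}
	return false
}

func main() {
	class := "sylva.org/metallb-class"
	other := "example.com/other"
	fmt.Println(filterByLoadBalancerClass(nil, class))    // false: unset class is now served
	fmt.Println(filterByLoadBalancerClass(&class, class)) // false: matching class is served
	fmt.Println(filterByLoadBalancerClass(&other, class)) // true: foreign class is ignored
}
```

With the one-line change, only Services explicitly claimed by a different loadBalancerClass are filtered out; unset Services fall through to MetalLB as before.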

Building the new image:

root@vbmh:metallb# docker build --file controller/Dockerfile --tag registry.gitlab.com/sylva-projects/sylva-elements/container-images/sandbox-registry/metallb-controller:v0.15.2-sylva-custom .
root@vbmh:metallb# docker login registry.gitlab.com
root@vbmh:metallb# docker push registry.gitlab.com/sylva-projects/sylva-elements/container-images/sandbox-registry/metallb-controller:v0.15.2-sylva-custom

Closes #2671

Test coverage

Tested locally with a clone of the cluster-vip Service, without loadBalancerClass set:

root@jump:sylva-core# kubectl -n metallb-system get po metallb-controller-6cdc7c6589-m5vv4 -o yaml |yq .spec.containers[0].image
registry.gitlab.com/sylva-projects/sylva-elements/container-images/sandbox-registry/metallb-controller:v0.15.2-sylva-custom

root@jump:sylva-core# kubectl -n kube-system get svc cluster-vip -o yaml |yq '. | (.spec.loadBalancerClass, .status)'
sylva.org/metallb-class
loadBalancer:
  ingress:
    - ip: 192.168.18.120
      ipMode: VIP

root@jump:sylva-core# kubectl -n kube-system get svc cluster-vip-2 -o yaml |yq '. | (.spec.loadBalancerClass, .status)'
null
loadBalancer:
  ingress:
    - ip: 192.168.18.120
      ipMode: VIP

CI configuration

Below you can choose test deployment variants to run in this MR's CI.

Click to open the CI configuration

Legend:

  • ☁️ Infra Provider: capd, capo, capm3
  • 🚀 Bootstrap Provider: kubeadm (alias kadm), rke2
  • 🐧 Node OS: ubuntu, suse
  • 🛠️ Deployment Options: light-deploy, dev-sources, ha, misc, maxsurge-0, logging, no-logging
  • 🎬 Pipeline Scenarios: available scenario list and description
  • 🎬 preview ☁️ capd 🚀 kadm 🐧 ubuntu

  • 🎬 preview ☁️ capo 🚀 rke2 🐧 suse

  • 🎬 preview ☁️ capm3 🚀 rke2 🐧 ubuntu

  • ☁️ capd 🚀 kadm 🛠️ light-deploy 🐧 ubuntu

  • ☁️ capd 🚀 rke2 🛠️ light-deploy 🐧 suse

  • ☁️ capo 🚀 rke2 🐧 suse

  • ☁️ capo 🚀 kadm 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 rolling-update 🛠️ ha 🐧 ubuntu

  • ☁️ capo 🚀 kadm 🎬 wkld-k8s-upgrade 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 rolling-update-no-wkld 🛠️ ha 🐧 suse

  • ☁️ capo 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🐧 suse

  • ☁️ capm3 🚀 kadm 🐧 ubuntu

  • ☁️ capm3 🚀 kadm 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🎬 wkld-k8s-upgrade 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 rke2 🛠️ misc,ha 🐧 suse

  • ☁️ capm3 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha,misc 🐧 suse

  • ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 ck8s 🎬 no-wkld 🛠️ light-deploy 🐧 ubuntu

Global config for deployment pipelines

  • autorun pipelines
  • allow failure on pipelines
  • record sylvactl events

Notes:

  • Enabling autorun will make deployment pipelines run automatically, without human interaction.
  • Disabling allow failure will make deployment pipelines mandatory for pipeline success.
  • If both autorun and allow failure are disabled, deployment pipelines will need manual triggering but will block the overall pipeline.

Be aware: after a configuration change, the pipeline is not triggered automatically. Please run it manually (by clicking the "Run pipeline" button in the Pipelines tab) or push new code.

Edited by Cristian Manda
