Mars: Introduce virtualization support

Brian Kocoloski requested to merge mariner into v1-staging

This MR updates mars to properly create virtual machines when requested by a materialization.

The LOC count of this MR is relatively large, but overall there isn't much here that's complicated.

The main noteworthy points:

  • plumbing of the new Tap device waypoints in the Mars apiserver. Each waypoint object is placed in a unique etcd key, which drives mariner on the hypervisor to create Tap devices for virtual links (first sketch below)
  • a few updates to how DNS names are created for an mtz. These are needed to allow the hypervisor to be on both the mars management network (10.0.0.0/16) and the harbor infranet (172.29.0.0/16), without unintended naming collisions on things like etcd/minio
    • each mtz has the following DNS entries:
      • podetcd.<mzid>: etcd created for the mtz. This is primarily used by foundry, but could have other future uses
      • foundry.<mzid>: foundry server for the mtz
      • sled.system.marstb: sled server, shared by all mtzs
      • images.system.marstb: minio server serving OS images for sled and mariner, shared by all mtzs
        • this used to be minio.system.marstb
    • additionally, the harbor mtz also has:
      • etcd.system.marstb: mars etcd
      • minio.system.marstb: mars minio
  • some canopy updates to support the virtualization API update, which changed a number of object structures
    • one noteworthy update here is the ability to add VLAN subinterfaces to a bridge, which is how links are stitched through to mariner (second sketch below)
  • update to the mars-install service to allow it to perform a hypervisor installation. This operates the same way it does on infraserver nodes: monitor etcd for /config changes, and create canopy, frr, and mariner containers based on the config
  • update to mariner to have it directly boot the EFI OS image variant rather than expecting a pre-configured qcow2 for the VM, as it did previously. The reasons for this change are:
    • prevents the need to pre-build a qcow2 image for each OS distro/version
    • allows us to boot VMs using qemu's -initrd, -kernel, and -append options, which means we can plumb infravid and inframac into the guest VM's foundry in the exact same way that sled does for bare-metal images (third sketch below)
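
For context on the waypoint plumbing, here is a minimal sketch of the hypervisor-side behavior: watch a waypoint key prefix in etcd and create a tap device for each new object. The `/waypoints/` prefix, the `Waypoint` fields, and the device naming are assumptions for illustration only, not the actual mars/mariner code.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/vishvananda/netlink"
	clientv3 "go.etcd.io/etcd/client/v3"
)

// Waypoint is a stand-in for the real apiserver object; the fields are illustrative.
type Waypoint struct {
	Name string `json:"name"` // tap device name to create on the hypervisor
	Link string `json:"link"` // virtual link this waypoint belongs to
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"etcd.system.marstb:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watch all waypoint objects; the "/waypoints/" key prefix is hypothetical.
	for resp := range cli.Watch(context.Background(), "/waypoints/", clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			if ev.Type != clientv3.EventTypePut {
				continue
			}
			var wp Waypoint
			if err := json.Unmarshal(ev.Kv.Value, &wp); err != nil {
				log.Printf("bad waypoint %s: %v", ev.Kv.Key, err)
				continue
			}
			// One tap device per waypoint, backing one end of a virtual link.
			tap := &netlink.Tuntap{
				LinkAttrs: netlink.LinkAttrs{Name: wp.Name},
				Mode:      netlink.TUNTAP_MODE_TAP,
			}
			if err := netlink.LinkAdd(tap); err != nil {
				log.Printf("create tap %s: %v", wp.Name, err)
				continue
			}
			if err := netlink.LinkSetUp(tap); err != nil {
				log.Printf("bring up tap %s: %v", wp.Name, err)
			}
		}
	}
}
```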
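
The VLAN-subinterface stitching can be sketched at the netlink level as follows; canopy's real API and object names differ, this only illustrates the mechanism.

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

// addVlanToBridge creates <parent>.<vid> and enslaves it to an existing bridge.
func addVlanToBridge(parentName, bridgeName string, vid int) error {
	parent, err := netlink.LinkByName(parentName)
	if err != nil {
		return fmt.Errorf("lookup parent %s: %w", parentName, err)
	}
	bridge, err := netlink.LinkByName(bridgeName)
	if err != nil {
		return fmt.Errorf("lookup bridge %s: %w", bridgeName, err)
	}

	vlan := &netlink.Vlan{
		LinkAttrs: netlink.LinkAttrs{
			Name:        fmt.Sprintf("%s.%d", parentName, vid),
			ParentIndex: parent.Attrs().Index,
		},
		VlanId: vid,
	}
	if err := netlink.LinkAdd(vlan); err != nil {
		return fmt.Errorf("create vlan subinterface: %w", err)
	}
	// Attach the subinterface to the bridge so tagged traffic is stitched through.
	if err := netlink.LinkSetMaster(vlan, bridge); err != nil {
		return fmt.Errorf("enslave to bridge: %w", err)
	}
	return netlink.LinkSetUp(vlan)
}

func main() {
	// Interface and bridge names here are examples only.
	if err := addVlanToBridge("eth1", "br0", 100); err != nil {
		log.Fatal(err)
	}
}
```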
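
And a rough sketch of the direct kernel boot: the kernel/initrd from the EFI image variant are handed straight to qemu, and the infranet details travel on the kernel command line for the guest's foundry to pick up. The paths, the qemu flag set, and the exact infravid=/inframac= parameter spelling here are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildQemuCmd sketches direct kernel boot of an OS image. Paths, the exact
// flag set, and the infravid=/inframac= parameter format are illustrative.
func buildQemuCmd(kernel, initrd, disk string, infravid int, inframac string) *exec.Cmd {
	// Infranet details ride on the kernel command line so the guest's foundry
	// can pick them up, mirroring what sled does for bare-metal nodes.
	cmdline := fmt.Sprintf("console=ttyS0 infravid=%d inframac=%s", infravid, inframac)

	return exec.Command("qemu-system-x86_64",
		"-machine", "q35,accel=kvm",
		"-m", "4096",
		"-nographic",
		"-kernel", kernel, // kernel taken from the EFI OS image variant
		"-initrd", initrd,
		"-append", cmdline,
		"-drive", "file="+disk+",format=raw,if=virtio",
	)
}

func main() {
	cmd := buildQemuCmd("/var/img/vmlinuz", "/var/img/initrd", "/var/img/os.raw",
		101, "04:70:00:00:00:01")
	fmt.Println(cmd.String())
}
```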

The rest is various bug fixes encountered during the development process.
