
Add macOS virtualizationframework support

Arran Walker requested to merge ajwalker/virtualizationframework into main

This introduces the virtualizationframework hypervisor, using the macOS Virtualization.framework Go bindings provided by https://github.com/Code-Hex/vz.

The user networking stack is provided by https://github.com/Code-Hex/gvisor-vmnet, a library built on the network stack from https://gvisor.dev/gvisor.
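For orientation, here is a minimal, illustrative sketch of booting a guest through those bindings. It targets the github.com/Code-Hex/vz/v3 module path; the constructor names and signatures follow my reading of that library and may differ between releases, and a real macOS guest additionally needs a platform configuration (hardware model, machine identifier, auxiliary storage), network and graphics devices, all omitted here. Treat it as a sketch under those assumptions, not the code in this MR:

package main

import (
	"log"

	"github.com/Code-Hex/vz/v3"
)

// must keeps the sketch short: it aborts on any constructor error.
func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	// Boot loader for a macOS guest (Linux guests would use vz.NewLinuxBootLoader).
	bootLoader := must(vz.NewMacOSBootLoader())

	// 4 vCPUs, 4 GiB of RAM.
	config := must(vz.NewVirtualMachineConfiguration(bootLoader, 4, 4*1024*1024*1024))

	// Attach the cloned disk image as a virtio block device.
	attachment := must(vz.NewDiskImageStorageDeviceAttachment("disk.img", false))
	disk := must(vz.NewVirtioBlockDeviceConfiguration(attachment))
	config.SetStorageDevicesVirtualMachineConfiguration([]vz.StorageDeviceConfiguration{disk})

	// Validate flags anything still missing (for example the macOS platform
	// configuration this sketch leaves out) before the VM is created.
	must(config.Validate())

	vm := must(vz.NewVirtualMachine(config))
	if err := vm.Start(); err != nil {
		log.Fatal(err)
	}
}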

What works

  • Creating a VM
  • Deleting a VM
  • Listing VMs
  • Assigning network IP
  • VM network isolation

Migration from Tart

The existing Tart VM config and VM data can be loaded to ease migration. This means Tart can continue to be used for tooling (for use with Packer, etc.).

nesting currently uses Tart's clone subcommand to clone VMs. On macOS, this uses clonefile, so the image is cloned instantly and any modifications use the disk's copy-on-write functionality.

For the virtualizationframework hypervisor, we're (at the moment) copying the disk entirely, which is more time consuming. However, it also introduces two options: image_directory, where we expect to find "template VMs", and working_directory, where we clone the template VMs used as part of a job. In our setup today on EC2 macOS instances, we expect these paths to point at different disks, where clonefile isn't helpful anyway (it isn't supported across devices), with working_directory pointing to a volume on the internal SSD.

In addition to this, the disk.img and nvram.bin files can be archived with zstd (tar -acf archive.tar.zst disk.img nvram.bin). If archive.tar.zst is found in the image_directory, the archive is decompressed during the clone operation. This speeds up cloning an image from a slow disk to a fast disk.
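For illustration, the clone path described above could look roughly like the sketch below. This is a hedged sketch and not the implementation in this MR: the function and helper names are made up, and the real code may well extract the zstd archive in-process rather than shelling out to tar:

package main

import (
	"io"
	"os"
	"os/exec"
	"path/filepath"
)

// cloneTemplate copies a template VM from image_directory into
// working_directory. If the template ships an archive.tar.zst, it is
// extracted instead, which avoids reading the uncompressed disk.img
// from a slow source disk. Illustrative only; names are hypothetical.
func cloneTemplate(imageDir, workingDir, name string) error {
	src := filepath.Join(imageDir, name)
	dst := filepath.Join(workingDir, name)
	if err := os.MkdirAll(dst, 0o755); err != nil {
		return err
	}

	if archive := filepath.Join(src, "archive.tar.zst"); fileExists(archive) {
		// tar detects the zstd compression when extracting.
		return exec.Command("tar", "-xf", archive, "-C", dst).Run()
	}

	// No archive: fall back to a full copy of the disk and NVRAM.
	for _, f := range []string{"disk.img", "nvram.bin"} {
		if err := copyFile(filepath.Join(src, f), filepath.Join(dst, f)); err != nil {
			return err
		}
	}
	return nil
}

func fileExists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	return out.Close()
}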

How to test

  1. On an EC2 instance, unmount and re-mount the internal SSD, so it's given a known name. In production, we'd want to format this and give it our own name.
sudo diskutil unmount force /dev/disk3s4
sudo diskutil mount /dev/disk3s4

This will be mounted as /Volumes/Update.

  2. (Optional step) Compress a VM template image:
cd /Users/ec2-user/.tart/vms/an-image-we-have
tar -acf archive.tar.zst disk.img nvram.bin
rm disk.img
rm nvram.bin

Nesting will use the archived image to speed up cloning to the destination disk.

  3. virtualizationframework currently uses Tart VM configs and layout.

Create a nesting.json file with the following contents, pointing the image directory at your existing Tart VM directory:

{
  "image_directory": "/Users/ec2-user/.tart/vms",
  "working_directory": "/Volumes/Update"
}
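As an aside, decoding that file is straightforward; the struct below mirrors the two JSON keys shown above, but it is only a sketch and not nesting's actual config type or loader:

package main

import (
	"encoding/json"
	"os"
)

// Config mirrors the nesting.json example above. Illustrative only.
type Config struct {
	// Directory containing the template VMs (Tart layout).
	ImageDirectory string `json:"image_directory"`
	// Fast local disk where templates are cloned for each job.
	WorkingDirectory string `json:"working_directory"`
}

func loadConfig(path string) (Config, error) {
	var cfg Config
	b, err := os.ReadFile(path)
	if err != nil {
		return cfg, err
	}
	err = json.Unmarshal(b, &cfg)
	return cfg, err
}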
  4. Build, codesign and serve with nesting:
$ cd cmd/nesting  
$ go build
$ codesign --entitlements vz.entitlements -s - ./nesting
$ ./nesting serve --hypervisor virtualizationframework --config nesting.json 
  5. In another terminal, init and then create a VM:
$ ./nesting init     
$ time ./nesting create testing 0                
3vjux4pk testing 127.0.0.1:12345
./nesting create testing 0  0.01s user 0.01s system 0% cpu 43.881 total

Closes #3

