Uncleared file handles with default caching options cause "too many open files" on virtiofs backed by a ZFS dataset
Running on a Devuan hypervisor:
- virsh 7.0.0
- QEMU 5.2.0
- 40 Xeon Scalable CPUs, 384 GB RAM
- ZFS 2.0.3 (zfs-2.0.3-9+deb11u1)
- storage on ZFS datasets shared with the guests via virtiofs
When running a small number of VMs (six in this example, with Devuan as the guest OS), file operations in the guests leave a large number of file descriptors open on the hypervisor:
# lsof | awk '{ print $1 }' | uniq -c | sort -rn | head
191745 virtiofsd
53925 virtiofsd
44315 virtiofsd
5375 virtiofsd
4704 libvirtd
3984 virtiofsd
3825 virtiofsd
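A full lsof scan is expensive on a box with this many open handles. A lighter way to track the counts over time is to read each process's fd directory in /proc directly (a sketch, not from the original report; the fd_count helper name is mine):

```shell
#!/bin/sh
# fd_count PID: number of open file descriptors of a process, counted
# from /proc/PID/fd (helper name is hypothetical).
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l | tr -d ' '
}

# Print the fd count of every virtiofsd process, highest first.
for pid in $(pgrep virtiofsd); do
    printf '%8d virtiofsd pid=%d\n' "$(fd_count "$pid")" "$pid"
done | sort -rn
```

Run periodically (e.g. under watch), this shows whether the counts keep growing during guest file activity or plateau.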
The high file-descriptor counts occur when machines are defined with the following filesystem options:
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/zpool1/vmdata/sync/data/'/>
  <target dir='sync'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</filesystem>
and can be reduced by disabling caching:
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/lib/qemu/virtiofsd' xattr='on'>
    <cache mode='none'/>
    <lock posix='on' flock='on'/>
  </binary>
  <source dir='/zpool1/vmdata/sync/data'/>
  <target dir='sync'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</filesystem>
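Independent of the cache mode, it is worth checking what fd limit the virtiofsd processes actually run under, since hitting it is what triggers the "too many open files" errors (a sketch reading /proc; the fd_limit helper name is mine):

```shell
#!/bin/sh
# fd_limit PID: soft "Max open files" limit of a process, read from
# /proc/PID/limits (helper name is hypothetical).
fd_limit() {
    awk '/Max open files/ { print $4 }' "/proc/$1/limits"
}

# Limit of the oldest virtiofsd process; falls back to this shell if none runs.
fd_limit "$(pgrep -o virtiofsd || echo $$)"
```

If the limit turns out to be low, raising `max_files` in /etc/libvirt/qemu.conf and restarting libvirtd may serve as an interim mitigation, assuming the limit is inherited by the virtiofsd helper it spawns; the actual fix remains the cache configuration above.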