VM lock-up under IO load when using IO throttling
Summary
I am observing reliable VM lock-up (no further storage IO possible) when issuing a random write workload from within the VM while IO throttling is configured for the VM. Once in this state, the VM never recovers and has to be hard shut off.
This is independent of:
- The hypervisor storage backend for the VM; it occurs with both raw image files on an XFS file system and an LVM block device
- The type of IO throttling; it occurs with both IOPS and BPS limits
- The hypervisor kernel; it occurs with both the EL9 5.14 kernel and the 6.12 Oracle UEK kernel (the hypervisor is running Oracle Linux 9)
- The QEMU version; it occurs with both the distribution package qemu-kvm-9.1.0-15.el9_6.9.x86_64 and qemu-kvm-10.0.0-12.el9.x86_64 (taken from CentOS Stream 10 and rebuilt on OL9)
Without IO throttling configured, the VMs work fine, and can sustain any IO workload, with much higher IOPS/BPS than enforced by the throttling.
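For reference, the limits are applied through libvirt's `<iotune>` settings (full domain XML below). The same throttle can also be applied and inspected at runtime with virsh blkdeviotune; the commands below are a sketch matching the 250MB/s case, with this test VM's domain and target names:

```shell
# Apply the 250 MB/s read/write throttle used in the tests at runtime
# (equivalent to the <iotune> block in the domain XML below).
virsh blkdeviotune iops-test-1 vda \
    --read-bytes-sec  250000000 \
    --write-bytes-sec 250000000 \
    --live

# Show the currently active limits for the disk.
virsh blkdeviotune iops-test-1 vda
```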
When VM lock-up occurs, nothing remarkable is observed on the hypervisor: there are no storage issues, and no processes are stuck in D state.
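The D-state check on the hypervisor was done with standard tooling, along these lines (a generic sketch, nothing VM-specific):

```shell
# List processes in uninterruptible sleep (D state), together with the
# kernel function they are waiting in; an empty list means no stuck IO.
ps -eo pid,state,wchan:32,cmd | awk '$2 == "D"'
```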
I am mainly looking for information on what further data can be collected to narrow this down (from either the VM or the hypervisor side).
Hypervisor setup
The hypervisors are HPE ProLiant DL385 Gen11, with two AMD EPYC 9474F processors, and 1.5TB of RAM. They are running OL9, using the UEK kernels. VMs are managed through libvirt, vCPUs are pinned to physical hypervisor cores, IO threads are pinned to a dedicated pool of hypervisor cores. 1G hugemem pages are in use to back VM memory.
The issue occurs even with a single VM running on the hypervisor.
VM test setup
The VMs are also running OL9, using the 5.14 kernel. The IO workload is generated using fio:
fio '--filename_format=/mnt/fio-test.job-$jobnum.file-$filenum' --filesize=100GB --size=100GB --direct=1 --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 --runtime=120 --numjobs=8 --time_based --group_reporting --name=iops-test-job-engine-libaio-jobs-8-iodepth-16-alloc-fallocate --eta-newline=1 --output=iops-test-job-engine-libaio-jobs-8-iodepth-16-alloc-fallocate.json --output-format=json+ --log_avg_msec=50 --write_lat_log=iops-test-job-engine-libaio-jobs-8-iodepth-16-alloc-fallocate --write_iops_log=iops-test-job-engine-libaio-jobs-8-iodepth-16-alloc-fallocate --write_bw_log=iops-test-job-engine-libaio-jobs-8-iodepth-16-alloc-fallocate
The time for the issue to occur varies, but is never more than a few minutes; on some occasions, it occurs a few seconds after the test starts. In the following example, the VM was configured with a bandwidth limit of 250MB/s:
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][2.5%][w=238MiB/s][w=61.0k IOPS][eta 01m:57s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][4.2%][w=239MiB/s][w=61.1k IOPS][eta 01m:55s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][5.8%][w=239MiB/s][w=61.1k IOPS][eta 01m:53s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][7.5%][w=238MiB/s][w=61.1k IOPS][eta 01m:51s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][9.2%][w=238MiB/s][w=61.0k IOPS][eta 01m:49s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][10.8%][w=230MiB/s][w=58.8k IOPS][eta 01m:47s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][12.5%][eta 01m:45s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][14.2%][eta 01m:43s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][15.8%][eta 01m:41s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][17.5%][eta 01m:39s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][19.2%][eta 01m:37s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][20.8%][eta 01m:35s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][22.5%][eta 01m:33s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][24.2%][eta 01m:31s]
Jobs: 8 (f=8), 0-100000 IOPS: [w(8)][25.8%][eta 01m:29s]
In the first six status lines, fio was still able to issue writes; after that, the VM storage locked up and never recovered.
The issue only occurs with random IO. Sequential write workloads, even when they run into the bandwidth or IOPS limit, seem unaffected.
When the issue occurs, the VM itself continues to run, but no further disk IO (read or write) is possible. The VM state can still be inspected as long as no disk access is required.
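One way to inspect the stuck guest without touching the disk (an idea I have not yet fully verified in the stuck state) is a blocked-task dump via magic SysRq on the serial console; writing to /proc requires no disk IO:

```shell
# Inside the guest, on the serial console: enable SysRq, then dump all
# blocked (D-state) tasks to the kernel log / serial console.
echo 1 > /proc/sys/kernel/sysrq
echo w > /proc/sysrq-trigger
```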
VM qemu command line
/usr/libexec/qemu-kvm
-name
guest=iops-test-1,debug-threads=on
-S
-object
{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-10--iops-test/master-key.aes"}
-blockdev
{"driver":"file","filename":"/usr/share/edk2/ovmf/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}
-blockdev
{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}
-blockdev
{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/spare-bc20797e1a_VARS.fd","node-name":"libvirt-pflash1-storage","read-only":false}
-machine
pc-q35-rhel9.6.0,usb=off,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-storage,graphics=off,hpet=off,acpi=on
-accel
kvm
-cpu
host,migratable=on,topoext=on,kvm-pv-unhalt=on,kvm-poll-control=on,host-cache-info=on,l3-cache=off
-m
size=134217728k
-object
{"qom-type":"thread-context","id":"tc-pc.ram","node-affinity":[0]}
-object
{"qom-type":"memory-backend-file","id":"pc.ram","mem-path":"/dev/hugepages/libvirt/qemu/10--iops-test","x-use-canonical-path-for-ramblock-id":false,"prealloc":true,"size":137438953472,"host-nodes":[0],"policy":"bind","prealloc-context":"tc-pc.ram"}
-overcommit
mem-lock=off
-smp
16,sockets=1,dies=1,clusters=1,cores=8,threads=2
-object
{"qom-type":"iothread","id":"iothread1"}
-object
{"qom-type":"iothread","id":"iothread2"}
-object
{"qom-type":"iothread","id":"iothread3"}
-object
{"qom-type":"iothread","id":"iothread4"}
-object
{"qom-type":"iothread","id":"iothread5"}
-object
{"qom-type":"iothread","id":"iothread6"}
-uuid
ff522777-1382-5edd-8476-82218a8985d0
-smbios
type=1
-no-user-config
-nodefaults
-chardev
socket,id=charmonitor,fd=129,server=on,wait=off
-mon
chardev=charmonitor,id=monitor,mode=control
-rtc
base=utc,driftfix=slew
-global
kvm-pit.lost_tick_policy=delay
-no-shutdown
-global
ICH9-LPC.disable_s3=1
-global
ICH9-LPC.disable_s4=1
-boot
menu=on,strict=on
-device
{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}
-device
{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}
-device
{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}
-device
{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}
-device
{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"}
-device
{"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"}
-device
{"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x2.0x6"}
-device
{"driver":"pcie-root-port","port":23,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x2.0x7"}
-device
{"driver":"pcie-root-port","port":24,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x3"}
-device
{"driver":"pcie-root-port","port":25,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x3.0x1"}
-device
{"driver":"pcie-root-port","port":26,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x2"}
-device
{"driver":"pcie-root-port","port":27,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x3.0x3"}
-device
{"driver":"pcie-root-port","port":28,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x3.0x4"}
-device
{"driver":"pcie-root-port","port":29,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x3.0x5"}
-device
{"driver":"pcie-root-port","port":30,"chassis":15,"id":"pci.15","bus":"pcie.0","addr":"0x3.0x6"}
-device
{"driver":"pcie-pci-bridge","id":"pci.16","bus":"pci.1","addr":"0x0"}
-device
{"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.3","addr":"0x0"}
-device
{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"}
-blockdev
{"driver":"file","filename":"/var/lib/libvirt/images/BC20797E1A.disk","aio":"native","node-name":"libvirt-1-storage","read-only":false,"discard":"ignore","cache":{"direct":true,"no-flush":false}}
-device
{"driver":"virtio-blk-pci","num-queues":6,"iothread-vq-mapping":[{"iothread":"iothread1"},{"iothread":"iothread2"},{"iothread":"iothread3"},{"iothread":"iothread4"},{"iothread":"iothread5"},{"iothread":"iothread6"}],"bus":"pci.5","addr":"0x0","drive":"libvirt-1-storage","id":"virtio-disk0","bootindex":1,"logical_block_size":4096,"physical_block_size":4096,"write-cache":"on"}
-netdev
{"type":"tap","fds":"132:140:141:142:143:144:146:147:148:149:150:151:152:153:154:155","vhost":true,"vhostfds":"156:157:158:159:160:161:162:163:164:165:166:167:168:169:170:171","id":"hostnet0"}
-device
{"driver":"virtio-net-pci","mq":true,"vectors":34,"netdev":"hostnet0","id":"net0","mac":"b2:bc:20:79:7e:1a","bootindex":2,"bus":"pci.2","addr":"0x0"}
-chardev
pty,id=charserial0
-device
{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}
-chardev
socket,id=charchannel0,fd=124,server=on,wait=off
-device
{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}
-chardev
socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/10--iops-test-swtpm.sock
-tpmdev
emulator,id=tpm-tpm0,chardev=chrtpm
-device
{"driver":"tpm-crb","tpmdev":"tpm-tpm0","id":"tpm0"}
-device
{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}
-audiodev
{"id":"audio1","driver":"none"}
-vnc
127.0.0.1:0,websocket=5700,audiodev=audio1
-device
{"driver":"virtio-vga","id":"video0","max_outputs":1,"bus":"pcie.0","addr":"0x1"}
-device
{"driver":"i6300esb","id":"watchdog0","bus":"pci.16","addr":"0x1"}
-global
ICH9-LPC.noreboot=off
-watchdog-action
reset
-device
{"driver":"virtio-balloon-pci","id":"balloon0","free-page-reporting":true,"bus":"pci.6","addr":"0x0"}
-object
{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}
-device
{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.7","addr":"0x0"}
-sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg
timestamp=on
VM libvirt XML
<domain type='kvm' id='10'>
<name>iops-test-1</name>
<uuid>ff522777-1382-5edd-8476-82218a8985d0</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://centos.org/centos-stream/9"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>134217728</memory>
<currentMemory unit='KiB'>134217728</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
<vcpu placement='static'>16</vcpu>
<iothreads>6</iothreads>
<cputune>
<vcpupin vcpu='0' cpuset='42'/>
<vcpupin vcpu='1' cpuset='138'/>
<vcpupin vcpu='2' cpuset='43'/>
<vcpupin vcpu='3' cpuset='139'/>
<vcpupin vcpu='4' cpuset='44'/>
<vcpupin vcpu='5' cpuset='140'/>
<vcpupin vcpu='6' cpuset='45'/>
<vcpupin vcpu='7' cpuset='141'/>
<vcpupin vcpu='8' cpuset='46'/>
<vcpupin vcpu='9' cpuset='142'/>
<vcpupin vcpu='10' cpuset='47'/>
<vcpupin vcpu='11' cpuset='143'/>
<vcpupin vcpu='12' cpuset='32'/>
<vcpupin vcpu='13' cpuset='128'/>
<vcpupin vcpu='14' cpuset='33'/>
<vcpupin vcpu='15' cpuset='129'/>
<emulatorpin cpuset='0-5,96-101'/>
<iothreadpin iothread='1' cpuset='0-5,96-101'/>
<iothreadpin iothread='2' cpuset='0-5,96-101'/>
<iothreadpin iothread='3' cpuset='0-5,96-101'/>
<iothreadpin iothread='4' cpuset='0-5,96-101'/>
<iothreadpin iothread='5' cpuset='0-5,96-101'/>
<iothreadpin iothread='6' cpuset='0-5,96-101'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
</system>
</sysinfo>
<os firmware='efi'>
<type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
<firmware>
<feature enabled='no' name='enrolled-keys'/>
<feature enabled='no' name='secure-boot'/>
</firmware>
<loader readonly='yes' type='pflash' format='raw'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
<nvram template='/usr/share/edk2/ovmf/OVMF_VARS.fd' templateFormat='raw' format='raw'>/var/lib/libvirt/qemu/nvram/spare-bc20797e1a_VARS.fd</nvram>
<boot dev='hd'/>
<boot dev='network'/>
<bootmenu enable='yes'/>
<bios useserial='yes'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
<kvm>
<hidden state='off'/>
<poll-control state='on'/>
</kvm>
<pvspinlock state='on'/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
<topology sockets='1' dies='1' clusters='1' cores='8' threads='2'/>
<cache mode='passthrough'/>
<feature policy='require' name='topoext'/>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native' discard='ignore' queues='6'>
<iothreads>
<iothread id='1'/>
<iothread id='2'/>
<iothread id='3'/>
<iothread id='4'/>
<iothread id='5'/>
<iothread id='6'/>
</iothreads>
</driver>
<source file='/var/lib/libvirt/images/BC20797E1A.disk' index='1'/>
<backingStore/>
<blockio logical_block_size='4096' physical_block_size='4096'/>
<target dev='vda' bus='virtio'/>
<iotune>
<read_bytes_sec>250000000</read_bytes_sec>
<write_bytes_sec>250000000</write_bytes_sec>
</iotune>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<alias name='pci.7'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x17'/>
<alias name='pci.8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
</controller>
<controller type='pci' index='9' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='9' port='0x18'/>
<alias name='pci.9'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='10' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='10' port='0x19'/>
<alias name='pci.10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</controller>
<controller type='pci' index='11' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='11' port='0x1a'/>
<alias name='pci.11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
</controller>
<controller type='pci' index='12' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='12' port='0x1b'/>
<alias name='pci.12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
</controller>
<controller type='pci' index='13' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='13' port='0x1c'/>
<alias name='pci.13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
</controller>
<controller type='pci' index='14' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='14' port='0x1d'/>
<alias name='pci.14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
</controller>
<controller type='pci' index='15' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='15' port='0x1e'/>
<alias name='pci.15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
</controller>
<controller type='pci' index='16' model='pcie-to-pci-bridge'>
<model name='pcie-pci-bridge'/>
<alias name='pci.16'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='b2:bc:20:79:7e:1a'/>
<source network='default' portgroup='vlan-100' portid='ec35162a-d45c-4088-ab98-1f8e1a0844f9' bridge='bridge0'/>
<vlan>
<tag id='100'/>
</vlan>
<virtualport type='openvswitch'>
<parameters interfaceid='57cbd09a-2d61-4846-b831-5c036d579493'/>
</virtualport>
<target dev='vnet9'/>
<model type='virtio'/>
<driver name='vhost' queues='16'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/3'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/3'>
<source path='/dev/pts/3'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/run/libvirt/qemu/channel/10--iops-test/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
<alias name='tpm0'/>
</tpm>
<graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='virtio' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<watchdog model='i6300esb' action='reset'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x10' slot='0x01' function='0x0'/>
</watchdog>
<watchdog model='itco' action='reset'>
<alias name='watchdog1'/>
</watchdog>
<memballoon model='virtio' freePageReporting='on'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='rng0'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
Visibility from the hypervisor
On some occasions, it is possible to execute QMP commands against the stuck VM (through virsh) to gather information. On others, this is not possible, and the command times out waiting for a lock. It is always possible to kill the stuck VM with virsh destroy and restart it.
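When the monitor is still responsive, the block layer state can be pulled out over QMP via virsh qemu-monitor-command. A sketch, using this test VM's domain name (query-block and query-blockstats are standard QMP commands):

```shell
# Dump block device state, including the active throttle settings, of
# the stuck VM; only works while the QEMU monitor still answers.
virsh qemu-monitor-command iops-test-1 --pretty '{"execute": "query-block"}'

# Per-device IO statistics, to see whether request counters still move.
virsh qemu-monitor-command iops-test-1 --pretty '{"execute": "query-blockstats"}'
```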