
NVMe differences between QEMU v4.1.0 and v8.2.1

Host environment

  • Operating system: Amazon Linux 2
  • OS/kernel version: 5.10.209-175.812
  • Architecture: x86
  • QEMU flavor: qemu-system-x86_64
  • QEMU version: v8.2.1
  • QEMU command line: N/A

Emulated/Virtualized environment

  • Operating system: Linux
  • OS/kernel version: 5.13
  • Architecture: x86

Description of problem

We are currently upgrading QEMU from v4.1.0 to v8.2.1. To keep guest-visible compatibility between the two QEMUs, we are adding -machine pc-q35-4.1. One of our tests is to ensure that a guest that hibernated on the previous QEMU is able to resume on the new one.
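For context, a minimal sketch of the kind of invocation we use (the image path, drive ID, serial, and memory size below are placeholders, not our exact command line):

# Boot with the versioned machine type and an NVMe disk.
qemu-system-x86_64 \
    -machine pc-q35-4.1 -accel kvm -m 4096 \
    -drive file=disk.qcow2,if=none,id=nvm0 \
    -device nvme,serial=deadbeef,drive=nvm0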

When resuming, we get the following error:

[    7.394709] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[    7.926188] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[    7.938235] Read-error on swap-device (259:0:4874880)
[    7.938237] Read-error on swap-device (259:0:4620184)
[    7.938240] Read-error on swap-device (259:0:5536464)
[    7.938311] Read-error on swap-device (259:0:5006840)
[    7.938316] Read-error on swap-device (259:0:5791888)
[    7.938386] Read-error on swap-device (259:0:6579728)
[    7.938391] Read-error on swap-device (259:0:5536680)
[    7.938431] Read-error on swap-device (259:0:4877384)
[    7.938434] Read-error on swap-device (259:0:5005376)
[    7.938457] Read-error on swap-device (259:0:5269328)
[    7.939200] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    7.939267] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    7.946359] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.063186] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.069556] Aborting journal on device nvme0n1p1-8.
[    8.069561] Buffer I/O error on dev nvme0n1p1, logical block 262144, lost sync page write
[    8.069564] JBD2: Error -5 detected when updating journal superblock for nvme0n1p1-8.
[    8.081218] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.081242] Buffer I/O error on dev nvme0n1p1, logical block 0, lost sync page write
[    8.081247] EXT4-fs (nvme0n1p1): I/O error while writing superblock
[    8.147693] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.147753] Buffer I/O error on dev nvme0n1p1, logical block 0, lost sync page write
[    8.163478] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.174179] EXT4-fs (nvme0n1p1): I/O error while writing superblock
[    8.198741] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:2: reading directory lblock 0
[    8.214483] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
[    8.230322] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:2: reading directory lblock 0
[    8.246249] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    8.246269] Core dump to |/usr/share/apport/apport pipe failed
[    8.246291] Core dump to |/usr/share/apport/apport pipe failed
[    8.246336] Core dump to |/usr/share/apport/apport pipe failed
[    8.246826] Core dump to |/usr/share/apport/apport pipe failed
[    8.249232] Core dump to |/usr/share/apport/apport pipe failed
[    8.249320] Core dump to |/usr/share/apport/apport pipe failed
[    8.249880] Core dump to |/usr/share/apport/apport pipe failed

Digging through the NVMe code, I found one patch changing the BAR layout. It doesn't look like there is a way to select the previous BAR layout.
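For what it's worth, the difference should be observable from inside a freshly booted guest. Assuming the patch in question is the one that moved the MSI-X table and PBA out of BAR 0 into a separate BAR, lspci will show it (the PCI address below is an example):

# Dump the controller's BARs and MSI-X capability from inside the guest.
lspci -vv -s 00:04.0
# Under v4.1.0 the MSI-X capability points into BAR 0; under v8.2.1 an
# additional memory region appears and the MSI-X table/PBA live there,
# so a hibernation image taken on one layout resumes against the other.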

When selecting the versioned -machine type, I expected the underlying hardware (including devices) not to change. Can you clarify whether hibernating on QEMU A and resuming on QEMU B is meant to be supported?

Steps to reproduce

  1. Start the guest with QEMU v4.1.0 and an NVMe disk
  2. Hibernate the OS
  3. Resume the guest with QEMU v8.2.1 (a shell sketch of the full cycle follows)
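As shell, reusing the invocation sketched above (the two binary names are only meant to distinguish the builds; all paths and IDs are placeholders):

# Step 1: boot the guest on the old build with an NVMe disk.
qemu-system-x86_64-v4.1.0 -machine pc-q35-4.1 -accel kvm -m 4096 \
    -drive file=disk.qcow2,if=none,id=nvm0 \
    -device nvme,serial=deadbeef,drive=nvm0

# Step 2: inside the guest, suspend to disk (swap lives on the NVMe device).
systemctl hibernate

# Step 3: after the old QEMU exits, resume from the same image on the new build.
qemu-system-x86_64-v8.2.1 -machine pc-q35-4.1 -accel kvm -m 4096 \
    -drive file=disk.qcow2,if=none,id=nvm0 \
    -device nvme,serial=deadbeef,drive=nvm0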

Additional information
