Bug 2173513 - When hypervisor runs kernel >= 6.0, KVM guest fails to access mapped raw device
Summary: When hypervisor runs kernel >= 6.0, KVM guest fails to access mapped raw device
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 37
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On: 2174139
Blocks:
 
Reported: 2023-02-27 07:19 UTC by Andrea Perotti
Modified: 2023-03-01 11:22 UTC (History)
CC List: 20 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2023-03-01 11:22:05 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
Logs from guest VM when hypervisor is running kernel 5.19 (strace mkfs.xfs + mount; sosreport) (10.66 MB, application/x-tar)
2023-02-27 07:19 UTC, Andrea Perotti
Logs from guest VM when hypervisor is running kernel 6.3 (strace mkfs.xfs + mount; sosreport) (9.75 MB, application/x-tar)
2023-02-27 07:20 UTC, Andrea Perotti
sos from hypervisor (15.92 MB, application/x-xz)
2023-02-27 07:21 UTC, Andrea Perotti
reproducer VM: libvirt definition (7.74 KB, text/html)
2023-02-27 07:22 UTC, Andrea Perotti
lsblk -OJ from guest when hypervisor is running 5.19 (17.42 KB, text/plain)
2023-02-27 14:21 UTC, Andrea Perotti
lsblk -OJ from guest when hypervisor is running 6.3 (17.31 KB, text/plain)
2023-02-27 14:21 UTC, Andrea Perotti

Description Andrea Perotti 2023-02-27 07:19:37 UTC
Created attachment 1946668 [details]
Logs from guest VM when hypervisor is running kernel 5.19 (strace mkfs.xfs + mount; sosreport)

1. Please describe the problem:

After upgrading my Fedora 36 box to a 6.x kernel, KVM VMs fail to boot/install due to problems with the underlying raw storage.
The problem persists on F37 and with every 6.x kernel tested, up to 6.2.0-63.fc38 (latest tested).


This is my setup:

Physical Host:
OS: Fedora 37
hostname: silence
kernel 6.2.0-63.fc38 / 5.19.14-200.fc36
Intel Xeon CPU E5-2683 v4 @ 2.10GHz
RAID controller: Broadcom / LSI MegaRAID SAS-3 3108
virtual disk sda=2 * HITACHI HUSMM808 CLAR800 - 800Gb - SAS 12Gbps - Logical/Physical block size: 4096/512 bytes
virtual disk sdb=2 * HGST HUS726040AL4210 - 4Tb - SAS 12Gbps - Logical/Physical block size: 4096/4096 bytes

Below my reproducer:

VM Name: rhel9.1-hdd
OS: RHEL 9.1
hostname: localhost
kernel: 5.14.0-162.12.1.el9_1.x86_64
sdb - 12G - QCow2 file - boot device
sda - 325.3G - raw device, mapped to partition /dev/sdb2


All devices are mapped as SCSI disks under a VirtIO SCSI controller.


What I'm experiencing is that with host kernel 5.19 everything works perfectly. When the hypervisor boots any 6.x kernel -
this started with 6.0 and is still the case with kernel 6.2.0 from Fedora 38 - the raw devices are still recognized in the VM as sda and sdb,
but trying to format them with XFS fails with errors, e.g.:

# strace -f -ttt -T -v -x -y -s 4096 -o host-6.3.0_mkfs.xfs-sda1-raw.strace mkfs.xfs -f -K /dev/sda1
meta-data=/dev/sda1              isize=512    agcount=4, agsize=21317311 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=85269243, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=41635, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
error reading existing superblock: Remote I/O error
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0x28a8d6de/0x100, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0x0/0x100, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on xfs_sb bno 0x0/0x1, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0x14546c20/0x2, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: read failed: Remote I/O error
mkfs.xfs: data size check failed
mkfs.xfs: filesystem failed to initialize

If I format and mount them as XFS in the VM while the host runs 5.19, then reboot the host into 6.x and try to mount them again, mount fails with errors.

NOTE: On the hypervisor, partition sda4 is used as a VDO device named kvm; most VM disks live there. **VDO is not the problem**: the reproducer demonstrates that the issue
is also present when mapping a raw partition, and various tests conducted without VDO show the same issue consistently.

If anyone can take a look at it, or has any further test to suggest, I'd be grateful: since the upgrade to 6.0 this has been a big problem,
because I'm stuck on an old 5.19 kernel, with no further security or bug fixes.

2. What is the Version-Release number of the kernel:
   6.2.0-63.fc38
   6.3.0-0.rc0.20230223gita5c95ca18a98.4.fc39

3. Did it work previously in Fedora? If so, what kernel version did the issue
   *first* appear?  Old kernels are available for download at
   https://koji.fedoraproject.org/koji/packageinfo?packageID=8 :

   kernel-6.0.5-200.fc36.x86_64

4. Can you reproduce this issue? If so, please provide the steps to reproduce
   the issue below:

   a) boot hypervisor system with kernel >= 6.0
   b) create a VM with storage mapped to a block device from hypervisor
   c) try to format volume from inside the VM: it will fail
   <or>
   c) try to mount in the VM a volume that was formatted while the hypervisor was running 5.19: it will also fail
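
   For step b), the disk section of the reproducer VM would look roughly like the fragment below. This is a minimal sketch, not the attached libvirt definition: /dev/sdb2 and the cache/io settings are taken from the qemu command line in comment 8, everything else is illustrative.

```xml
<!-- VirtIO SCSI controller plus a raw host block device attached to it;
     source path and cache/io settings follow the qemu command line in
     comment 8, the rest is a minimal sketch -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```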

5. Does this problem occur with the latest Rawhide kernel? To install the
   Rawhide kernel, run ``sudo dnf install fedora-repos-rawhide`` followed by
   ``sudo dnf update --enablerepo=rawhide kernel``:

   Yes, it is still present.
   Note: with kernel 6.3 the kvdo kmod does not build, but the issue is still reproducible when the raw device is backed by a partition rather than an lvm2 volume.
   On 6.2, vdo builds, and both lvm2 volumes and partitions suffer the same issue.

6. Are you running any modules that are not shipped directly with Fedora's kernel?:

   Most recent tests with 6.3 are just pure Fedora.
   On 6.2 I used to have kmod-kvdo from copr:copr.fedorainfracloud.org:rhawalsh:dm-vdo .

7. Please attach the kernel logs. You can get the complete kernel log
   for a boot with ``journalctl --no-hostname -k > dmesg.txt``. If the
   issue occurred on a previous boot, use the journalctl ``-b`` flag.

   Attached you can find:
   - host_kernel_5.19.14-200.tar - Logs from guest VM when hypervisor is running kernel 5.19 (strace mkfs.xfs + mount; sosreport)
   - host_kernel_6.3.0.tar - All logs collected via sos from guest VM when hypervisor was running kernel 6.3, and strace + output of mkfs.xfs and mount
   - sosreport-silence-2023-02-27-gkvbsfv.tar.xz - sos from hypervisor
   - rhel9.1-hdd.xml: VM Definition

Comment 1 Andrea Perotti 2023-02-27 07:20:32 UTC
Created attachment 1946669 [details]
Logs from guest VM when hypervisor is running kernel 6.3 (strace mkfs.xfs + mount; sosreport)

Comment 2 Andrea Perotti 2023-02-27 07:21:20 UTC
Created attachment 1946671 [details]
sos from hypervisor

Comment 3 Andrea Perotti 2023-02-27 07:22:26 UTC
Created attachment 1946672 [details]
reproducer VM: libvirt definition

Comment 4 Andrea Perotti 2023-02-27 14:21:16 UTC
Created attachment 1946729 [details]
lsblk -OJ from guest when hypervisor is running 5.19

Comment 5 Andrea Perotti 2023-02-27 14:21:51 UTC
Created attachment 1946730 [details]
lsblk -OJ from guest when hypervisor is running 6.3

Comment 6 Andrea Perotti 2023-02-27 14:26:52 UTC
diff lsblk -OJ output from guest, with different kernels running on hypervisor:


# diff host_kernel_5.19.14-200/host-5.19_lsblk-OJ host_kernel_6.3.0/host-6.3.0_lsblk-OJ 
73c73
<                "fstype": "xfs",
---
>                "fstype": null,
85c85
<                "uuid": "7a06eb98-8d3b-4bb6-897f-6c4234f1df18",
---
>                "uuid": null,
88,89c88,89
<                "parttype": "0fc63daf-8483-4772-8e79-3d69d8477de4",
<                "parttypename": "Linux filesystem",
---
>                "parttype": null,
>                "parttypename": null,
91c91
<                "partuuid": "69e2b2fe-2a04-8a43-b6cd-a0a922c9d560",
---
>                "partuuid": null

Comment 7 Stefan Hajnoczi 2023-02-28 17:12:58 UTC
I guess that the guest kernel SCSI code is returning BLK_STS_TARGET and the Linux block layer turns that into EREMOTEIO.

Nothing in the sosreport stood out.

Please try:

  (guest)# perf record -e scsi:\* mkfs.xfs -f -K /dev/sda1
  (guest)# perf script

This will print out the guest kernel SCSI trace events including errors. That will confirm whether QEMU's virtio-scsi device is returning errors to the guest or whether something inside the guest is returning errors.
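
As a side note, the err=121 in the libxfs_bwrite messages is EREMOTEIO, the errno the C library renders as "Remote I/O error" (the same string mkfs.xfs printed). A quick sanity check, assuming python3 is available in the guest:

```shell
# err=121 from libxfs_bwrite is EREMOTEIO, which strerror() renders as
# "Remote I/O error" -- the same string mkfs.xfs printed in the guest.
errnum=$(python3 -c 'import errno; print(errno.EREMOTEIO)')
errname=$(python3 -c 'import errno, os; print(os.strerror(errno.EREMOTEIO))')
echo "errno $errnum = $errname"   # errno 121 = Remote I/O error
```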

Comment 8 Andrea Perotti 2023-03-01 09:22:28 UTC
# rpm -qa | egrep "qemu-kvm|seabios|ovmf|libvirt-daemon-8"
libvirt-daemon-8.6.0-5.fc37.x86_64
seabios-bin-1.16.1-2.fc37.noarch
qemu-kvm-core-7.0.0-13.fc37.x86_64
qemu-kvm-7.0.0-13.fc37.x86_64
edk2-ovmf-20221117gitfff6d81270b5-14.fc37.noarch

~~~

The boot disk "/iso/rhel91-hdd.qcow2" is a qcow2 file stored on XFS.

[hypervisor]
virtual disk sdb=2 * HGST HUS726040AL4210 - 4Tb - SAS 12Gbps - Logical/Physical block size: 4096/4096 bytes

sdb                  8:16   0   3.6T  0 disk 
├─sdb1               8:17   0   3.3T  0 part /raid1
└─sdb2               8:18   0    12G  0 part 

# ll /iso
lrwxrwxrwx. 1 root root 11 Feb 26 11:35 /iso -> /raid1/iso/


The test disk in the guest, /dev/sda1, is a volume mapped to partition /dev/sdb2 on that same disk.

Both are mapped as SCSI disks through the VirtIO SCSI controller.

~~~

Guest VM run:

/usr/bin/qemu-system-x86_64 -name guest=rhel9.1-hdd,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-rhel9.1-hdd/master-key.aes"} -blockdev {"driver":"file","filename":"/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/rhel9.1-hdd_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine pc-q35-7.0,usb=off,vmport=off,smm=on,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=pc.ram -accel kvm -cpu host,migratable=on -global driver=cfi.pflash01,property=secure,value=on -m 2048 -object {"qom-type":"memory-backend-ram","id":"pc.ram","size":2147483648} -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid fbc0c0db-eb3b-4cb5-890e-88136beeb3cd -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=23,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot menu=on,strict=on -device {"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"} -device {"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"} -device {"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"} -device {"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"} -device 
{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"} -device {"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"} -device {"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x2.0x6"} -device {"driver":"pcie-root-port","port":23,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x2.0x7"} -device {"driver":"pcie-root-port","port":24,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x3"} -device {"driver":"pcie-root-port","port":25,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x3.0x1"} -device {"driver":"pcie-root-port","port":26,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x2"} -device {"driver":"pcie-root-port","port":27,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x3.0x3"} -device {"driver":"pcie-root-port","port":28,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x3.0x4"} -device {"driver":"pcie-root-port","port":29,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x3.0x5"} -device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} -device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"} -device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"} -device {"driver":"ide-cd","bus":"ide.0","id":"sata0-0-0"} -blockdev {"driver":"file","filename":"/iso/rhel91-hdd.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","driver":"qcow2","file":"libvirt-2-storage","backing":null} -device {"driver":"scsi-hd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":1,"device_id":"drive-scsi0-0-0-1","drive":"libvirt-2-format","id":"scsi0-0-0-1","bootindex":1} -blockdev {"driver":"host_device","filename":"/dev/sdb2","aio":"native","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev 
{"node-name":"libvirt-1-format","read-only":false,"discard":"unmap","cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"} -device {"driver":"scsi-hd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":3,"device_id":"drive-scsi0-0-0-3","drive":"libvirt-1-format","id":"scsi0-0-0-3","write-cache":"on"} -chardev pty,id=charserial0 -device {"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0} -chardev socket,id=charchannel0,fd=22,server=on,wait=off -device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} -chardev spicevmc,id=charchannel1,name=vdagent -device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":2,"chardev":"charchannel1","id":"channel1","name":"com.redhat.spice.0"} -device {"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"} -audiodev {"id":"audio1","driver":"spice"} -spice port=5900,addr=127.0.0.1,disable-ticketing=on,seamless-migration=on -device {"driver":"VGA","id":"video0","vgamem_mb":16,"bus":"pcie.0","addr":"0x1"} -chardev spicevmc,id=charredir0,name=usbredir -device {"driver":"usb-redir","chardev":"charredir0","id":"redir0","bus":"usb.0","port":"2"} -chardev spicevmc,id=charredir1,name=usbredir -device {"driver":"usb-redir","chardev":"charredir1","id":"redir1","bus":"usb.0","port":"3"} -device {"driver":"vfio-pci","host":"0000:02:10.0","id":"hostdev0","bus":"pci.1","addr":"0x0"} -device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.5","addr":"0x0"} -object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"} -device {"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.6","addr":"0x0"} -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on

~~~

[root@localhost ~]# perf record -e scsi:\* mkfs.xfs -f -K /dev/sda1
meta-data=/dev/sda1              isize=512    agcount=4, agsize=786367 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=3145467, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
error reading existing superblock: Remote I/O error
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0x17ff6de/0x100, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0x0/0x100, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on xfs_sb bno 0x0/0x1, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Remote I/O error
libxfs_bwrite: write failed on (unknown) bno 0xbffc20/0x2, err=121
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: read failed: Remote I/O error
mkfs.xfs: data size check failed
mkfs.xfs: filesystem failed to initialize
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.016 MB perf.data (12 samples) ]
[root@localhost ~]# perf script > perf_script_6.3.log
[root@localhost ~]# cat perf_script_6.3.log 
        mkfs.xfs  1343 [000]   576.958448:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=16 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867384 txlen=152 protect=0 raw=28 00 00 f2 1d f8 00 00 98 00)
        mkfs.xfs  1343 [000]   576.958826:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=3 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866648 txlen=32 protect=0 raw=28 00 00 f2 1b 18 00 00 20 00)
        mkfs.xfs  1343 [000]   576.958989:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=3 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779984 txlen=32 protect=0 raw=28 00 00 f0 c8 90 00 00 20 00)
        mkfs.xfs  1343 [000]   576.959062:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=6 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15780016 txlen=56 protect=0 raw=28 00 00 f0 c8 b0 00 00 38 00)
        mkfs.xfs  1343 [000]   576.959257:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779584 txlen=32 protect=0 raw=28 00 00 f0 c7 00 00 00 20 00)
        mkfs.xfs  1343 [000]   576.959328:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779616 txlen=16 protect=0 raw=28 00 00 f0 c7 20 00 00 10 00)
        mkfs.xfs  1343 [000]   576.959698:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=21 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866832 txlen=256 protect=0 raw=28 00 00 f2 1b d0 00 01 00 00)
        mkfs.xfs  1343 [000]   576.959786:    scsi:scsi_dispatch_cmd_done: host_no=0 channel=0 id=0 lun=1 data_sgl=21 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866832 txlen=256 protect=0 raw=28 00 00 f2 1b d0 00 01 00 00) result=(driver=DRIVER_OK host=0x0 message=COMMAND_COMPLETE status=0x0)
        mkfs.xfs  1343 [000]   576.959827:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=19 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867216 txlen=168 protect=0 raw=28 00 00 f2 1d 50 00 00 a8 00)
        mkfs.xfs  1343 [000]   576.960698:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=13 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867088 txlen=128 protect=0 raw=28 00 00 f2 1c d0 00 00 80 00)
        mkfs.xfs  1343 [000]   576.961047:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2048 txlen=1 protect=0 raw=28 00 00 00 08 00 00 00 01 00)
        mkfs.xfs  1343 [000]   577.004799:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=25165783 txlen=1 protect=0 raw=28 00 01 7f ff d7 00 00 01 00)
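
As an aside, the CDBs in these trace lines can be decoded by hand: for READ_10, bytes 2-5 (0-based) hold the big-endian LBA and bytes 7-8 the transfer length in blocks. A small sketch against the first trace line above:

```shell
# Decode the READ_10 CDB from the first trace line above: bytes 2-5
# (0-based) are the big-endian LBA, bytes 7-8 the transfer length.
cdb="28 00 00 f2 1d f8 00 00 98 00"
set -- $cdb
lba=$(( (0x$3 << 24) | (0x$4 << 16) | (0x$5 << 8) | 0x$6 ))
txlen=$(( (0x$8 << 8) | 0x$9 ))
echo "lba=$lba txlen=$txlen"   # lba=15867384 txlen=152, matching the trace
```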

Comment 9 Andrea Perotti 2023-03-01 09:24:54 UTC
PS: the last mkfs.xfs did not produce any error or message on the hypervisor.


I'm going to test the qemu build from #2174139

Comment 10 Andrea Perotti 2023-03-01 09:52:26 UTC
# rpm -qa | egrep "qemu-kvm|seabios|ovmf|libvirt-daemon-8"
libvirt-daemon-8.6.0-5.fc37.x86_64
seabios-bin-1.16.1-2.fc37.noarch
edk2-ovmf-20221117gitfff6d81270b5-14.fc37.noarch
qemu-kvm-7.0.0-14.fc37.x86_64       <-- #2174139
qemu-kvm-core-7.0.0-14.fc37.x86_64  <-- #2174139

kernel-6.3.0-0.rc0.20230223gita5c95ca18a98.4.fc39.x86_64

~~~

# perf record -e scsi:\* mkfs.xfs -f -K /dev/sda1
meta-data=/dev/sda1              isize=512    agcount=4, agsize=786367 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=3145467, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.018 MB perf.data (35 samples) ]
[root@localhost ~]# perf script > perf_script_6.3_BZ#2174139.log
[root@localhost ~]# cat perf_script_6.3_BZ#2174139.log 
        mkfs.xfs  1340 [001]    84.785483:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=16 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867384 txlen=152 protect=0 raw=28 00 00 f2 1d f8 00 00 98 00)
        mkfs.xfs  1340 [001]    84.785898:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866648 txlen=32 protect=0 raw=28 00 00 f2 1b 18 00 00 20 00)
        mkfs.xfs  1340 [001]    84.786058:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779984 txlen=32 protect=0 raw=28 00 00 f0 c8 90 00 00 20 00)
        mkfs.xfs  1340 [001]    84.786146:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=5 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15780016 txlen=56 protect=0 raw=28 00 00 f0 c8 b0 00 00 38 00)
        mkfs.xfs  1340 [001]    84.786344:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=4 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779584 txlen=32 protect=0 raw=28 00 00 f0 c7 00 00 00 20 00)
        mkfs.xfs  1340 [001]    84.786417:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15779616 txlen=16 protect=0 raw=28 00 00 f0 c7 20 00 00 10 00)
        mkfs.xfs  1340 [001]    84.786794:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=32 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866832 txlen=256 protect=0 raw=28 00 00 f2 1b d0 00 01 00 00)
        mkfs.xfs  1340 [001]    84.786895:    scsi:scsi_dispatch_cmd_done: host_no=0 channel=0 id=0 lun=1 data_sgl=32 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15866832 txlen=256 protect=0 raw=28 00 00 f2 1b d0 00 01 00 00) result=(driver=DRIVER_OK host=0x0 message=COMMAND_COMPLETE status=0x0)
        mkfs.xfs  1340 [001]    84.786949:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=21 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867216 txlen=168 protect=0 raw=28 00 00 f2 1d 50 00 00 a8 00)
        mkfs.xfs  1340 [001]    84.787827:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=1 data_sgl=16 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=15867088 txlen=128 protect=0 raw=28 00 00 f2 1c d0 00 00 80 00)
        mkfs.xfs  1340 [001]    84.788063:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2048 txlen=1 protect=0 raw=28 00 00 00 08 00 00 00 01 00)
        mkfs.xfs  1340 [001]    84.830904:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=25165783 txlen=1 protect=0 raw=28 00 01 7f ff d7 00 00 01 00)
        mkfs.xfs  1340 [001]    84.856548:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2049 txlen=1 protect=0 raw=28 00 00 00 08 01 00 00 01 00)
        mkfs.xfs  1340 [001]    84.856659:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2051 txlen=1 protect=0 raw=28 00 00 00 08 03 00 00 01 00)
        mkfs.xfs  1340 [001]    84.856766:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2064 txlen=8 protect=0 raw=28 00 00 00 08 10 00 00 08 00)
        mkfs.xfs  1340 [001]    84.856886:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2056 txlen=8 protect=0 raw=28 00 00 00 08 08 00 00 08 00)
        mkfs.xfs  1340 [001]    84.856999:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=6292985 txlen=1 protect=0 raw=28 00 00 60 05 f9 00 00 01 00)
        mkfs.xfs  1340 [001]    84.857105:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=6292987 txlen=1 protect=0 raw=28 00 00 60 05 fb 00 00 01 00)
        mkfs.xfs  1340 [001]    84.857215:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=6293000 txlen=8 protect=0 raw=28 00 00 60 06 08 00 00 08 00)
        mkfs.xfs  1340 [001]    84.857323:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=6292992 txlen=8 protect=0 raw=28 00 00 60 06 00 00 00 08 00)
        mkfs.xfs  1340 [001]    84.857432:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=12583921 txlen=1 protect=0 raw=28 00 00 c0 03 f1 00 00 01 00)
        mkfs.xfs  1340 [001]    84.857535:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=12583923 txlen=1 protect=0 raw=28 00 00 c0 03 f3 00 00 01 00)
        mkfs.xfs  1340 [001]    84.857639:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=12583936 txlen=8 protect=0 raw=28 00 00 c0 04 00 00 00 08 00)
        mkfs.xfs  1340 [001]    84.857747:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=12583928 txlen=8 protect=0 raw=28 00 00 c0 03 f8 00 00 08 00)
        mkfs.xfs  1340 [001]    84.857864:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=18874857 txlen=1 protect=0 raw=28 00 01 20 01 e9 00 00 01 00)
        mkfs.xfs  1340 [001]    84.857971:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=18874859 txlen=1 protect=0 raw=28 00 01 20 01 eb 00 00 01 00)
        mkfs.xfs  1340 [001]    84.858076:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=18874872 txlen=8 protect=0 raw=28 00 01 20 01 f8 00 00 08 00)
        mkfs.xfs  1340 [001]    84.858184:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=18874864 txlen=8 protect=0 raw=28 00 01 20 01 f0 00 00 08 00)
        mkfs.xfs  1340 [001]    84.858296:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2050 txlen=1 protect=0 raw=28 00 00 00 08 02 00 00 01 00)
        mkfs.xfs  1340 [001]    84.858460:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2072 txlen=8 protect=0 raw=28 00 00 00 08 18 00 00 08 00)
        mkfs.xfs  1340 [001]    84.858570:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=2 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2080 txlen=8 protect=0 raw=28 00 00 00 08 20 00 00 08 00)
        mkfs.xfs  1340 [001]    84.858678:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2048 txlen=1 protect=0 raw=28 00 00 00 08 00 00 00 01 00)
        mkfs.xfs  1340 [001]    84.858813:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=18874856 txlen=1 protect=0 raw=28 00 01 20 01 e8 00 00 01 00)
        mkfs.xfs  1340 [001]    84.858930:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=6292984 txlen=1 protect=0 raw=28 00 00 60 05 f8 00 00 01 00)
        mkfs.xfs  1340 [001]    84.862480:   scsi:scsi_dispatch_cmd_start: host_no=0 channel=0 id=0 lun=3 data_sgl=1 prot_sgl=0 prot_op=0x0 cmnd=(READ_10 lba=2048 txlen=1 protect=0 raw=28 00 00 00 08 00 00 00 01 00)


guest dmesg:

[  274.357540] XFS (sda1): Mounting V5 Filesystem
[  274.366748] XFS (sda1): Ending clean mount

mount:

/dev/sda1 on /mnt/sda1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Comment 11 Andrea Perotti 2023-03-01 11:22:05 UTC
With the release of a qemu build including the missing patch (see BZ #2174139),
I've been able to use kernel >= 6.0 on the hypervisor with raw devices (partitions, LVM, LVM on top of VDO) mapped via VirtIO SCSI:
both filesystem creation and the installation of new OSes from scratch work as before.

Thanks everybody for hints and attention: BZ closed.

