Bug 2041757 - Failed format on pass-through FC scsi-block disk in windows guest
Summary: Failed format on pass-through FC scsi-block disk in windows guest
Keywords:
Status: CLOSED DUPLICATE of bug 2022656
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: x86_64
OS: Windows
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: qing.wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-18 08:48 UTC by qing.wang
Modified: 2022-02-23 01:32 UTC (History)
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-21 08:54:51 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-108414 0 None None None 2022-01-18 08:58:28 UTC

Description qing.wang 2022-01-18 08:48:37 UTC
Description of problem:
An FC disk is passed through to the guest as scsi-block.
A quick NTFS format of that disk in the Windows guest fails.



Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-39.el9.x86_64
qemu-kvm-6.2.0-3.el9.x86_64
seabios-bin-1.15.0-1.el9.noarch
edk2-ovmf-20210527gite1999b264f1f-7.el9.noarch
virtio-win-prewhql-0.1-215.iso

How reproducible:
80%

Steps to Reproduce:
1. Clean the FC disk (/dev/sdb):
dev=/dev/sdb
lsblk $dev
echo -e "rm 1\nquit" | parted $dev
dd if=/dev/zero of=$dev bs=1M count=1000 oflag=direct
sleep 2; partprobe; sleep 2
lsblk $dev

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb      8:16   0  200G  0 disk 
└─sdb1   8:17   0  200G  0 part 
GNU Parted 3.4
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) rm 1                                                             
(parted) quit                                                             
Information: You may need to update /etc/fstab.

1000+0 records in                                                         
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 4.31856 s, 243 MB/s
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb    8:16   0  200G  0 disk 

2. Boot the VM with the FC disk passed through as scsi-block:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 12288 \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-pci-bridge-0,addr=0x1,iothread=iothread1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=native,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2.205,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device scsi-hd,id=stg0,drive=drive_image1,bootindex=1 \
    \
    -blockdev node-name=host_device_stg0,driver=host_device,auto-read-only=on,discard=unmap,aio=native,filename=/dev/sdb,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=host_device_stg0 \
    -device scsi-block,id=stg1,drive=drive_stg0,bootindex=2 \
    \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:28:26:fd:b1:df,id=idb0GbGI,netdev=idB0ljr7,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idB0ljr7,vhost=on \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=native,filename=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-198.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=3,bus=ide.0,unit=0  \
    -vnc :5  \
    -monitor stdio \
    -qmp tcp:0:5955,server=on,wait=off \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5

3. Run Disk Management in the guest:
bring the disk online, initialize it with an MBR (msdos) partition table,
and format it with NTFS (quick format). A scripted diskpart equivalent is sketched below.
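
For reference, a minimal diskpart sketch of the same guest-side operations (disk number 1 is an assumption, check "list disk" for the pass-through LUN; the original test used the Disk Management GUI):

rem format_fc_disk.txt - run inside the Windows guest with: diskpart /s format_fc_disk.txt
rem disk 1 is assumed to be the pass-through scsi-block LUN
select disk 1
online disk noerr
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs quick
assign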

Actual results:
The format fails.

Expected results:
The format succeeds.

Additional info:
  The issue is not hit in a RHEL guest.
  The issue is not hit on emulated devices such as scsi_debug and iSCSI; it looks like it may be
related to this specific hardware.
  The issue is not hit when the disk is passed through as scsi-hd.

FC disk:
root@dell-per440-07 /home/kvm_autotest_root/images $ lspci|grep Fib
65:00.0 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)
65:00.1 Fibre Channel: Emulex Corporation LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter (rev 30)
sdb    8:16   0  200G  0 disk 
sdc    8:32   0  200G  0 disk

Comment 1 Peixiu Hou 2022-01-18 11:24:07 UTC
Hit the same issue with a virtio-blk-pci device + a 100G LIO-ORG volume: the disk cannot be formatted to NTFS normally. It cannot be reproduced with a 10G volume.

command line as follows:

/usr/libexec/qemu-kvm -name guest=instance-00000007,debug-threads=on \
	-machine pc-i440fx-rhel7.6.0,usb=off,dump-guest-core=off \
	-accel kvm -cpu host,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off \
	-m 8192 \
	-overcommit mem-lock=off \
	-smp 4,sockets=4,dies=1,cores=1,threads=1 \
	-uuid 0905830f-496c-40b4-bd7e-7c00cb350fd0 \
	-smbios type=1,manufacturer="OpenStack Foundation",product="OpenStack Nova",version=24.0.1,serial=0905830f-496c-40b4-bd7e-7c00cb350fd0,uuid=0905830f-496c-40b4-bd7e-7c00cb350fd0,family="Virtual Machine" \
	-no-user-config -nodefaults \
	-rtc base=utc,driftfix=slew \
	-global kvm-pit.lost_tick_policy=delay \
	-no-hpet -no-shutdown -boot strict=on \
	-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
	-blockdev '{"driver":"file","filename":"/home/kvm_autotest_root/images/win2022-64-virtio.qcow2","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
	-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage"}' \
	-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on \
	-blockdev '{"driver":"host_device","filename":"/dev/sdf","aio":"native","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
	-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
	-device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-1-format,id=virtio-disk1,write-cache=on,serial=e520a974-4bbf-4f0c-ace3-475d44be83bb \
	-netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:59:f6:7a,bus=pci.0,addr=0x3 \
	-device usb-tablet,id=input0,bus=usb.0,port=1 \
	-vnc :1 \
	-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
	-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
	-device vmcoreinfo \
	-cdrom /home/kvm_autotest_root/iso/windows/virtio-win-1.9.19-5.el9.iso \
        -msg timestamp=on


# lsscsi
[0:0:0:0]    disk    Generic- SD/MMC CRW       1.00  /dev/sda 
[1:0:0:0]    enclosu HPE      Smart Adapter    1.66  -        
[1:1:0:0]    disk    HPE      LOGICAL VOLUME   1.66  /dev/sdb 
[1:2:0:0]    storage HPE      P408i-a SR Gen10 1.66  -        
[2:0:0:0]    disk    LIO-ORG  disk1            4.0   /dev/sdc 
[2:0:0:1]    disk    LIO-ORG  disk2            4.0   /dev/sdd 
[3:0:0:0]    disk    LIO-ORG  disk1            4.0   /dev/sde 
[3:0:0:1]    disk    LIO-ORG  disk2            4.0   /dev/sdh 
[3:0:0:2]    disk    LIO-ORG  disk3            4.0   /dev/sdg 
[3:0:0:3]    disk    LIO-ORG  disk4            4.0   /dev/sdf 

Tested with virtio-scsi-pci + a 100G LIO-ORG disk, we cannot reproduce this issue; the disk format finishes successfully.

Used version:
qemu-kvm-6.1.0-5.el9.x86_64
kernel-5.14.0-21.el9.x86_64
virtio-win-prewhql-215
seabios-bin-1.14.0-7.el9.noarch

Best Regards~
Peixiu

Comment 2 qing.wang 2022-01-19 09:20:36 UTC
If only a 4G volume is created in the comment #0 test, it passes when the virtio-win-prewhql version is earlier than 205.
I also tested virtio-blk-pci as in comment #1, but could not reproduce the issue.

Comment 3 Klaus Heinrich Kiwi 2022-01-19 17:14:12 UTC
Is this a new test? If so, at what version did it last succeed?

Kevin, I'm tentatively assigning it to you, but let me know if you think there are better candidates or if your plate is already too full.

Comment 4 Kevin Wolf 2022-01-19 18:02:31 UTC
scsi-block is for Paolo, usually, so moving it to him for now.

But do I understand correctly that this issue happens only on Windows? Then it could be a Windows driver bug, CCing Vadim as well.

Comment 5 Vadim Rozenfeld 2022-01-20 00:05:37 UTC
This issue might be caused by the following commit  https://github.com/virtio-win/kvm-guest-drivers-windows/commit/c62a8a2c7bf73541fcfd6bc43bcf85147abf2620#diff-7878e1a52e8fb5a4d75b959b370267e82c07452c4dea6410c6cf32d68986207e
and should be fixed with https://bugzilla.redhat.com/show_bug.cgi?id=2022656

Peixiu, can you please check if adding ",max_sectors=64" parameter solves the problem?

Thanks,
Vadim.

Comment 6 qing.wang 2022-01-20 09:31:29 UTC
Thanks, Vadim Rozenfeld.

It works after adding the ",max_sectors=64" option:

-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-pci-bridge-0,addr=0x1,iothread=iothread1,max_sectors=64 \

So I have some questions:
1. It looks like this is related to specific hardware; I cannot reproduce it on an emulated iSCSI disk.
When should this option be added, and what value should it have?

2. There is no issue in a RHEL guest; why is there an issue in a Windows guest?

3. If we have to add this option, do we need documentation?

Comment 7 Peixiu Hou 2022-01-20 09:50:45 UTC
(In reply to Vadim Rozenfeld from comment #5)
> This issue might be caused by the following commit 
> https://github.com/virtio-win/kvm-guest-drivers-windows/commit/
> c62a8a2c7bf73541fcfd6bc43bcf85147abf2620#diff-
> 7878e1a52e8fb5a4d75b959b370267e82c07452c4dea6410c6cf32d68986207e
> and should be fixed with https://bugzilla.redhat.com/show_bug.cgi?id=2022656
> 
> Peixiu, can you please check if adding ",max_sectors=64" parameter solves
> the problem?
> 
For the virtio-scsi part, thanks to qing.wang's testing.
For virtio-blk, the max_sectors parameter does not seem to be supported; it hits the following error:
2022-01-20T09:37:22.133360Z qemu-kvm: -device virtio-blk-pci,bus=pci.0,addr=0x4,max_sectors=64,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on: Property 'virtio-blk-pci.max_sectors' not found

Also for virtio-blk, it works with a 10G volume disk but not with a 100G volume disk; it seems to be a different issue. Do we need to file another bz to track it? Thanks~

> Thanks,
> Vadim.

Comment 8 Vadim Rozenfeld 2022-01-20 11:20:24 UTC
(In reply to qing.wang from comment #6)
> Thanks, Vadim Rozenfeld.
> 
> It works after adding the ",max_sectors=64" option:
> 
> -device
> virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-pci-bridge-0,addr=0x1,
> iothread=iothread1,max_sectors=64 \
> 
> So I have some questions:
> 1. It looks like this is related to specific hardware; I cannot reproduce it
> on an emulated iSCSI disk.
> When should this option be added, and what value should it have?

In the case of scsi-block, the best thing is to check the max_segments
value of the specific block device:
cat /sys/block/sdb/queue/max_segments

and adjust the QEMU ",max_sectors" parameter accordingly.
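
A minimal host-side sketch of that check, assuming /dev/sdb is the pass-through LUN as in comment #0 (feeding max_segments directly into max_sectors follows the guidance above and is an assumption; 64 was the value that worked in comment #6):

# read the host block-layer segment limit for the pass-through LUN
dev=sdb
max_seg=$(cat /sys/block/$dev/queue/max_segments)
echo "max_segments for /dev/$dev: $max_seg"
# then cap the controller on the qemu command line, e.g.:
#   -device virtio-scsi-pci,id=virtio_scsi_pci0,...,max_sectors=$max_seg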

> 
> 2. There is no issue in a RHEL guest; why is there an issue in a Windows guest?

Recent Windows platforms are capable of handling quite large disk I/O
DMA transfers (up to 512 entries in the SG list). That gives us a good
performance improvement for disk transfers of 256K and larger blocks. That
was the main reason to enlarge the maximum transfer size and the number
of physical breaks in the vioscsi driver.
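
(For a rough sense of scale, assuming one 4 KiB page per SG-list entry, which is an assumption rather than something stated in this bug: 512 entries allow a single request of up to 512 * 4 KiB = 2 MiB, while a LUN whose queue/max_segments is 64 accepts at most 64 * 4 KiB = 256 KiB per request, so larger requests built by the guest driver can fail at the host block layer.)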

> 
> 3. If we have to add this option, do we need documentation?
Yes. It needs to be documented.

Comment 9 Vadim Rozenfeld 2022-01-20 11:25:59 UTC
(In reply to Peixiu Hou from comment #7)
> (In reply to Vadim Rozenfeld from comment #5)
> > This issue might be caused by the following commit 
> > https://github.com/virtio-win/kvm-guest-drivers-windows/commit/
> > c62a8a2c7bf73541fcfd6bc43bcf85147abf2620#diff-
> > 7878e1a52e8fb5a4d75b959b370267e82c07452c4dea6410c6cf32d68986207e
> > and should be fixed with https://bugzilla.redhat.com/show_bug.cgi?id=2022656
> > 
> > Peixiu, can you please check if adding ",max_sectors=64" parameter solves
> > the problem?
> > 
> For the virtio-scsi part, thanks to qing.wang's testing.
> For virtio-blk, the max_sectors parameter does not seem to be supported; it
> hits the following error:
> 2022-01-20T09:37:22.133360Z qemu-kvm: -device
> virtio-blk-pci,bus=pci.0,addr=0x4,max_sectors=64,drive=libvirt-2-format,
> id=virtio-disk0,bootindex=1,write-cache=on: Property
> 'virtio-blk-pci.max_sectors' not found
> 
> Also for virtio-blk, it works with a 10G volume disk but not with a 100G
> volume disk; it seems to be a different issue. Do we need to file another
> bz to track it? Thanks~

Yes, please.
Technically, both viostor and vioscsi drivers have a registry parameter to limit the
maximum transfer length, but IIRC the default value is 256 PhysicalBreaks, which
can be too much in some cases of direct LUN configuration.  
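
As an illustration of that registry knob (the exact key path and value name below follow the usual virtio-win driver layout and are an assumption, not taken from this bug; 64 mirrors the max_sectors value that worked in comment #6):

rem run in an elevated cmd prompt inside the Windows guest, then reboot the guest
reg add "HKLM\SYSTEM\CurrentControlSet\Services\vioscsi\Parameters\Device" /v PhysicalBreaks /t REG_DWORD /d 64 /f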


> 
> > Thanks,
> > Vadim.

Comment 10 qing.wang 2022-01-21 08:54:10 UTC
After investigating, this bug should be the same as https://bugzilla.redhat.com/show_bug.cgi?id=2022656
But the virtio-blk issue looks different.

-blockdev '{"driver":"host_device","filename":"/dev/sdf","aio":"native","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-1-format,id=virtio-disk1,write-cache=on,serial=e520a974-4bbf-4f0c-ace3-475d44be83bb \

phou, could you please help open a bug to track it? Thanks.

Comment 11 qing.wang 2022-01-21 08:54:51 UTC

*** This bug has been marked as a duplicate of bug 2022656 ***

Comment 12 Peixiu Hou 2022-01-21 11:33:10 UTC
(In reply to qing.wang from comment #10)
> After investigating, this bug should be the same as
> https://bugzilla.redhat.com/show_bug.cgi?id=2022656
> But the virtio-blk issue looks different.
> 
> -blockdev
> '{"driver":"host_device","filename":"/dev/sdf","aio":"native","node-name":
> "libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-
> only":true,"discard":"unmap"}' \
> -blockdev
> '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,
> "no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
> -device
> virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-1-format,id=virtio-disk1,
> write-cache=on,serial=e520a974-4bbf-4f0c-ace3-475d44be83bb \
> 
> phou, could you please help open a bug to track it? Thanks.

Hi qing.wang, I filed a new bug: https://bugzilla.redhat.com/show_bug.cgi?id=2043493, and will add a new test case based on the new bz. Thanks a lot~

