
Bug 1538546

Summary: Disks are offline after reboot in windows guest
Product: Red Hat Enterprise Linux 7
Component: qemu-kvm-rhev
Version: 7.5
Status: CLOSED NOTABUG
Severity: medium
Priority: medium
Reporter: Xueqiang Wei <xuwei>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: Xueqiang Wei <xuwei>
CC: ailan, aliang, chayang, coli, juzhang, knoel, lijin, michen, ngu, phou, virt-maint, xuwei
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2018-02-11 13:33:38 UTC
Type: Bug

Attachments:
  cmd file
  detail log

Description Xueqiang Wei 2018-01-25 09:26:14 UTC
Created attachment 1385977 [details]
cmd file

Description of problem:

Pass through 9 disks to a Windows guest and set them online. After a reboot, 8 of them are offline again.



Version-Release number of selected component (if applicable):

Host:
kernel-3.10.0-836.el7.x86_64
qemu-kvm-rhev-2.10.0-18.el7
Guest:
windows2016
virtio-win-1.9.4-2.el7.iso


How reproducible:
100%


Steps to Reproduce:
1. Create 9 disks on the host
  # modprobe -r scsi_debug
  # modprobe sg
  # modprobe scsi_debug add_host=9 dev_size_mb=40
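
Step 1 can be sanity-checked before starting the guest: each add_host should contribute one scsi_debug disk. A minimal sketch of the check is below; the lsscsi lines are inlined sample data in the format of comment 8, so only the filter itself is demonstrated. On the real host you would pipe live `lsscsi` output instead and expect a count of 9.

```shell
# Hypothetical sanity check for step 1: count the scsi_debug disks.
# On the real host:  lsscsi | grep -c scsi_debug   (expect 9)
# Sample listing inlined here to show the filter:
cat <<'EOF' > /tmp/lsscsi.txt
[93:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdl
[94:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdm
[13:0:0:0]   disk    IBM      2145             0000  /dev/sdb
EOF
grep -c scsi_debug /tmp/lsscsi.txt   # prints 2 for this sample
```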

2. Pass the 9 disks through to the guest
  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine q35  \
    -nodefaults  \
    -vga std \
    -device i82801b11-bridge,id=dmi2pci_bridge,bus=pcie.0,addr=0x2 \
    -device pci-bridge,id=pci_bridge,bus=dmi2pci_bridge,addr=0x1,chassis_nr=1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_mRtpTw/monitor-qmpmonitor1-20180105-231825-lrq9UVzQ,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_mRtpTw/monitor-catch_monitor-20180105-231825-lrq9UVzQ,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idymSH2J  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/avocado_mRtpTw/serial-serial0-20180105-231825-lrq9UVzQ,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20180105-231825-lrq9UVzQ,path=/var/tmp/avocado_mRtpTw/seabios-20180105-231825-lrq9UVzQ,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20180105-231825-lrq9UVzQ,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pcie.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pcie.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pcie.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-3,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2016-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0 \
    -drive id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdl \
    -device scsi-block,id=stg0,drive=drive_stg0 \
    -drive id=drive_stg1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdm \
    -device scsi-block,id=stg1,drive=drive_stg1 \
    -drive id=drive_stg2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdn \
    -device scsi-block,id=stg2,drive=drive_stg2 \
    -drive id=drive_stg3,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdo \
    -device scsi-block,id=stg3,drive=drive_stg3 \
    -drive id=drive_stg4,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdp \
    -device scsi-block,id=stg4,drive=drive_stg4 \
    -drive id=drive_stg5,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdq \
    -device scsi-block,id=stg5,drive=drive_stg5 \
    -drive id=drive_stg6,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdr \
    -device scsi-block,id=stg6,drive=drive_stg6 \
    -drive id=drive_stg7,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sds \
    -device scsi-block,id=stg7,drive=drive_stg7 \
    -drive id=drive_stg8,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdt \
    -device scsi-block,id=stg8,drive=drive_stg8 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:ee:ef:f0:f1:f2,id=idpLQXuZ,vectors=4,netdev=idcK7vPy,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idcK7vPy,vhost=on \
    -m 2046  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device scsi-cd,id=cd1,drive=drive_cd1 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \

3. Check disk status with diskpart
   
  C:\Users\Administrator\Desktop>diskpart

  Microsoft DiskPart version 10.0.14393.0

  Copyright (C) 1999-2013 Microsoft Corporation.
  On computer: WIN-LDLME22LBAI

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          30 GB    0 B
  Disk 1    Offline         40 MB   40 MB
  Disk 2    Offline         40 MB   40 MB
  Disk 3    Offline         40 MB   40 MB
  Disk 4    Offline         40 MB   40 MB
  Disk 5    Offline         40 MB   40 MB
  Disk 6    Offline         40 MB   40 MB
  Disk 7    Offline         40 MB   40 MB
  Disk 8    Offline         40 MB   40 MB
  Disk 9    Offline         40 MB   40 MB

DISKPART> san

SAN Policy  : Offline Shared


4. Clear the read-only attribute, bring the disks online, then format them

The commands are saved in "format-disk-cmd.txt"; see the attachment.

C:\Users\Administrator\Desktop>diskpart /s format-disk-cmd.txt

cmd:
DISKPART> san policy=OnlineAll
DISKPART> list disk    
DISKPART> select disk 1    
DISKPART> attributes disk clear readonly    
DISKPART> select disk 1  
DISKPART> online disk    
...... 
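
The attached script is not reproduced here, but a DiskPart script of roughly the following shape would perform step 4 for one disk (hypothetical sketch; the actual attachment may differ, and the select/clear/online/format block is repeated for disks 1 through 9):

```
rem Hypothetical sketch of a format-disk script (actual attachment may differ).
rem Repeat the block below for each of disks 1..9.
san policy=OnlineAll
select disk 1
attributes disk clear readonly
online disk
rem Create a partition and format it so the disk gets a signature.
create partition primary
format fs=ntfs quick
```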

5. Reboot the guest and check disk status


Actual results:

After step 5, only disk 1 is online; see the attachment for details.

DISKPART> san

SAN Policy  : Online All

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Online           40 MB      0 B
  Disk 2    Offline          40 MB      0 B
  Disk 3    Offline          40 MB      0 B
  Disk 4    Offline          40 MB      0 B
  Disk 5    Offline          40 MB      0 B
  Disk 6    Offline          40 MB      0 B
  Disk 7    Offline          40 MB      0 B
  Disk 8    Offline          40 MB      0 B
  Disk 9    Offline          40 MB      0 B

Expected results:
All 9 passed-through disks stay online after the reboot.


Additional info:

Comment 2 Xueqiang Wei 2018-01-25 09:28:05 UTC
Created attachment 1385978 [details]
detail log

Comment 3 Xueqiang Wei 2018-01-25 10:21:48 UTC
When formatting the 9 disks one by one, "online disk 2" fails with "DiskPart has encountered an error: Incorrect function."

Bug 1538559 - Online disk: DiskPart has encountered an error: Incorrect function.

Comment 4 Xueqiang Wei 2018-01-25 10:26:00 UTC
Passing through a generic SCSI device (scsi-generic) also hits this issue.

e.g.
-drive id=drive_stg0,if=none,snapshot=off,aio=threads,cache=writethrough,format=raw,file=/dev/sg3 \
    -device scsi-generic,id=stg0,drive=drive_stg0 \

Comment 5 Xueqiang Wei 2018-01-25 11:30:42 UTC
Tested with an iSCSI backend (an iSCSI LUN defined on another host and connected from this host); did not hit this issue, but hit [1].

[1] Bug 1538559 - Online disk: DiskPart has encountered an error: Incorrect function.


Details:
Disks 1 through 4 were set online; when setting disk 5 online, hit [1].
After rebooting the guest, disks 1 through 4 are still online.

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Online         2048 MB  1984 KB
  Disk 2    Online         2048 MB  1984 KB
  Disk 3    Online         2048 MB  2046 MB
  Disk 4    Online         2048 MB  1984 KB
* Disk 5    Offline        2048 MB  1984 KB
  Disk 6    Offline        2048 MB  1984 KB
  Disk 7    Offline        2048 MB  2046 MB
  Disk 8    Offline        2048 MB  1984 KB

DISKPART> exit

Leaving DiskPart...

C:\Users\Administrator\Desktop>shutdown /r /t 5


C:\Users\Administrator>diskpart

Microsoft DiskPart version 10.0.14393.0

Copyright (C) 1999-2013 Microsoft Corporation.
On computer: WIN-ETQT2A6CA5V

DISKPART> san

SAN Policy  : Online All

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Online         2048 MB  1984 KB
  Disk 2    Online         2048 MB  1984 KB
  Disk 3    Online         2048 MB  2046 MB
  Disk 4    Online         2048 MB  1984 KB
  Disk 5    Offline        2048 MB  1984 KB
  Disk 6    Offline        2048 MB  1984 KB
  Disk 7    Offline        2048 MB  2046 MB
  Disk 8    Offline        2048 MB  1984 KB

DISKPART>

Comment 6 Xueqiang Wei 2018-01-25 12:04:36 UTC
Tested on RHEL 7.4.z; also hit this issue.

Host:
kernel-3.10.0-693.14.1.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.12

Guest:
windows2016

Comment 7 Peixiu Hou 2018-01-25 12:10:14 UTC
Hi all,

Tested with virtio-win-1.9.3-1.el7.noarch.rpm (the RHEL 7.4 released version) using scsi_debug disks, following the steps in comment#0; the issue reproduces, so this is not a virtio-win regression.

Also tested with LIO-ORG disks; the issue does not reproduce there (tried with both virtio-win-1.9.3-1.el7 and virtio-win-1.9.4-2.el7).

The disk info is as follows:
[root@dhcp-8-241 virtio-win]# lsscsi
[0:0:0:0]    disk    ATA      ST500DM002-1BD14 KC45  /dev/sda 
[1:0:0:0]    cd/dvd  TSSTcorp DVD+-RW SH-216BB D100  /dev/sr0 
[4:0:0:0]    disk    IBM      2145             0000  /dev/sdb 
[4:0:0:1]    disk    IBM      2145             0000  /dev/sdc 
[4:0:0:2]    disk    IBM      2145             0000  /dev/sdd 
[4:0:0:3]    disk    IBM      2145             0000  /dev/sde 
[4:0:0:4]    disk    IBM      2145             0000  /dev/sdf 
[23:0:0:0]   disk    LIO-ORG  disk0            4.0   /dev/sdg 
[23:0:0:1]   disk    LIO-ORG  disk1            4.0   /dev/sdh 
[23:0:0:2]   disk    LIO-ORG  disk2            4.0   /dev/sdi 
[23:0:0:3]   disk    LIO-ORG  disk3            4.0   /dev/sdj 
[23:0:0:4]   disk    LIO-ORG  disk4            4.0   /dev/sdk 
[23:0:0:5]   disk    LIO-ORG  disk5            4.0   /dev/sdl 
[23:0:0:6]   disk    LIO-ORG  disk6            4.0   /dev/sdm 
[23:0:0:7]   disk    LIO-ORG  disk7            4.0   /dev/sdn 
[23:0:0:8]   disk    LIO-ORG  disk8            4.0   /dev/sdo 
[23:0:0:9]   disk    LIO-ORG  disk9            4.0   /dev/sdp

Other used versions:
kernel-3.10.0-836.el7.x86_64
qemu-kvm-rhev-2.10.0-18.el7


Best Regards~
Peixiu

Comment 8 Xueqiang Wei 2018-01-26 03:10:30 UTC
The disk info on the host is as follows:

# lsscsi
[0:0:0:0]    disk    ATA      ST500NM0011      BB46  -        
[0:1:0:0]    disk    LSI      Logical Volume   3000  /dev/sda 
[5:0:0:0]    cd/dvd  IBM SATA DEVICE 81Y3666   IB00  /dev/sr0 
[13:0:0:0]   disk    IBM      2145             0000  /dev/sdb 
[13:0:0:1]   disk    IBM      2145             0000  /dev/sdc 
[13:0:0:2]   disk    IBM      2145             0000  /dev/sdd 
[13:0:0:3]   disk    IBM      2145             0000  /dev/sde 
[13:0:0:4]   disk    IBM      2145             0000  /dev/sdf 
[14:0:0:0]   disk    IBM      2145             0000  /dev/sdg 
[14:0:0:1]   disk    IBM      2145             0000  /dev/sdh 
[14:0:0:2]   disk    IBM      2145             0000  /dev/sdi 
[14:0:0:3]   disk    IBM      2145             0000  /dev/sdj 
[14:0:0:4]   disk    IBM      2145             0000  /dev/sdk 
[93:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdl 
[94:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdm 
[95:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdn 
[96:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdo 
[97:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdp 
[98:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdq 
[99:0:0:0]   disk    Linux    scsi_debug       0004  /dev/sdr 
[100:0:0:0]  disk    Linux    scsi_debug       0004  /dev/sds 
[101:0:0:0]  disk    Linux    scsi_debug       0004  /dev/sdt

Comment 9 Amnon Ilan 2018-01-27 15:36:49 UTC
Can it be related to bug#1528502?

Comment 10 Vadim Rozenfeld 2018-01-27 21:10:07 UTC
(In reply to Amnon Ilan from comment #9)
> Can it be related to bug#1528502

I don't think so. I'm trying to check if the same device id data exposed multiple 
times by different disks. If so, it might be the root of the problem.

Comment 11 Vadim Rozenfeld 2018-01-30 00:37:55 UTC
Any particular reason for choosing scsi_debug as a backend for testing the "-device scsi-block" configuration? I'm afraid this configuration will not work properly: scsi-block has no serial property because it uses the physical drive's serial number, but the serial number reported by scsi_debug does not seem to be unique across the disks. Can we try using a LIO target instead of scsi_debug? That configuration works much better than scsi_debug.

> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................................. [Storage Objects: 9]
  | | o- lun1 ....................................................... [/home/vrozenfe/luns/lun1.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun2 ....................................................... [/home/vrozenfe/luns/lun2.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun3 ....................................................... [/home/vrozenfe/luns/lun3.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun4 ....................................................... [/home/vrozenfe/luns/lun4.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun5 ....................................................... [/home/vrozenfe/luns/lun5.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun6 ....................................................... [/home/vrozenfe/luns/lun6.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun7 ....................................................... [/home/vrozenfe/luns/lun7.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun8 ....................................................... [/home/vrozenfe/luns/lun8.img (256.0MiB) write-back activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- lun9 ....................................................... [/home/vrozenfe/luns/lun9.img (256.0MiB) write-back activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 0]
  o- loopback ......................................................................................................... [Targets: 9]
  | o- naa.5001405138423b64 ................................................................................. [naa.5001405abfda579f]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun5 (/home/vrozenfe/luns/lun5.img) (default_tg_pt_gp)]
  | o- naa.50014051a4569d92 ................................................................................. [naa.500140574b9d5066]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun4 (/home/vrozenfe/luns/lun4.img) (default_tg_pt_gp)]
  | o- naa.50014052154542cb ................................................................................. [naa.50014050a0a836f5]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun3 (/home/vrozenfe/luns/lun3.img) (default_tg_pt_gp)]
  | o- naa.500140562ebf2e01 ................................................................................. [naa.5001405bcacd5595]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun9 (/home/vrozenfe/luns/lun9.img) (default_tg_pt_gp)]
  | o- naa.5001405956dc5016 ................................................................................. [naa.50014053c55e1891]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun2 (/home/vrozenfe/luns/lun2.img) (default_tg_pt_gp)]
  | o- naa.5001405a65008717 ................................................................................. [naa.500140597daed71e]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun8 (/home/vrozenfe/luns/lun8.img) (default_tg_pt_gp)]
  | o- naa.5001405cac5a1059 ................................................................................. [naa.50014056fd553b85]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun7 (/home/vrozenfe/luns/lun7.img) (default_tg_pt_gp)]
  | o- naa.5001405f169ee60f ................................................................................. [naa.5001405217f01c32]
  | | o- luns ............................................................................................................ [LUNs: 1]
  | |   o- lun0 .................................................... [fileio/lun1 (/home/vrozenfe/luns/lun1.img) (default_tg_pt_gp)]
  | o- naa.5001405fcf56cb7e ................................................................................. [naa.500140597aa002de]
  |   o- luns ............................................................................................................ [LUNs: 1]
  |     o- lun0 .................................................... [fileio/lun6 (/home/vrozenfe/luns/lun6.img) (default_tg_pt_gp)]
  o- vhost ............................................................................................................ [Targets: 0]



[vrozenfe@jack vms]$ lsscsi
[0:0:0:0]    disk    ATA      WDC WD5000LPLX-7 1A01  /dev/sda 
[1:0:0:0]    disk    ATA      SK hynix SH920 2 CL00  /dev/sdb 
[4:0:1:0]    disk    LIO-ORG  lun1             4.0   /dev/sdc 
[5:0:1:0]    disk    LIO-ORG  lun2             4.0   /dev/sdd 
[6:0:1:0]    disk    LIO-ORG  lun3             4.0   /dev/sde 
[7:0:1:0]    disk    LIO-ORG  lun4             4.0   /dev/sdf 
[8:0:1:0]    disk    LIO-ORG  lun5             4.0   /dev/sdg 
[9:0:1:0]    disk    LIO-ORG  lun6             4.0   /dev/sdh 
[10:0:1:0]   disk    LIO-ORG  lun7             4.0   /dev/sdi 
[11:0:1:0]   disk    LIO-ORG  lun8             4.0   /dev/sdj 
[12:0:1:0]   disk    LIO-ORG  lun9             4.0   /dev/sdk 



Thanks,
Vadim.
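
Comment 11's hypothesis (non-unique serial numbers across the scsi_debug disks) can be checked on the host by comparing the serials that lsblk reports for /dev/sd[l-t]. The pipeline is sketched below on hypothetical sample data (the real scsi_debug serials may differ); on the host you would feed `lsblk -dno NAME,SERIAL /dev/sd[l-t]` into the same `sort | uniq -d` stage.

```shell
# Sketch: find serial numbers shared by more than one disk.
# Sample NAME SERIAL pairs (hypothetical); on the real host replace the
# file with the live output of:  lsblk -dno NAME,SERIAL /dev/sd[l-t]
printf '%s\n' \
  'sdl 5333' \
  'sdm 5333' \
  'sdn 5333' \
  'sdb 600507680c9' > /tmp/serials.txt
# uniq -d prints only duplicated serials: any output means at least two
# disks expose the same serial, which scsi-block then passes verbatim
# through to the Windows guest.
awk '{print $2}' /tmp/serials.txt | sort | uniq -d   # prints 5333 here
```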

Comment 12 Xueqiang Wei 2018-01-30 03:28:02 UTC
(In reply to Vadim Rozenfeld from comment #11)
> Any particular reason for choosing scsi_debug as a backend for testing
> "-device scsi-block" configuration? I'm afraid that this configuration will
> not work properly. scsi-block has no serial property because it uses the
> physical drive serial number. But serial number reported by scsi_debug seems
> to be not unique across the disks. Can we try using LIO Target instead of
> scsi_debug? This configuration works way better than scsi_debug.
> 
> > ls
> o- /
> .............................................................................
> ............................................ [...]
>   o- backstores
> .............................................................................
> ................................. [...]
>   | o- block
> .............................................................................
> ..................... [Storage Objects: 0]
>   | o- fileio
> .............................................................................
> .................... [Storage Objects: 9]
>   | | o- lun1 .......................................................
> [/home/vrozenfe/luns/lun1.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun2 .......................................................
> [/home/vrozenfe/luns/lun2.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun3 .......................................................
> [/home/vrozenfe/luns/lun3.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun4 .......................................................
> [/home/vrozenfe/luns/lun4.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun5 .......................................................
> [/home/vrozenfe/luns/lun5.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun6 .......................................................
> [/home/vrozenfe/luns/lun6.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun7 .......................................................
> [/home/vrozenfe/luns/lun7.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun8 .......................................................
> [/home/vrozenfe/luns/lun8.img (256.0MiB) write-back activated]
>   | | | o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | | |   o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | | o- lun9 .......................................................
> [/home/vrozenfe/luns/lun9.img (256.0MiB) write-back activated]
>   | |   o- alua
> .............................................................................
> ...................... [ALUA Groups: 1]
>   | |     o- default_tg_pt_gp
> .......................................................................
> [ALUA state: Active/optimized]
>   | o- pscsi
> .............................................................................
> ..................... [Storage Objects: 0]
>   | o- ramdisk
> .............................................................................
> ................... [Storage Objects: 0]
>   o- iscsi
> .............................................................................
> ............................... [Targets: 0]
>   o- loopback
> ............................................................ [Targets: 9]
>   | o- naa.5001405138423b64 ........................ [naa.5001405abfda579f]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun5 (/home/vrozenfe/luns/lun5.img) (default_tg_pt_gp)]
>   | o- naa.50014051a4569d92 ........................ [naa.500140574b9d5066]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun4 (/home/vrozenfe/luns/lun4.img) (default_tg_pt_gp)]
>   | o- naa.50014052154542cb ........................ [naa.50014050a0a836f5]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun3 (/home/vrozenfe/luns/lun3.img) (default_tg_pt_gp)]
>   | o- naa.500140562ebf2e01 ........................ [naa.5001405bcacd5595]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun9 (/home/vrozenfe/luns/lun9.img) (default_tg_pt_gp)]
>   | o- naa.5001405956dc5016 ........................ [naa.50014053c55e1891]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun2 (/home/vrozenfe/luns/lun2.img) (default_tg_pt_gp)]
>   | o- naa.5001405a65008717 ........................ [naa.500140597daed71e]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun8 (/home/vrozenfe/luns/lun8.img) (default_tg_pt_gp)]
>   | o- naa.5001405cac5a1059 ........................ [naa.50014056fd553b85]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun7 (/home/vrozenfe/luns/lun7.img) (default_tg_pt_gp)]
>   | o- naa.5001405f169ee60f ........................ [naa.5001405217f01c32]
>   | | o- luns ................................................ [LUNs: 1]
>   | |   o- lun0 ..... [fileio/lun1 (/home/vrozenfe/luns/lun1.img) (default_tg_pt_gp)]
>   | o- naa.5001405fcf56cb7e ........................ [naa.500140597aa002de]
>   |   o- luns ................................................ [LUNs: 1]
>   |     o- lun0 ..... [fileio/lun6 (/home/vrozenfe/luns/lun6.img) (default_tg_pt_gp)]
>   o- vhost ................................................. [Targets: 0]
> 
> 
> 
> [vrozenfe@jack vms]$ lsscsi
> [0:0:0:0]    disk    ATA      WDC WD5000LPLX-7 1A01  /dev/sda 
> [1:0:0:0]    disk    ATA      SK hynix SH920 2 CL00  /dev/sdb 
> [4:0:1:0]    disk    LIO-ORG  lun1             4.0   /dev/sdc 
> [5:0:1:0]    disk    LIO-ORG  lun2             4.0   /dev/sdd 
> [6:0:1:0]    disk    LIO-ORG  lun3             4.0   /dev/sde 
> [7:0:1:0]    disk    LIO-ORG  lun4             4.0   /dev/sdf 
> [8:0:1:0]    disk    LIO-ORG  lun5             4.0   /dev/sdg 
> [9:0:1:0]    disk    LIO-ORG  lun6             4.0   /dev/sdh 
> [10:0:1:0]   disk    LIO-ORG  lun7             4.0   /dev/sdi 
> [11:0:1:0]   disk    LIO-ORG  lun8             4.0   /dev/sdj 
> [12:0:1:0]   disk    LIO-ORG  lun9             4.0   /dev/sdk 
> 
> 
> 
> Thanks,
> Vadim.


(1) There is no particular reason for choosing scsi_debug as the backend for testing. I hit this issue via an automated test case: the multi-disk test cases in Avocado use scsi_debug as the backend, since some cases need to create hundreds of disks.

(2) When tested with LIO-ORG disks, I did not hit this issue.

Comment 13 Xueqiang Wei 2018-01-30 05:47:15 UTC
Tested with scsi-disk devices with serial numbers set (scsi_debug backend); also hit this issue.


CMD:

/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine q35  \
    -nodefaults  \
    -vga std \
    -device i82801b11-bridge,id=dmi2pci_bridge,bus=pcie.0,addr=0x2 \
    -device pci-bridge,id=pci_bridge,bus=dmi2pci_bridge,addr=0x1,chassis_nr=1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_mRtpTw/monitor-qmpmonitor1-20180105-231825-lrq9UVzQ,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_mRtpTw/monitor-catch_monitor-20180105-231825-lrq9UVzQ,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idymSH2J  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/avocado_mRtpTw/serial-serial0-20180105-231825-lrq9UVzQ,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20180105-231825-lrq9UVzQ,path=/var/tmp/avocado_mRtpTw/seabios-20180105-231825-lrq9UVzQ,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20180105-231825-lrq9UVzQ,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pcie.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pcie.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pcie.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-3,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2016-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0 \
    -drive id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdl \
    -device scsi-disk,id=stg0,drive=drive_stg0,serial=disk0 \
    -drive id=drive_stg1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdm \
    -device scsi-disk,id=stg1,drive=drive_stg1,serial=disk1 \
    -drive id=drive_stg2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdn \
    -device scsi-disk,id=stg2,drive=drive_stg2,serial=disk2 \
    -drive id=drive_stg3,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdo \
    -device scsi-disk,id=stg3,drive=drive_stg3,serial=disk3 \
    -drive id=drive_stg4,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdp \
    -device scsi-disk,id=stg4,drive=drive_stg4,serial=disk4 \
    -drive id=drive_stg5,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdq \
    -device scsi-disk,id=stg5,drive=drive_stg5,serial=disk5 \
    -drive id=drive_stg6,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdr \
    -device scsi-disk,id=stg6,drive=drive_stg6,serial=disk6 \
    -drive id=drive_stg7,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sds \
    -device scsi-disk,id=stg7,drive=drive_stg7,serial=disk7 \
    -drive id=drive_stg8,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/dev/sdt \
    -device scsi-disk,id=stg8,drive=drive_stg8,serial=disk8 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:ee:ef:f0:f1:f2,id=idpLQXuZ,vectors=4,netdev=idcK7vPy,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idcK7vPy,vhost=on \
    -m 2046  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device scsi-cd,id=cd1,drive=drive_cd1 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
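
The nine repetitive -drive/-device pairs above (stg0 on /dev/sdl through stg8 on /dev/sdt) can also be generated rather than written by hand. A minimal Python sketch, assuming the same device naming as in the command line above (the helper name is hypothetical, not part of the Avocado framework):

```python
import string

def data_disk_args(count=9, first_dev="sdl"):
    """Generate the -drive/-device qemu-kvm argument pairs for the
    scsi_debug-backed data disks, mirroring the command line above."""
    letters = string.ascii_lowercase
    start = letters.index(first_dev[-1])  # 'l' -> /dev/sdl is the first disk
    args = []
    for i in range(count):
        dev = "/dev/sd" + letters[start + i]
        args.append("-drive id=drive_stg%d,if=none,snapshot=off,aio=threads,"
                    "cache=none,format=raw,file=%s" % (i, dev))
        args.append("-device scsi-disk,id=stg%d,drive=drive_stg%d,serial=disk%d"
                    % (i, i, i))
    return args
```

Joining the returned list with " \\\n" reproduces the storage section of the command line verbatim.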


After reboot, only Disk 1 of the data disks is online; Disks 2-9 are offline.

C:\Users\Administrator>diskpart

Microsoft DiskPart version 10.0.14393.0

Copyright (C) 1999-2013 Microsoft Corporation.
On computer: WIN-ETQT2A6CA5V

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Online           40 MB      0 B
  Disk 2    Offline          40 MB      0 B
  Disk 3    Offline          40 MB      0 B
  Disk 4    Offline          40 MB      0 B
  Disk 5    Offline          40 MB      0 B
  Disk 6    Offline          40 MB      0 B
  Disk 7    Offline          40 MB      0 B
  Disk 8    Offline          40 MB      0 B
  Disk 9    Offline          40 MB      0 B

DISKPART>

C:\Users\Administrator\Desktop>hddsn.exe M:

            ***** STORAGE DEVICE DESCRIPTOR DATA *****
              Version: 00000028
            TotalSize: 0000018c
           DeviceType: 00000000
   DeviceTypeModifier: 00000000
       RemovableMedia: False
      CommandQueueing: True
            Vendor Id: QEMU
           Product Id: QEMU HARDDISK
     Product Revision: 2.5+
        Serial Number: disk0
             Bus Type: Not Defined
       Raw Properties: None
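
The output above matches the fields of the Windows STORAGE_DEVICE_DESCRIPTOR structure returned by IOCTL_STORAGE_QUERY_PROPERTY (presumably what hddsn.exe issues). Parsing the structure's fixed header can be sketched as follows; this is a hypothetical parser written from the public struct layout, not the actual tool's code:

```python
import struct

def parse_device_descriptor(buf):
    """Parse the fixed 36-byte header of STORAGE_DEVICE_DESCRIPTOR and
    pull out the vendor, product, and serial-number strings.
    String offsets are relative to the buffer start; 0 means absent."""
    (version, total_size, dev_type, dev_type_mod, removable, queueing,
     vendor_off, product_off, revision_off, serial_off, bus_type,
     _raw_len) = struct.unpack_from("<IIBBBBIIIIII", buf, 0)

    def cstr(off):
        # Strings in the buffer are NUL-terminated ASCII.
        if off == 0:
            return None
        end = buf.index(b"\x00", off)
        return buf[off:end].decode("ascii", "replace").strip()

    return {"vendor": cstr(vendor_off),
            "product": cstr(product_off),
            "serial": cstr(serial_off)}
```

On a real system the buffer would come from DeviceIoControl() with StorageDeviceProperty; the serial field is what diskpart and the mount manager see.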

Comment 14 Vadim Rozenfeld 2018-02-01 02:49:17 UTC
But the disk signature (offset 0x01B8) in the MBR (https://technet.microsoft.com/en-us/library/cc976786.aspx?f=255&MSPPError=-2147217396) for different disks created by scsi_debug still seems to be the same (81 d4 56 26 in my case):

[vrozenfe@jack vms]$ sudo dd if=/dev/sdc bs=512 count=1 | hexdump -C
1+0 records in
1+0 records out
00000000  33 c0 8e d0 bc 00 7c 8e  c0 8e d8 be 00 7c bf 00  |3.....|......|..|

000001b0  65 6d 00 00 00 63 7b 9a  81 d4 56 26 86 0f 00 00  |em...c{...V&....|
000001c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|

[vrozenfe@jack vms]$ sudo dd if=/dev/sdd bs=512 count=1 | hexdump -C
1+0 records in
1+0 records out
00000000  33 c0 8e d0 bc 00 7c 8e  c0 8e d8 be 00 7c bf 00  |3.....|......|..|

000001b0  65 6d 00 00 00 63 7b 9a  81 d4 56 26 86 0f 00 00  |em...c{...V&....|
000001c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|



Looks like scsi_debug shares the same MBR between different disks, which makes it impossible for Windows to distinguish between them (Windows needs unique disk signatures to maintain the mount table in its BCD database). So, after a restart, Windows reads the same disk id from every scsi_debug-attached disk, brings the first one online, and reports a signature collision for the rest. It is still possible to bring the disks online by clearing the read-only attribute and assigning a unique id to each and every disk (this can be done from diskpart), but this magic disappears on the following reboot.
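
The collision can be checked from the host by comparing the 4-byte disk signature at MBR offset 0x1B8 across the backing devices, as the two hexdumps above do by eye. A minimal sketch, assuming read access to the raw devices or image files (helper names are hypothetical):

```python
import struct

MBR_SIG_OFFSET = 0x1B8  # 4-byte disk signature, just before the partition table

def disk_signature(path):
    """Read the 32-bit little-endian MBR disk signature of a disk/image."""
    with open(path, "rb") as f:
        f.seek(MBR_SIG_OFFSET)
        return struct.unpack("<I", f.read(4))[0]

def signatures_collide(paths):
    """True if any two disks share the same MBR signature."""
    sigs = [disk_signature(p) for p in paths]
    return len(set(sigs)) < len(sigs)
```

For the dumps above, the bytes 81 d4 56 26 decode to signature 0x2656D481 on both /dev/sdc and /dev/sdd, i.e. a collision.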

Best,
Vadim.
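
The temporary in-guest workaround Vadim describes (clear the read-only attribute, assign a unique signature, bring the disk online) can be scripted. A sketch that emits a diskpart script for data disks 2-9 as in the listing above; the `uniqueid disk id=` values here are arbitrary examples, and the whole thing is undone by the shared MBR on the next reboot:

```python
def diskpart_workaround_script(disks=range(2, 10)):
    """Emit diskpart commands that clear the read-only flag, assign a
    unique MBR signature, and bring each offline disk online.
    Run in the guest with: diskpart /s script.txt"""
    lines = []
    for n in disks:
        lines += [
            "select disk %d" % n,
            "attributes disk clear readonly",
            "uniqueid disk id=%08X" % (0x10000000 + n),  # example signatures
            "online disk",
        ]
    return "\n".join(lines)
```

This only masks the symptom; the underlying issue is that scsi_debug presents identical MBR contents on every disk.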

Comment 15 Amnon Ilan 2018-02-05 10:41:54 UTC
Based on the comments above, this seems like a problem in the test setup 
rather than a real bug.
Can we close it as not-a-bug?