Bug 1111784 (RHEV_SCSI_reserve_Win_SharedDisk)

Summary: [RFE] Provide SCSI reservation support for virtio-scsi via rhev-guest-tools for win-8 and win-2012 guests using Shared disks [blocked on platform bug 1452210 ]
Product: Red Hat Enterprise Virtualization Manager Reporter: Anand Nande <anande>
Component: vdsmAssignee: Michal Skrivanek <michal.skrivanek>
Status: CLOSED ERRATA QA Contact: Petr Matyáš <pmatyas>
Severity: high Docs Contact:
Priority: high    
Version: unspecifiedCC: ailan, avettath, bmarzins, boruvka.michal, buettner, coli, coughlan, cshao, dyuan, ebenahar, ekin.meroglu, emarcus, fdelorey, gchakkar, han.pilmeyer, jferlan, jsuchane, kgoldbla, knoel, lijin, lpeer, lsurette, lsvaty, mavital, mchristi, michal.skrivanek, michen, mjankula, mkalinin, mprivozn, mtessun, mzhan, pablo.iranzo, pbonzini, phou, pilux, pstehlik, pvilayat, rbarry, redhat, rtamir, scohen, sraje, srevivo, tjelinek, tnisan, usurse, vanhoof, vrozenfe, wyu, xiagao, xuwei, xuzhang, ycui, yisun
Target Milestone: ovirt-4.3.0Keywords: FutureFeature, TestOnly, Tracking
Target Release: ---Flags: sherold: Triaged+
jbelka: testing_plan_complete-
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Enhancement
Doc Text:
The current release supports Windows clustering for directly attached LUNs and shared disks.
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-05-08 12:35:59 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Storage RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1159420, 1195140, 1426544, 1452210, 1464908, 1470007, 1484075, 1519019, 1519021    
Bug Blocks: 769712, 1520566, 1523346    
Attachments:
Description Flags
screenshot - scsi reservation
none
Validate SCSI-3 Persistent Reservation Failed
none
Failover cluster validation report with rhvm plus rhel7.3
none
iscsi trace
none
direct lun trace
none
Failover cluster validation report 2017_2_23
none
vm paused after attached fiberchannel direct lun with SCSI passthrough enable
none
cannot choose the same iscsi target to login to another host
none

Comment 1 Itamar Heim 2014-06-23 08:44:36 UTC
ronen - thoughts/comments?
(since shared disks are just raw disks on a shared storage domain of any type, not sure how this one is expected to work, when the underlying storage could be anything, not just scsi)

Comment 2 Ronen Hod 2014-06-23 09:28:21 UTC
(In reply to Itamar Heim from comment #1)
> ronen - thoughts/comments?
> (since shared disks are just raw disks on a shared storage domain of any
> type, not sure how this one is expected to work, when the underlying storage
> could be anything, not just scsi)

Redirecting the question to the virtio-scsi experts, Paolo and Vadim.
BTW, what versions of RHEV started the support for virtio-scsi on Windows?

Comment 3 Allon Mureinik 2014-06-23 09:51:57 UTC
(In reply to Ronen Hod from comment #2)
> (In reply to Itamar Heim from comment #1)
> BTW, what versions of RHEV started the support for virtio-scsi on Windows?
3.3.0

Comment 4 Vadim Rozenfeld 2014-06-23 10:24:40 UTC
(In reply to Ronen Hod from comment #2)
> (In reply to Itamar Heim from comment #1)
> > ronen - thoughts/comments?
> > (since shared disks are just raw disks on a shared storage domain of any
> > type, not sure how this one is expected to work, when the underlying storage
> > could be anything, not just scsi)
> 
> Redirecting the question to the virtio-scsi experts, Paolo and Vadim.
> BTW, what versions of RHEV started the support for virtio-scsi on Windows?

I don't think anything special needs to be done in the virtio-scsi Windows driver code to provide SCSI reservation support. Paolo, please correct me if I'm wrong, but it seems to be a QEMU limitation.

Thanks,
Vadim.

Comment 5 Paolo Bonzini 2014-06-23 12:23:15 UTC
Vadim is correct.

In order to support persistent reservations, disks must be marked as DirectLUN so that QEMU does SCSI passthrough.

Also, you need to enable "unfiltered" SG_IO access.  This is done in Libvirt with sgio='unfiltered', I'm not sure how it is done in RHEV.
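
(For illustration: once a disk is attached this way, you can check on the host what libvirt was actually given -- the domain name here is a placeholder:

  virsh -r dumpxml <vm-name> | grep -E "device='lun'|sgio"

The <disk> element should show device='lun' and sgio='unfiltered'.)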

That said, failover clustering on Windows requires the driver to show up as a SAS, iSCSI or FC driver.  Currently, vioscsi.sys doesn't do that.  So it's possible that Microsoft failover clustering will not work on virtio-scsi even if sg_persist works on Linux.

Comment 6 Pablo Iranzo Gómez 2014-06-26 11:01:01 UTC
Will we then need to file another RFE for vioscsi.sys, or can it be handled as part of this RFE?

Thanks!

Comment 8 Allon Mureinik 2015-02-22 10:10:57 UTC
From RHEV's side, we need a couple of things.

First, as Paolo suggested:
(In reply to Paolo Bonzini from comment #5)
> Vadim is correct.
> In order to support persistent reservations, disks must be marked as
> DirectLUN so that QEMU does SCSI passthrough.
ack.

> Also, you need to enable "unfiltered" SG_IO access.  This is done in Libvirt
> with sgio='unfiltered', I'm not sure how it is done in RHEV.
When adding (or editing) the disk, you need to check the "Allow Privileged SCSI I/O" checkbox.
Alternatively, this can be done via the REST API by adding a "<sgio>unfiltered</sgio>" element to the disk's specification.
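
For illustration, a rough REST sketch (the engine URL, credentials and VM ID are placeholders, and the rest of the disk representation is elided -- see the REST API guide for the exact format):

  curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
       -d '<disk><sgio>unfiltered</sgio>...</disk>' \
       "https://rhevm.example.com/api/vms/VM_ID/disks"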

Additionally, in order to do this properly, we'll need bug 1159420.

And, finally, as Paolo mentioned:
> That said, failover clustering on Windows requires the driver to show up as
> a SAS, iSCSI or FC driver.  Currently, vioscsi.sys doesn't do that.  So it's
> possible that Microsoft failover cluster will not work on virtio-scsi even
> if sg_persist working on Linux.


(In reply to Pablo Iranzo Gómez from comment #6)
> Will then we need to file another RFE for vioscsi.sys ? or can be worked as
> part of this RFE?
Let's keep this RFE as a tracker.
Could you please file the vioscsi.sys RFE? I'm sure you could describe it much better than I could.

Comment 9 Pablo Iranzo Gómez 2015-02-23 08:33:38 UTC
Hi Allon,
I've created bz 1195140 and set this as blocker for that BZ for tracking it.

Regards,
Pablo

Comment 11 Paolo Bonzini 2015-03-05 13:16:21 UTC
The two blocking bugs look good.

Comment 16 Michal Skrivanek 2016-09-07 12:35:21 UTC
No action on the RHEV side.
Implemented in virtio-scsi bug 1219841.
See https://bugzilla.redhat.com/attachment.cgi?id=1168707 for documentation guidelines for MS Windows on how to properly set it up.

Comment 18 Marian Jankular 2016-09-17 09:29:53 UTC
Is there a chance that the fix will be backported to 3.6 as well?

Comment 24 Jiri Belka 2017-01-02 09:25:16 UTC
Please provide verification steps, i.e., at least what the shared-disk direct LUN settings should be (see https://bugzilla.redhat.com/show_bug.cgi?id=1195140#c54 about shared disk settings).

Comment 25 Michal Skrivanek 2017-01-02 10:09:24 UTC
AFAIK it should work for direct LUNs. If you can reproduce it working with the qemu command-line instructions from the linked bugs, but it doesn't work when set up via the RHV UI, please attach both command lines.

Comment 29 Yaniv Kaul 2017-01-03 12:53:09 UTC
Raz, is SCSI reservation (Linux) working?

Comment 32 Jiri Belka 2017-01-06 12:41:16 UTC
Michal, there's something wrong with storage. SCSI reservation works when the direct LUN is propagated to the Windows guest via qemu's libiscsi initiator, but it fails when the direct LUN is attached to the Windows guest via oVirt (i.e., iSCSI initiator on the host -> DM -> block device as qemu's disk).

Somebody with better storage skills is needed.

Comment 33 Yaniv Lavi 2017-01-09 15:55:48 UTC
*** Bug 1111783 has been marked as a duplicate of this bug. ***

Comment 34 Tal Nisan 2017-01-09 16:05:42 UTC
Ala, you were in charge of the iSCSI reservation feature, can you have a look please?

Comment 35 Ala Hino 2017-01-10 21:00:00 UTC
I worked on preventing VMs from migrating if SCSI reservation is checked. Unfortunately, I am not familiar with SCSI reservation internals.

Comment 36 Ala Hino 2017-01-10 21:03:45 UTC
Restoring needinfo from pvilayat

Comment 37 Ala Hino 2017-01-10 21:04:34 UTC
Restoring needinfo from rtamir

Comment 38 Gil Klein 2017-01-11 16:20:50 UTC
Moving back to ASSIGNED based on comment #26

Comment 39 Yaniv Kaul 2017-01-11 16:32:32 UTC
Gil, do we see, with the same settings but with a Linux VM, that SCSI reservation works?

Did we consult with virt QE?

Comment 40 Gil Klein 2017-01-11 17:52:13 UTC
(In reply to Yaniv Kaul from comment #39)
> Gil, do we see, with the same settings, only with Linux VM, that SCSI
> reservation work? 
> 
> Did we consult with virt QE?
I dropped them an email to find the right contact. Will refer them to the BZ as soon as I find them.

Comment 41 yisun 2017-01-12 09:04:03 UTC
Hi Yaniv,
I'm from libvirt QE and working on comment 39. I'm not quite familiar with the SCSI reservation checkpoint. Could you please confirm what needs to be tested in a Linux VM?

Here is what I plan to test:
1. prepare a vm with disk xml as:
 <disk type='block' device='lun' ***sgio='unfiltered'*** snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/36001405dfd9dce356534a7e9d4f65a27'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <shareable/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

2. start the vm
3. check scsi reservation with vm's sdb
For step 3, can you provide the expected checkpoints?
Is something like the following enough?
===
sg_persist --out --register --param-sark=123abc /dev/sdb
&
sg_persist -r /dev/sdb
===

thx

Comment 42 Jiri Belka 2017-01-12 09:36:08 UTC
(In reply to yisun from comment #41)
> Hi Yaniv,
> I'm libvirt qe and working on Comment 39, I'm not quite familiar with the
> scsi reservation checkpoint. Could you pls confirm what needs to be tested
> in a linux vm?
> 
> Here is what I plan to test:
> 1. prepare a vm with disk xml as:
>  <disk type='block' device='lun' ***sgio='unfiltered'*** snapshot='no'>
>       <driver name='qemu' type='raw' cache='none' error_policy='stop'
> io='native'/>
>       <source dev='/dev/mapper/36001405dfd9dce356534a7e9d4f65a27'/>
>       <backingStore/>
>       <target dev='sdb' bus='scsi'/>
>       <shareable/>
>       <alias name='scsi0-0-0-1'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='1'/>
>     </disk>
> 
> 2. start the vm
> 3. check scsi reservation with vm's sdb
> For step 3, can you provide the expected checkpoints?
> Is something like following enough?
> ===
> sg_persist --out --register --param-sark=123abc /dev/sdb
> &
> sg_persist -r /dev/sdb
> ===
> 
> thx

Declaring <shareable/> means you would need to test it with two VMs, both sharing the same LUN. Try to take the SCSI reservation on one VM, read it on the other one, release it on the first one, then acquire the reservation on the second one, read it on the first one, release it on the second VM, reserve on the first one, take over the reservation on the second VM, etc.
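
For example, a minimal sg_persist sequence across the two VMs might look like this (a sketch only; the keys 0x1/0x2 are arbitrary and /dev/sdb is the shared LUN as seen inside each guest):

# VM1: register a key and take the reservation (type 5 = Write Exclusive, Registrants Only)
vm1$ sg_persist --out --register --param-sark=0x1 /dev/sdb
vm1$ sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdb
# VM2: register its own key and read the reservation held by VM1
vm2$ sg_persist --out --register --param-sark=0x2 /dev/sdb
vm2$ sg_persist --in -r /dev/sdb
# VM1 releases, VM2 takes the reservation, VM1 reads it back
vm1$ sg_persist --out --release --param-rk=0x1 --prout-type=5 /dev/sdb
vm2$ sg_persist --out --reserve --param-rk=0x2 --prout-type=5 /dev/sdb
vm1$ sg_persist --in -r /dev/sdb
# take-over: VM1 preempts VM2's reservation using its own key
vm1$ sg_persist --out --preempt --param-rk=0x1 --param-sark=0x2 --prout-type=5 /dev/sdb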

(Although I'm not very familiar with how the SCSI protocol works, I suppose MS Failover Cluster tries to take over the SCSI reservation of a "failed" node to get exclusive access to the shared disk.)

Comment 43 Raz Tamir 2017-01-12 12:53:15 UTC
Hi Yaniv,
I only now saw the needinfo, which was set for rtamir instead of ratamir - that's the reason I didn't answer the previous questions.

SCSI reservation functionality is working for Linux-based VMs.
It is also tested in our automation with sgio='unfiltered'.

Comment 44 Jiri Belka 2017-01-12 13:57:18 UTC
(In reply to Raz Tamir from comment #43)
> Hi Yaniv,
> Only now I saw the needinfo for rtamir instead of ratamir - the reason I
> didn't answer the previous questions.
> 
> SCSI reservation functionality is working for linux based VM.
> Also tested in our automation with sgio='unfiltered'

RPM versions? Test flow description?

Comment 45 Raz Tamir 2017-01-12 14:57:33 UTC
ovirt-engine-4.1.0-0.4.master.20170111000229.git9ce0636.el7.centos.noarch
vdsm-4.19.1-24.git7747cad.el7.centos.x86_64
vdsm-xmlrpc-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-yajsonrpc-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-hook-ethtool-options-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-jsonrpc-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-cli-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-python-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-api-4.19.1-24.git7747cad.el7.centos.noarch
vdsm-hook-vmfex-dev-4.19.1-24.git7747cad.el7.centos.noarch

libvirt-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-network-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.2.x86_64
libvirt-client-2.0.0-10.el7_3.2.x86_64
libvirt-python-2.0.0-2.el7.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-lxc-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-config-network-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.2.x86_64
libvirt-daemon-2.0.0-10.el7_3.2.x86_64
libvirt-lock-sanlock-2.0.0-10.el7_3.2.x86_64

qemu-kvm-common-rhev-2.6.0-28.el7_3.3.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
qemu-kvm-tools-rhev-2.6.0-28.el7_3.3.x86_64
qemu-kvm-tools-ev-2.6.0-27.1.el7.x86_64


1) I created a VM with a disk + OS installed, attached a direct LUN to it, and enabled:

(screenshot attached)
- Enable SCSI Pass-Through
  - Allow Privileged SCSI I/O
  - Using SCSI Reservation  

2) Started the VM and ensured everything was working - created a FS on the new device
3) Tried to migrate the VM to a new host - failed, as expected

Comment 46 Raz Tamir 2017-01-12 14:58:45 UTC
Created attachment 1239981 [details]
screenshot - scsi reservation

Comment 47 Yaniv Kaul 2017-01-12 15:09:01 UTC
Raz:
but does SCSI reservation actually work? (see comment 41 for a basic test)

Comment 48 Jiri Belka 2017-01-12 15:20:20 UTC
IMO the confusion originates in the unclear option name 'Using SCSI Reservation'.

Some (although EL5) info https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_Fence_Devices/SCSI_Configuration.html#SCSI_tech_overview

SCSI reservation is used in clusters that share the same storage so that both nodes do not write to it at the same time. When a node fails, the active node fences it (e.g., disables the other node's access to the disk) and takes over the storage.

Comment 49 Yaniv Kaul 2017-01-12 15:30:31 UTC
(In reply to Jiri Belka from comment #48)
> IMO the confusion origins in unclear option name 'Using SCSI reservation'.
> 
> Some (although EL5) info
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/
> html/Configuration_Example_-_Fence_Devices/SCSI_Configuration.
> html#SCSI_tech_overview
> 
> SCSI reservation is used in clusters which use (share) same storage not to
> write both to it. When there's a failed node, active node fences (eg.
> disabled other node's access to disk) and takes storage over.

I don't see the confusion - this is exactly the feature.

Comment 50 Raz Tamir 2017-01-12 17:06:04 UTC
Michal,
Can you please advise how to check this on Linux?

Seems like this subject is unclear.

Following comment #41 - the setup I have:
- 2 VMs with 1 disk each + OS
- Added 1 direct lun, shared, scsi reservation enabled
- Created a FS on the direct lun and mounted it on both VMs

- $ sg_persist --out --register --param-sark=123abc /dev/sdb
  XtremIO   XtremApp          4020
  Peripheral device type: disk
  PR out: aborted command

- $ sg_persist -r /dev/sdb
  XtremIO   XtremApp          4020
  Peripheral device type: disk
  PR in (Read reservation): aborted command

Comment 51 Tomas Jelinek 2017-01-16 11:15:14 UTC
@Tal, do you know if someone could assist here?

Comment 52 Tal Nisan 2017-01-16 21:48:47 UTC
With the actual testing? Not sure; this has to be investigated. As Ala stated, the feature was basically making sure a property passes between the API and VDSM.

Comment 54 Peixiu Hou 2017-01-17 07:53:43 UTC
Created attachment 1241567 [details]
Validate SCSI-3 Persistent Reservation Failed

Comment 56 Tomas Jelinek 2017-01-18 14:38:14 UTC
After a discussion with Tal, assigning to storage for further investigation.

Comment 62 Jiri Belka 2017-01-27 23:21:21 UTC
I was using Wireshark and I saw the following:

- (libiscsi qemu initiator) I saw the SCSI commands 'PERSISTENT RESERVE IN (5E)'
  and 'PERSISTENT RESERVE OUT (5F)'

$ tshark -r iscsi-libiscsi.pcap -Y "scsi" -T fields -e frame.number -e ip.addr -e _ws.col.Info | grep "Persistent" | tail -n2     
16050   10.34.63.223,10.34.63.200       SCSI: Persistent Reserve Out LUN: 0x05 SCSI: Data Out LUN: 0x05 (Persistent Reserve Out Request Data)
16052   10.34.63.200,10.34.63.223       SCSI: Response LUN: 0x05 (Persistent Reserve Out) (Good)

- (direct LUN via OS block device) No 'PERSISTENT RESERVE IN (5E)' or 'PERSISTENT RESERVE OUT (5F)' SCSI commands were seen.
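
  For comparison, applying the same filter to the direct-lun capture (file name illustrative) returns nothing:
  $ tshark -r direct-lun.pcap -Y "scsi" -T fields -e frame.number -e ip.addr -e _ws.col.Info | grep -i "Persistent"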

Dumps are in attachments.

Comment 64 Vadim Rozenfeld 2017-01-29 12:08:26 UTC
Could you please upload the report logs located in the systemroot\Cluster\Reports folder on a clustered server?

Thanks,
Vadim.

Comment 66 Jiri Belka 2017-01-30 11:27:46 UTC
Before the shared direct LUN disk is attached to the two VMs (MSFC cluster):

[root@dell-r210ii-04 ~]# lsscsi -i
[0:0:0:0]    disk    ATA      WDC WD2502ABYS-1 3B05  /dev/sda   -
[4:0:0:0]    cd/dvd  TSSTcorp DVD+-RW TS-L633J D250  /dev/sr0   -
[6:0:0:0]    disk    EMC      Celerra          0002  -          -
[6:0:0:5]    disk    EMC      Celerra          0002  /dev/sdb   36006048ccbaa4f3dc3a845d699f776dd

SG_IO filtering is enabled, see the '0' values below. (Description: https://lkml.org/lkml/2013/1/24/321)

[root@dell-r210ii-04 ~]# find /sys -name 'unpriv_sgio'
/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda/queue/unpriv_sgio
/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/unpriv_sgio
/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sr0/queue/unpriv_sgio
/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/unpriv_sgio
/sys/devices/virtual/block/dm-0/queue/unpriv_sgio
/sys/devices/virtual/block/dm-1/queue/unpriv_sgio
/sys/devices/virtual/block/dm-2/queue/unpriv_sgio
/sys/devices/virtual/block/dm-3/queue/unpriv_sgio
/sys/devices/virtual/block/dm-4/queue/unpriv_sgio
/sys/devices/virtual/block/dm-5/queue/unpriv_sgio
/sys/devices/virtual/block/dm-6/queue/unpriv_sgio
/sys/devices/virtual/block/dm-7/queue/unpriv_sgio
/sys/devices/platform/host6/session1/target6:0:0/6:0:0:0/unpriv_sgio
/sys/devices/platform/host6/session1/target6:0:0/6:0:0:5/block/sdb/queue/unpriv_sgio
/sys/devices/platform/host6/session1/target6:0:0/6:0:0:5/unpriv_sgio
[root@dell-r210ii-04 ~]# find /sys -name 'unpriv_sgio'  |xargs cat
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

After attaching the disk - this is the relevant part of the domain dump for the shared direct LUN:

    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/36006048ccbaa4f3dc3a845d699f776dd'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <shareable/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

virsh # domblklist 2
Target     Source
------------------------------------------------
hdc        -
vda        /rhev/data-center/583c664e-0091-019d-00ca-0000000000a6/bc12d98c-83c4-46d3-b06b-a9feca15aaaf/images/34cda594-0b31-4d86-930a-23e5e5b36486/8347466a-f0c7-4ae1-9887-74294ec8d838
sda        /dev/mapper/36006048ccbaa4f3dc3a845d699f776dd


[root@dell-r210ii-04 ~]# ls -l /dev/mapper/36006048ccbaa4f3dc3a845d699f776dd
lrwxrwxrwx. 1 root root 7 Jan 30 11:58 /dev/mapper/36006048ccbaa4f3dc3a845d699f776dd -> ../dm-4

[root@dell-r210ii-04 ~]# find /sys -path '*/dm-4/*' -name 'unpriv_sgio' | xargs cat
1

^^ 'unpriv_sgio' is set to 1 for the above device. But even in this state MSFC validation was always failing.

I tried to put '1' into all iSCSI-LUN-related 'unpriv_sgio' paths; MSFC is still failing, but __some steps passed__ in comparison with the previous failing validation!!
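
For reference, roughly what was done (the 6:0:0 SCSI IDs are specific to this host):

[root@dell-r210ii-04 ~]# for f in $(find /sys -path '*/6:0:0:*/*' -name 'unpriv_sgio'); do echo 1 > "$f"; done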

[root@dell-r210ii-04 ~]# find /sys -path '*/dm-4/*' -name 'unpriv_sgio' | xargs cat
1
[root@dell-r210ii-04 ~]# find /sys -path '*/6:0:0:*/*' -name 'unpriv_sgio' | xargs cat
1
1
1

I'm attaching the latest MSFC validation report, in case it's needed for further investigation.

Comment 75 Peixiu Hou 2017-02-16 05:53:58 UTC
Hi all,

Additional MSFC validation tests:

1. EL 7.3 libvirt with "sgio=unfiltered" --Fail

kernel-3.10.0-495.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
virtio-scsi driver build-124

Failover cluster validation fails at the "Validate SCSI-3 Persistent Reservation" step and reports the error "Test Disk does not provide Persistent Reservation support for the mechanisms used by failover clusters". The failure details are as in the comment#54 and comment#67 attachments.

--qemu cli---
-drive file=/dev/disk/by-path/ip-10.66.4.129:3260-iscsi-iqn.2016-06.local.server:sas-lun-0,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,aio=threads -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0

2. RHVM 4.1 + RHEL7.3 host  --Fail

kernel-3.10.0-495.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
virtio-scsi driver build-124
Red Hat Virtualization Manager Version: 4.1.1-0.1.el7
vdsm-4.19.5-1.el7ev.x86_64

Tried the following 3 direct LUN settings:
1)
X activate
X shareable
X enable scsi pass-through
  X allow privileged scsi i/o
  X using scsi reservation
2)
X activate
X shareable
X enable scsi pass-through
  X allow privileged scsi i/o
3) 
X activate
X shareable
X enable scsi pass-through

All results are the same as in the EL 7.3 libvirt test above: failover cluster validation fails at the "Validate SCSI-3 Persistent Reservation" step and reports the error "Test Disk does not provide Persistent Reservation support for the mechanisms used by failover clusters".

For the failover validation result report, please refer to the attachment.

Comment 76 Peixiu Hou 2017-02-16 06:02:58 UTC
Created attachment 1250783 [details]
Failover cluster validation report with rhvm plus rhel7.3

All the tests tried in comment#75 had the same failure, so I am uploading a single failover cluster validation report.

Best Regards~
Peixiu

Comment 78 Jiri Belka 2017-02-16 08:28:09 UTC
I wrote somewhere that I had this problem while using an LIO target on EL7 and EMC VMX.

If I may recommend, please try with other iSCSI targets.

Comment 79 Martin Tessun 2017-02-17 11:40:23 UTC
Hi Jiri,

(In reply to Jiri Belka from comment #78)
> I wrote somewhere I had this problem while using LIO target on EL7 and EMC
> VMX.
> 
> If I could recommend, please try with other iSCSI targets.

Did you explicitly enable SCSI-3 PR for the VNX? As far as I know this needs to be explicitly configured and is disabled by default.

Comment 80 Vadim Rozenfeld 2017-02-21 00:27:23 UTC
Created attachment 1255905 [details]
iscsi trace

target_sequencer_start and target_cmd_complete traces obtained when running the failover cluster storage validation test on top of the iSCSI back-end. Pay attention to line 155, where status RESERVATION CONFLICT is returned in response to the PERSISTENT_RESERVE_OUT command.

Comment 81 Vadim Rozenfeld 2017-02-21 00:38:19 UTC
Created attachment 1255907 [details]
direct lun trace

target_sequencer_start and target_cmd_complete traces captured when running the failover cluster storage validation test on top of the tcm_loop back-end. The target always returns status GOOD in response to the PERSISTENT_RESERVE_OUT command, even when RESERVATION CONFLICT should be returned.

Comment 82 Vadim Rozenfeld 2017-02-21 02:32:13 UTC
I think I have found the problem in our tcm_loop configuration.
It should be something like this:

o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- fileio ................................................................................................. [Storage Objects: 1]
  | | o- disk01 ............................................. [/home/vrozenfe/work/images/disk01.img (10.0GiB) write-back activated]


  o- loopback ......................................................................................................... [Targets: 2]
  | o- naa.50014050c43bf79e ................................................................................. [naa.50014055aa9457b5]
  | | o- luns ............................................................................................................ [LUNs: 2]
  | |   o- lun0 ............................................................ [fileio/disk01 (/home/vrozenfe/work/images/disk01.img)]
  | o- naa.50014050c43bf79f ................................................................................. [naa.50014052fb5d5b75]
  |   o- luns ............................................................................................................ [LUNs: 1]
  |     o- lun0 ............................................................ [fileio/disk01 (/home/vrozenfe/work/images/disk01.img)]


As you can see, I created two different loopbacks backed by the same backstore.
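
Roughly, such a setup can be created with targetcli along these lines (the image path is from my example above; the naa WWNs are auto-generated by the loopback create command):

targetcli /backstores/fileio create disk01 /home/vrozenfe/work/images/disk01.img 10G
targetcli /loopback create
targetcli /loopback create
targetcli /loopback/naa.50014050c43bf79e/luns create /backstores/fileio/disk01
targetcli /loopback/naa.50014050c43bf79f/luns create /backstores/fileio/disk01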

Then we can see two different LUNs:

[vrozenfe@jack ~]$ lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sde  5:0:1:0    disk LIO-ORG  disk01           4.0  
sdc  4:0:1:0    disk LIO-ORG  disk01           4.0  
sda  0:0:0:0    disk ATA      WDC WD5000LPLX-7 1A01 sata

then if I define
DISC='/dev/sde'

-drive file=$DISC,if=none,media=disk,format=raw,rerror=stop,werror=stop,readonly=off,aio=threads,cache=none,cache.direct=on,id=drive-hotadd,serial=sas-test -device virtio-scsi-pci,id=scsi-hotadd,num_queues=1 -device scsi-block,drive=drive-hotadd,id=hotadd,bus=scsi-hotadd.0,bootindex=2 

and 
DISC='/dev/sde'

for another node, everything seems to be working fine for me.

Comment 83 Martin Tessun 2017-02-21 09:00:42 UTC
Hi Vadim,

(In reply to Vadim Rozenfeld from comment #82)
> I must be found the problem in out tcm_loop configuration.
> It should be somthing loke this:
> 
> 
> As you can see I created two different loopbacks backed up by the same
> backstore

Ok. So we either have the issue in iscsid (the initiator) or in the LIO target implementation, as you used loopback devices, and as such did not need to use iscsid for presenting the disks to your system. Correct?

> 
> then we can see two different luns
> 
> [vrozenfe@jack ~]$ lsblk --scsi
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
> sde  5:0:1:0    disk LIO-ORG  disk01           4.0  
> sdc  4:0:1:0    disk LIO-ORG  disk01           4.0  
> sda  0:0:0:0    disk ATA      WDC WD5000LPLX-7 1A01 sata
> 
> then if I define
> DISC='/dev/sde'
> 
> -drive
> file=$DISC,if=none,media=disk,format=raw,rerror=stop,werror=stop,
> readonly=off,aio=threads,cache=none,cache.direct=on,id=drive-hotadd,
> serial=sas-test -device virtio-scsi-pci,id=scsi-hotadd,num_queues=1 -device
> scsi-block,drive=drive-hotadd,id=hotadd,bus=scsi-hotadd.0,bootindex=2 
> 
> and 
> DISC='/dev/sde'
> 
> for another node, everything seems to be working fine me

So you did test the reservation with this one disk on two different VMs and it worked well. Did I understand this correctly?

If so, we still have three possible root-causes:

1. LIO Target
2. iscsid
3. multipath

Do you agree?
Thanks!
Martin

Comment 84 Andy Grover 2017-02-21 18:07:42 UTC
Vadim,

Yeah, if you just use one loopback then it looks like all reservations are coming from a single initiator, and since identical re-reservations are allowed, they all succeed. Solution as you found: use more than one.

In the iscsi case, LIO can tell requests are from different initiators because of different (initiator iqn, session_id).
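
(On the initiator host, the sessions and their identifiers can be listed with, for example, "iscsiadm -m session -P 1".)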

Comment 85 Vadim Rozenfeld 2017-02-22 01:58:22 UTC
(In reply to Andy Grover from comment #84)
> Vadim,
> 
> Yeah, if you just use one loopback then it looks like all reservations are
> coming from a single initiator, and since identical re-reservations are
> allowed, they all succeed. Solution as you found: use more than one.
> 
> In the iscsi case, LIO can tell requests are from different initiators
> because of different (initiator iqn, session_id).

Thanks a lot for your explanation.
It was my understanding too that for tcm_loop we need to create a dedicated
LUN/initiator for every VM/node, just like in iSCSI case.

Thank you again.
Vadim.

Comment 89 Peixiu Hou 2017-02-23 11:44:40 UTC
Hi Martin,

We tried the MSFC validation test with the RHV environment plus RHEL 7.3:

1. For "LIO target + iscsid connection and no multipathing" test , tested
it as this bug comment
https://bugzilla.redhat.com/show_bug.cgi?id=1111784#c75, it's failed.

2. For "<Netapp> target + iscsid connection multipathing", It's failed also. Failed with error "Failed while verifying removal of any Persistent Reservation on physical disk bb8cd9eb at node test1.msfc.com.".
Details pls refer to the attached report.

The direct LUN disk was attached with the following parameters:
X shareable
X enable scsi pass-through
   X allow privileged scsi i/o
   X using scsi reservation

3. For "<Vendor> target + iscsid connection and no multipathing", the test env have some problems now, will need some time to solve. We'll continue to test it, will update the result to this bug asap.


Best Regards~
Peixiu hou

Comment 90 Peixiu Hou 2017-02-23 11:49:48 UTC
Created attachment 1256881 [details]
Failover cluster validation report 2017_2_23

Comment 91 Martin Tessun 2017-02-23 13:31:44 UTC
Ok. So to sum up:

LIO+Loopback device:                     Working
LIO+iscsid device no multipathing:       Not Working
<NetApp>+iscsid device and multipathing: Not Working

Also according to https://bugzilla.redhat.com/show_bug.cgi?id=1195140#c69 it is working with a local SCSI disk on the host (instead of an iSCSI attached one) as well.

With that said, it looks like a bug in iscsid to me.
So probably we should open a bug on iscsid for solving this.

Could we verify that the config works with Fibre Channel disks (which I would expect to work, based on the current results)?

Comment 92 Miya Chen 2017-02-24 06:00:54 UTC
(In reply to Martin Tessun from comment #91)
> Ok. So to sum up:
> 
> LIO+Loopback device:                     Working
> LIO+iscsid device no multipathing:       Not Working
> <NetApp>+iscsid device and multipathing: Not Working
> 
> Also according to https://bugzilla.redhat.com/show_bug.cgi?id=1195140#c69 it
> is working with a local SCSI disk on the host (instead of an iSCSI attached
> one) as well.

Peixiu, could you please confirm that in the above test from bz#1195140 you used a local SCSI disk?

> 
> With that said, it looks like a bug in iscsid to me.
> So probably we should open a bug on iscsid for solving this.
> 
> Could we verify that the config is working with Fibrechannel disks (which I
> would expect from the current results).

Peixiu is working on it. Since the whole environment is borrowed from the RHVH QE team, we need to set everything up again (host install + NPIV setup + guest) and learn to use RHV-M, so it still needs some time to finish.

Comment 93 Peixiu Hou 2017-02-24 06:27:54 UTC
(In reply to Miya Chen from comment #92)
> (In reply to Martin Tessun from comment #91)
> > Ok. So to sum up:
> > 
> > LIO+Loopback device:                     Working
> > LIO+iscsid device no multipathing:       Not Working
> > <NetApp>+iscsid device and multipathing: Not Working
> > 
> > Also according to https://bugzilla.redhat.com/show_bug.cgi?id=1195140#c69 it
> > is working with a local SCSI disk on the host (instead of an iSCSI attached
> > one) as well.
> 
> Peixiu, could you please confirm that in above test of bz#1195140, you used
> local SCSI disk?


For bz#1195140 comment 69, I tested successfully with LIO + the qemu iSCSI initiator, which passes the LIO iSCSI target LUN through to the 2 VMs directly and does not need to log the iSCSI target in to the host. The detailed qemu command was "-drive file=iscsi://10.66.4.129/iqn.2016-06.local.server:sas/0,if=none,media=disk,format=raw,rerror=stop,werror=stop,readonly=off,aio=threads,cache=none,cache.direct=on,id=drive-hotadd,serial=sas-test -device scsi-block,drive=drive-hotadd,bus=scsi-hotadd.0". It was not a local SCSI disk test.

And for the LIO + iscsid device test, the LIO iSCSI target first needs to be logged in to the host. I tried both with the qemu command directly and via RHEV-M; both failed.


Best Regards~
Peixiu Hou

Comment 94 Martin Tessun 2017-02-24 08:45:51 UTC
OK. So it looks like the culprit here is in iscsid.
Please correct me if I am wrong:

1. LIO target with qemu initiator             :     Working
2. LIO target with iscsid and no multipathing : Not Working
3. <NetApp>+iscsid device and multipathing    : Not Working
4. LIO Target + Loopback initialisation       :     Working

So I believe we need an FC + multipathing test to rule out an additional issue with multipathing (which currently cannot be tested well with iscsid due to the issue in that component).

The only different component in the "not working" tests is iscsid.
Anything I missed?

Comment 96 Peixiu Hou 2017-02-26 07:41:24 UTC
Created attachment 1257785 [details]
vm paused after attached fiberchannel direct lun with SCSI passthrough enable

Comment 106 Peixiu Hou 2017-03-02 11:23:46 UTC
Tried to test the above-mentioned scenario at the qemu level: log the same iSCSI target LUN in to 2 different hosts, with each cluster VM running on a different host. With that, the MSFC validation test passes.

qemu command line:
-drive file=/dev/sdb,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0


Best Regards~
Peixiu Hou

Comment 107 Peixiu Hou 2017-03-02 11:56:45 UTC
Created attachment 1259111 [details]
cannot choose the same iscsi target to login to another host

At the RHEV-M level, we tried to log the same iSCSI target LUN in to 2 different hosts. We can log an iSCSI target in to one host via the "create direct LUN disk" page, but we cannot log the same iSCSI target in to another host this way. We tried both "1 Data_Center + 1 Cluster + 2 hosts" and "2 Data_Centers + 2 Clusters + 2 hosts" configurations and hit the same problem; a screenshot of it is attached.

Best Regards~
Peixiu Hou

Comment 108 Martin Tessun 2017-03-02 14:40:47 UTC
Hi Pei,

So do I understand correctly that with 2 different initiators on the host, and the 2 resulting disks attached to 2 different VMs, that setup does pass?

Trying some ascii art:



+--------------+      +-------------+
|  HOST        |      |             |
|    vm1-init1-+------+             |
|              |      |   TARGET 1  |
|    vm2-init2-+------+             |
|              |      |             |
+--------------+      +-------------+

So for the RHV test we probably need two different hosts running the VMs then.

That said, I think there is no bug in qemu, but we could file an RFE in case we want to correctly handle SCSI-3 PR within guests on the same host (and the same target).

Any further thoughts?

Comment 119 Peixiu Hou 2017-03-14 09:08:32 UTC
(In reply to Mike Christie from comment #112)
> (In reply to Peixiu Hou from comment #109)
> > And it seems that only one initiator can exist on one host.
> > The iscsid login command as:
> > iscsiadm -m discovery -t sendtargets -p 10.66.4.129:3260
> > iscsiadm -m node -T iqn.2016-06.local.server:sas -p 10.66.4.129 --login
> > 
> 
> You can create multiple initiator instances on a host by creating a iscsi
> iface with that sets the initiator name.
> 
> iscsiadm -m iface -o new -I iface1
> iscsiadm -m iface -I iface1 -o update -n iface.initiatorname -v
> iqn.myinitiatorname1
> 
> iscsiadm -m iface -o new -I iface2
> iscsiadm -m iface -I iface2 -o update -n iface.initiatorname -v
> iqn.myinitiatorname2
> 
> 
> iscsiadm -m discovery -t sendtargets -p 10.66.4.129:3260 -I iface1 -I face2
> 
> iscsiadm -node -T iqn.2016-06.local.server:sas -p 10.66.4.129 --login
> 
> will then login to the target through both ifaces. You can also pass in
> specific ifaces to login through using the -I argument.
> 
> You can also use a single initiator and login multiple sessions to the same
> target portal, and the initiator will use different iSIDs for each session
> so it will look like different initiator ports which will get you the same
> result.
> 
> 
> So using the default initiatorname/iface do
> 
> iscsiadm -m discovery -t sendtargets -p 10.66.4.129:3260 
> 
> then 
> 
> iscsiadm -node -T iqn.2016-06.local.server:sas -p 10.66.4.129 --login
> 
> and do it again with -o new to create a second session. You can do this
> multiple times to create as many sessions as you need.
> 
> iscsiadm -node -T iqn.2016-06.local.server:sas -p 10.66.4.129 --login -o new


Thanks a lot, Mike. I ran the MSFC validation test on one host with 2 initiator instances, and it passes.

The environment configuration steps are as follows:
1.On the host:
#iscsiadm -m iface -o new -I iface1
#iscsiadm -m iface -I iface1 -o update -n iface.initiatorname -v iqn.1994-05.com.redhat:myinitiatorname1
#iscsiadm -m iface -o new -I iface2
#iscsiadm -m iface -I iface2 -o update -n iface.initiatorname -v iqn.1994-05.com.redhat:myinitiatorname2
#iscsiadm -m discovery -t sendtargets -p 10.66.4.129:3260 -I iface1 -I iface2
10.66.4.129:3260,1 iqn.2016-06.local.server:sas
10.66.4.129:3260,1 iqn.2016-06.local.server:sas

2. Add the newly created initiator names to the LIO iSCSI target ACLs list:
 /iscsi/iqn.20...sas/tpg1/acls> ls
o- acls .................................................................................... [ACLs: 6]
  o- iqn.1994-05.com.redhat:myinitiatorname1 ........................................ [Mapped LUNs: 1]
  | o- mapped_lun0 .......................................................... [lun0 fileio/disk0 (rw)]
  o- iqn.1994-05.com.redhat:myinitiatorname2 ........................................ [Mapped LUNs: 1]
  | o- mapped_lun0 .......................................................... [lun0 fileio/disk0 (rw)]
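
For reference, the ACL entries above were created with something like this in the targetcli shell:

/iscsi/iqn.20...sas/tpg1/acls> create iqn.1994-05.com.redhat:myinitiatorname1
/iscsi/iqn.20...sas/tpg1/acls> create iqn.1994-05.com.redhat:myinitiatorname2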

3. #iscsiadm -m node -T iqn.2016-06.local.server:sas -p 10.66.4.129 --login
Logging in to [iface: default, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] (multiple)
Logging in to [iface: iface1, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] (multiple)
Logging in to [iface: iface2, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] (multiple)
Login to [iface: default, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] successful.
Login to [iface: iface1, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] successful.
Login to [iface: iface2, target: iqn.2016-06.local.server:sas, portal: 10.66.4.129,3260] successful.

4. #lsscsi
[0:0:0:0]    disk    SEAGATE  ST9500430SS      DS62  -        
[0:0:1:0]    disk    SEAGATE  ST9500430SS      DS62  -        
[0:1:0:0]    disk    Dell     VIRTUAL DISK     1028  /dev/sda 
[20:0:0:0]   disk    LIO-ORG  disk0            4.0   /dev/sdb 
[21:0:0:0]   disk    LIO-ORG  disk0            4.0   /dev/sdd 
[22:0:0:0]   disk    LIO-ORG  disk0            4.0   /dev/sdc 

5. Create 2 VMs with /dev/sdc & /dev/sdd passthrough:
qemu cli for VM1:
-drive file=/dev/sdc,format=raw,if=none,id=drive-scsi0-0-0-2,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-0-2,id=scsi0-0-0-2
qemu cli for VM2:
-drive file=/dev/sdd,format=raw,if=none,id=drive-scsi0-0-0-2,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-0-2,id=scsi0-0-0-2

6. MSFC validation configuration and test.

Best Regards~
Peixiu

Comment 128 lijin 2017-03-24 02:43:42 UTC
Hi Jiri,

Could you please share with us how to log the same iSCSI target in to 2 different hosts in the RHEV-M GUI? Can any detailed steps or docs be provided? Thanks a lot~


Best Regards~
Peixiu Hou

Comment 129 Jiri Belka 2017-03-24 09:20:17 UTC
(In reply to lijin from comment #128)
> Hi Jiri,
> 
> Could you please share us how to login a same iscsi target to 2 different
> hosts in rhevm GUI? any detail steps or doc can be provided? Thanks a lot~
> 
> 
> Best Regards~
> Peixiu Hou

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1-beta/html-single/administration_guide/#sect-Preparing_and_Adding_Block_Storage

I don't find anything magic in it; you add the storage via the Administration Portal and it will be attached to all hosts in the background.

Comment 130 Peixiu Hou 2017-03-24 09:57:51 UTC
MSFC validation test results summary:
1. From the qemu level:
-------------------------------------------------------------------------------
Scenario                     Result (qemu level)     MSFC validation failing step

LIO + qemu initiator         Passed                         Passed

LIO + iscsid(1 initiator)    Failed            Failed at 'Validate SCSI-3 PR'

LIO + iscsid(2 initiators)   Passed                         Passed

IBM + FC + multipath         Failed            Failed at 'Validate SCSI-3 PR'

Used versions:
kernel-3.10.0-514.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.6.x86_64
virtio-win-scsi-build 124

2. From the RHEV-M level:
-------------------------------------------------------------------------------
Scenario                     Result (RHEV-M level)   MSFC validation failing step

LIO + qemu initiator         no support                     none

LIO + iscsid(1 initiator)    Failed              'List Disks To Be Validated'

Netapp+iSCSI(HBA)+multipath  Failed              'List Disks To Be Validated'

IBM + FC(HBA) + multipath    Failed              'List Disks To Be Validated'

Used versions:
vdsm-4.19.6-1.el7ev.x86_64
rhevm-4.1.1.2-0.1.el7.noarch
kernel-3.10.0-514.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.6.x86_64
virtio-win-scsi-build 124

For details of the 'Validate SCSI-3 PR' failure, please refer to the following report:
https://bugzilla.redhat.com/attachment.cgi?id=1250783

For details of the 'List Disks To Be Validated' failure, please refer to the following report:
https://bugzilla.redhat.com/attachment.cgi?id=1256881

Next, we'll try to test the LIO + iscsid (2 initiators) scenario at the RHEV-M level; any result will be updated here.

Best Regards~
Peixiu Hou

Comment 132 Jiri Belka 2017-03-24 13:25:42 UTC
I just got a Windows 2016 VM stuck when creating a volume on the shared disk - https://bugzilla.redhat.com/show_bug.cgi?id=1435660

...
PS C:\Users\ad-w2k12r2> Get-WmiObject Win32_PnPSignedDriver| select devicename, driverversion, driverdate | where {$_.devicename -like "*scsi*" }

devicename                                  driverversion   driverdate
----------                                  -------------   ----------
Red Hat VirtIO SCSI controller              62.73.104.12600 20160811000000.******+***
Red Hat VirtIO SCSI pass-through controller 62.73.104.12400 20160729000000.******+***
iscsi

main_channel_handle_parsed: net test: latency 23.343000 ms, bitrate 4249112 bps (4.052269 Mbps) LOW BANDWIDTH
red_dispatcher_set_cursor_peer: 
inputs_connect: inputs channel client create
main_channel_handle_parsed: agent start
2017-03-24T11:24:42.359200Z qemu-kvm: virtio-serial-bus: Unexpected port id 736681576 for device virtio-serial0.0
2017-03-24T11:24:45.382932Z qemu-kvm: virtio-serial-bus: Unexpected port id 1185792 for device virtio-serial0.0
2017-03-24T11:25:06.623340Z qemu-kvm: virtio-serial-bus: Unexpected port id 3528894800 for device virtio-serial0.0
2017-03-24T11:25:06.624055Z qemu-kvm: virtio-serial-bus: Unexpected port id 739323064 for device virtio-serial0.0
2017-03-24T11:25:14.019747Z qemu-kvm: virtio-serial-bus: Unexpected port id 3528894800 for device virtio-serial0.0
2017-03-24T11:25:57.144946Z qemu-kvm: virtio-serial-bus: Unexpected port id 3491010928 for device virtio-serial0.0
2017-03-24T11:27:26.876178Z qemu-kvm: virtio-serial-bus: Unexpected port id 0 for device virtio-serial0.0
2017-03-24T11:27:29.913275Z qemu-kvm: virtio-serial-bus: Unexpected port id 3552624224 for device virtio-serial0.0
2017-03-24T11:27:32.784350Z qemu-kvm: virtio-serial-bus: Unexpected port id 739049208 for device virtio-serial0.0
2017-03-24T11:27:34.025448Z qemu-kvm: virtio-serial-bus: Guest failure in adding device virtio-serial0.0
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
red_channel_client_disconnect: rcc=0x7fa34bdca000 (channel=0x7fa349cccc00 type=2 id=0)
red_channel_client_disconnect: rcc=0x7fa34aacd000 (channel=0x7fa349d71080 type=4 id=0)
red_channel_client_disconnect: rcc=0x7fa34aabe000 (channel=0x7fa349d86000 type=3 id=0)
red_channel_client_disconnect: rcc=0x7fa34ab1a000 (channel=0x7fa349d7e000 type=1 id=0)
main_channel_client_on_disconnect: rcc=0x7fa34ab1a000
red_client_destroy: destroy client 0x7fa34dcbbb80 with #channels=4
red_dispatcher_disconnect_cursor_peer: 
red_dispatcher_disconnect_display_peer: 
...

Comment 134 Peixiu Hou 2017-03-28 11:28:45 UTC
The following are my latest test results (I used the same environment to complete these tests):

1. RHV 4.1 + RHEL 7.3 + 2 hosts + direct LUN disk  --Failed at the "List Disks To Be Validated" step. Reported error message: "Failed while verifying removal of any Persistent Reservation on physical disk bb8cd9eb at node test1.msfc.com."

Create a new direct LUN disk and attach it to each VM with the following settings:
X Shareable
X Enable SCSI Pass-Through
  X Allow Privileged SCSI I/O
  X Using SCSI Reservation
 
qemu command line:
vm1: 
/usr/libexec/qemu-kvm -name guest=peixiu_vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-peixiu_vm1/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 5e67ba8f-5056-4b09-b286-7785b21a391b -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.3-7.el7,serial=34353737-3035-4E43-3735-323530375A31,uuid=5e67ba8f-5056-4b09-b286-7785b21a391b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-peixiu_vm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-28T09:42:23,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/48852d3b-4bd1-4bfc-9d59-762c3470aba4/e4a3c8e8-fc19-4f8d-aed8-b4ce1f5541ec,format=raw,if=none,id=drive-ide0-0-0,serial=48852d3b-4bd1-4bfc-9d59-762c3470aba4,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/rhev/data-center/mnt/10.66.4.243:_home_iso-test/abb85038-c616-4667-8e99-b29c345e3bd3/images/11111111-1111-1111-1111-111111111111/en_windows_server_2016_x64_dvd_9327751.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=31,id=hostnet0,vhost=on,vhostfd=33 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/5e67ba8f-5056-4b09-b286-7785b21a391b.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/5e67ba8f-5056-4b09-b286-7785b21a391b.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=10.73.72.144,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on
vm2:
/usr/libexec/qemu-kvm -name guest=peixiu_vm2,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-peixiu_vm2/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem,+vmx -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.3-4.el7,serial=4C4C4544-0036-4410-804D-B1C04F543258,uuid=d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-peixiu_vm2/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-28T09:42:51,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/e09ee7e5-1d0a-48d7-bd07-e7087cfe3635/f16e06b7-e533-4a2c-8e27-b7918137782b,format=raw,if=none,id=drive-ide0-0-0,serial=e09ee7e5-1d0a-48d7-bd07-e7087cfe3635,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/rhev/data-center/mnt/10.66.4.243:_home_iso-test/abb85038-c616-4667-8e99-b29c345e3bd3/images/11111111-1111-1111-1111-111111111111/en_windows_server_2016_x64_dvd_9327751.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:52,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d58a72a1-6f72-4a8f-8578-c80dd9cf04dc.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d58a72a1-6f72-4a8f-8578-c80dd9cf04dc.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.73.72.10,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on

2. qemu directly + rhel7.3 + 2 hosts + /dev/sd* directly --Passed

Note: logging the iSCSI target in to the 2 hosts was done through the RHEV-M UI; after a successful login, /dev/sd* is listed by "fdisk -l".

qemu command line:
vm1: 
/usr/libexec/qemu-kvm -name guest=peixiu_vm1,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 5e67ba8f-5056-4b09-b286-7785b21a391b -rtc base=2017-03-27T09:43:00,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/48852d3b-4bd1-4bfc-9d59-762c3470aba4/e4a3c8e8-fc19-4f8d-aed8-b4ce1f5541ec,format=raw,if=none,id=drive-ide0-0-0,serial=48852d3b-4bd1-4bfc-9d59-762c3470aba4,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/dev/sdk,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -vnc :1 -monitor stdio
vm2: 
/usr/libexec/qemu-kvm -name guest=peixiu_vm2,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -rtc base=2017-03-28T02:22:38,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/e09ee7e5-1d0a-48d7-bd07-e7087cfe3635/f16e06b7-e533-4a2c-8e27-b7918137782b,format=raw,if=none,id=drive-ide0-0-0,serial=e09ee7e5-1d0a-48d7-bd07-e7087cfe3635,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/dev/sdc,format=raw,if=none,id=drive-scsi0-0-0-1,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:52,bus=pci.0,addr=0x3 -vnc :1 -monitor stdio

3. qemu directly + rhel7.3 + 2 hosts + /dev/mapper/3600145dbd15be0faff435e8b480d4e2           --Passed

Note: logging the iSCSI target in to the 2 hosts was done through the RHEV-M UI; after a successful login, /dev/mapper/3600145dbd15be0faff435e8b480d4e2 is listed by "fdisk -l".

qemu command line:
VM1:
/usr/libexec/qemu-kvm -name guest=peixiu_vm1,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 5e67ba8f-5056-4b09-b286-7785b21a391b -rtc base=2017-03-27T09:43:00,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/48852d3b-4bd1-4bfc-9d59-762c3470aba4/e4a3c8e8-fc19-4f8d-aed8-b4ce1f5541ec,format=raw,if=none,id=drive-ide0-0-0,serial=48852d3b-4bd1-4bfc-9d59-762c3470aba4,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -vnc :1 -monitor stdio
VM2:
/usr/libexec/qemu-kvm -name guest=peixiu_vm2,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -rtc base=2017-03-28T02:22:38,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/e09ee7e5-1d0a-48d7-bd07-e7087cfe3635/f16e06b7-e533-4a2c-8e27-b7918137782b,format=raw,if=none,id=drive-ide0-0-0,serial=e09ee7e5-1d0a-48d7-bd07-e7087cfe3635,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-1,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:52,bus=pci.0,addr=0x3 -vnc :1 -monitor stdio

4. qemu directly (command line close to the one RHV uses) + rhel7.3 + 2 hosts + /dev/mapper/3600145dbd15be0faff435e8b480d4e2   --Passed

Differences between the qemu command line used directly and the one RHV uses:
1) removed "-S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-peixiu_vm1/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off"
2) removed "-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-peixiu_vm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control"
3) changed "-netdev tap,fd=31,id=hostnet0,vhost=on,vhostfd=33" to "-netdev tap,id=hostnet0"
4) removed "-spice tls-port=5901,addr=10.73.72.144,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on"
5) added "-vnc :1 -monitor stdio"

Full qemu command line:
VM1:
/usr/libexec/qemu-kvm -name guest=peixiu_vm2,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem,+vmx -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.3-4.el7,serial=4C4C4544-0036-4410-804D-B1C04F543258,uuid=d58a72a1-6f72-4a8f-8578-c80dd9cf04dc -no-user-config -nodefaults -rtc base=2017-03-28T09:42:51,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/e09ee7e5-1d0a-48d7-bd07-e7087cfe3635/f16e06b7-e533-4a2c-8e27-b7918137782b,format=raw,if=none,id=drive-ide0-0-0,serial=e09ee7e5-1d0a-48d7-bd07-e7087cfe3635,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/rhev/data-center/mnt/10.66.4.243:_home_iso-test/abb85038-c616-4667-8e99-b29c345e3bd3/images/11111111-1111-1111-1111-111111111111/en_windows_server_2016_x64_dvd_9327751.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:52,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d58a72a1-6f72-4a8f-8578-c80dd9cf04dc.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d58a72a1-6f72-4a8f-8578-c80dd9cf04dc.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on -vnc :1 -monitor stdio
VM2:
/usr/libexec/qemu-kvm -name guest=peixiu_vm1,debug-threads=on -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 5e67ba8f-5056-4b09-b286-7785b21a391b -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.3-7.el7,serial=34353737-3035-4E43-3735-323530375A31,uuid=5e67ba8f-5056-4b09-b286-7785b21a391b -no-user-config -nodefaults -rtc base=2017-03-28T09:42:23,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/a41ee8e2-4e1c-4549-865d-3c9cf1bcc727/c6f00513-2812-4fbd-ab28-b6aba6079280/images/48852d3b-4bd1-4bfc-9d59-762c3470aba4/e4a3c8e8-fc19-4f8d-aed8-b4ce1f5541ec,format=raw,if=none,id=drive-ide0-0-0,serial=48852d3b-4bd1-4bfc-9d59-762c3470aba4,cache=none,werror=stop,rerror=stop,aio=threads -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/rhev/data-center/mnt/10.66.4.243:_home_iso-test/abb85038-c616-4667-8e99-b29c345e3bd3/images/11111111-1111-1111-1111-111111111111/en_windows_server_2016_x64_dvd_9327751.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/dev/mapper/36001405dbd15be0faff435e8b480d4e2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=native -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/5e67ba8f-5056-4b09-b286-7785b21a391b.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/5e67ba8f-5056-4b09-b286-7785b21a391b.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on -vnc :1 -monitor stdio

Versions used:
kernel-3.10.0-514.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.6.x86_64
virtio-win-1.9.0-3.el7
rhevm-4.1.1.6-0.1.el7
vdsm-4.19.10-1.el7ev.x86_64
libvirt-2.0.0-10.el7_3.5.x86_64

Comment 135 Jiri Belka 2017-03-30 07:49:58 UTC
> 1. rhv4.1 + rhel7.3 + 2 hosts + Direct Lun disk  --Failed at "List Disk To
> Be Validated" step. Report error message "Failed while verifying removal of
> any Persistent Reservation on physical disk bb8cd9eb at node test1.msfc.com."

The direct LUN is passed to the qemu process as /dev/mapper/$scsi_id (a symlink to the /dev/dm-* node; RHV passes the device by LUN ID) with sgio='unfiltered' set by libvirt. The qemu process runs as the qemu user, so SG_IO filtering is applied.

> 2. qemu directly + rhel7.3 + 2 hosts + /dev/sd* directly --Passed
> 
> Note: the action that iscsi target login to 2 hosts is completed through
> rhevm UI, and after login successfully, /dev/sd* will list in "fdisk -l".

The direct LUN is passed to the qemu process as /dev/sd*. SG_IO filtering applies only to non-root processes, so running qemu as root bypasses it.

> 3. qemu directly + rhel7.3 + 2 hosts +
> /dev/mapper/3600145dbd15be0faff435e8b480d4e2           --Passed

The direct LUN is passed to the qemu process as /dev/mapper/$scsi_id. SG_IO filtering applies only to non-root processes, so running qemu as root bypasses it.

To mimic the RHV environment, the qemu process should run as a non-root user that has r/w access to the host's block device for the specified iSCSI LUN, with the SG_IO filtering (unpriv_sgio) setting taken care of.
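
A minimal sketch of such a setup (device path, user name and qemu options are illustrative, not taken from the logs above; note that, as the following comments show, the slave sd* paths of a dm device also need unpriv_sgio enabled):

# give a non-root user r/w access to the LUN and enable unprivileged SG_IO on its dm node
DEV=/dev/mapper/36001405dbd15be0faff435e8b480d4e2
setfacl -m u:qemu:rw "$DEV"
echo 1 > /sys/block/$(basename $(readlink -f "$DEV"))/queue/unpriv_sgio
# start qemu as the non-root user, attaching the LUN as a scsi-block device
sudo -u qemu /usr/libexec/qemu-kvm -machine pc-i440fx-rhel7.3.0,accel=kvm \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file="$DEV",format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,aio=native \
  -device scsi-block,bus=scsi0.0,drive=drive-scsi0-0-0-0 \
  -vnc :1 -monitor stdio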

Comment 136 Martin Tessun 2017-03-30 09:02:02 UTC
Update on the root cause of the issue:

Summary:
Because qemu isn't run as root, libvirt enables "unpriv_sgio" on the corresponding disks.
When a dm device is used, the children of that device do not inherit the unpriv_sgio setting.
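
A quick way to see the mismatch on a host (dm-141 is the example device from the analysis below):

# print each unpriv_sgio file together with its current value, for the dm node and its slave paths
grep . /sys/block/dm-141/queue/unpriv_sgio /sys/block/dm-141/slaves/*/queue/unpriv_sgio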

Analysis:

Prerequisites:
1. Install W2k12 AD on a VM in RHV.
2. Create 2 VMs in RHV with an iSCSI disk attached as a direct LUN,
   with SGIO=unfiltered and "Persistent Reservation" enabled.
3. Create an affinity rule that does not allow the two VMs to be started on the same host.
4. Start up the VMs, install W2k12 or W2k16 and add them to the AD.
5. Install the failover clustering feature accordingly.

Testrun:
1. Fire up the VMs
2. Run cluster validation.
   ==> Fails with "cannot remove persistent reservation" in "List Disks To Be Validated"
3. Check the unpriv_sgio setting for the direct attached LUN
   ==> Shows that only the dm device has unpriv_sgio set to "1"
4. Set unpriv_sgio to "1" for all backing devices of that device-mapper device
5. Repeat Steps 3 and 4 on the second host.
6. Rerun the cluster validation
   ==> This time it passes.

Some command lines:
[root@inf44 devices]# cat $(find . -name unpriv_sgio) | sort -n | uniq -c
    179 0
      1 1

The multipath device contains 8 different paths in my setup:

3600a09803830326d51244a37592f516f dm-141 NETAPP  ,LUN C-Mode     
size=200G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:1 sdm 8:192 active ready running
| |- 12:0:0:1 sdo 8:224 active ready running
| |- 16:0:0:1 sdw 65:96 active ready running
| `- 15:0:0:1 sdv 65:80 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 10:0:0:1 sdk 8:160 active ready running
  |- 9:0:0:1  sdj 8:144 active ready running
  |- 14:0:0:1 sds 65:32 active ready running
  `- 13:0:0:1 sdr 65:16 active ready running

So checking the corresponding unpriv_sgio values shows:

/sys/devices/virtual/block/dm-141/queue/unpriv_sgio:1
/sys/devices/platform/host10/session7/target10:0:0/10:0:0:1/block/sdk/queue/unpriv_sgio:0
/sys/devices/platform/host11/session8/target11:0:0/11:0:0:1/block/sdm/queue/unpriv_sgio:0
/sys/devices/platform/host12/session9/target12:0:0/12:0:0:1/block/sdo/queue/unpriv_sgio:0
/sys/devices/platform/host13/session10/target13:0:0/13:0:0:1/block/sdr/queue/unpriv_sgio:0
/sys/devices/platform/host14/session11/target14:0:0/14:0:0:1/block/sds/queue/unpriv_sgio:0
/sys/devices/platform/host15/session12/target15:0:0/15:0:0:1/block/sdv/queue/unpriv_sgio:0
/sys/devices/platform/host16/session13/target16:0:0/16:0:0:1/block/sdw/queue/unpriv_sgio:0
/sys/devices/platform/host9/session6/target9:0:0/9:0:0:1/block/sdj/queue/unpriv_sgio:0

Enable unpriv_sgio for *all* affected devices.
[root@inf44 devices]# for I in /sys/devices/platform/host*/session*/target*/*/block/sd[mowvkjsr]/queue/unpriv_sgio;do echo 1 > $I;done
[root@inf44 devices]# grep -e 0 -e 1 /sys/devices/virtual/block/dm-141/queue/unpriv_sgio /sys/devices/platform/host*/session*/target*/*/block/sd[mowvkjsr]/queue/unpriv_sgio
/sys/devices/virtual/block/dm-141/queue/unpriv_sgio:1
/sys/devices/platform/host10/session7/target10:0:0/10:0:0:1/block/sdk/queue/unpriv_sgio:1
/sys/devices/platform/host11/session8/target11:0:0/11:0:0:1/block/sdm/queue/unpriv_sgio:1
/sys/devices/platform/host12/session9/target12:0:0/12:0:0:1/block/sdo/queue/unpriv_sgio:1
/sys/devices/platform/host13/session10/target13:0:0/13:0:0:1/block/sdr/queue/unpriv_sgio:1
/sys/devices/platform/host14/session11/target14:0:0/14:0:0:1/block/sds/queue/unpriv_sgio:1
/sys/devices/platform/host15/session12/target15:0:0/15:0:0:1/block/sdv/queue/unpriv_sgio:1
/sys/devices/platform/host16/session13/target16:0:0/16:0:0:1/block/sdw/queue/unpriv_sgio:1
/sys/devices/platform/host9/session6/target9:0:0/9:0:0:1/block/sdj/queue/unpriv_sgio:1
[root@inf44 devices]#

Repeat the above steps for the second host.
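
The same fix can be written generically by walking the dm node's slaves directory instead of listing the sd* letters by hand (a sketch only, to be run on each host; the /dev/mapper alias is the example WWID from above):

#!/bin/sh
# propagate unpriv_sgio from a multipath device to all of its slave paths
MPDEV=/dev/mapper/3600a09803830326d51244a37592f516f   # example alias
DM=$(basename "$(readlink -f "$MPDEV")")              # resolves to e.g. dm-141
echo 1 > "/sys/block/$DM/queue/unpriv_sgio"
for SLAVE in /sys/block/"$DM"/slaves/*; do
  echo 1 > "$SLAVE/queue/unpriv_sgio"
done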

Comment 137 Martin Tessun 2017-03-30 12:05:46 UTC
Hi John,

can you have a look here? Maybe libvirt needs to propagate unpriv_sgio down the complete device chain, as dm doesn't seem to do it.

Thanks,
Martin

Comment 138 Yaniv Kaul 2017-03-30 12:13:45 UTC
(In reply to Martin Tessun from comment #136)
> Update on the root cause of the issue:
> 
> Summary:
> As qemu isn't run as root, libvirt sets the corresponding disks to
> "unpriv_sgio" enabled.
> In case a dm-device is used, the "childrens" of this device do not get the
> unpriv_sgio inherited.
> 

But that should have failed Linux SCSI based clustering as well, right?

Comment 139 John Ferlan 2017-03-30 12:28:26 UTC
My knowledge of, and experience with, unpriv_sgio at the device layer is extremely limited. Here are a couple of reference BZs describing the issues seen, which I've kept tagged in my BZ email folder because for some reason I knew there would be other questions some day: one for dm-mpath is bz 1254316 and one for an FC disk is bz 1202723.

My vague recollection from the previous issues is that, as far as libvirt is concerned, it puts that unfiltered tag in place and attempts to set unpriv_sgio at a much higher level. Walking some "chain" under the covers of dm-mpath wasn't something that was possible.

Comment 140 Martin Tessun 2017-03-30 12:50:32 UTC
Just to add some more information (see added SeeAlso BZ).

Especially https://bugzilla.redhat.com/show_bug.cgi?id=1254316#c2 is relevant, as we may run into issues with multipath there.

A change in multipath.conf should also solve this issue (this needs to be verified).

Still the cluster running in the VM doesn't know about the paths and cannot change anything there.

For iSCSI it is working fine as the initiator name stays the same and as such the reservation is kept on all paths.

But what about FibreChannel?

Adding Ben for his take on this.
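
One illustrative way to check the per-path behaviour (device names follow the multipath example from comment 136) is to read the registered keys through the dm node and then through each underlying sd device, since every FC path is a separate initiator port while all iSCSI sessions share one initiator name:

# read registered PR keys on the multipath node and on every slave path
sg_persist --in --read-keys --device=/dev/mapper/3600a09803830326d51244a37592f516f
for P in /sys/block/dm-141/slaves/*; do
  sg_persist --in --read-keys --device=/dev/"$(basename "$P")"
done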

Comment 141 Martin Tessun 2017-03-30 12:51:58 UTC
(In reply to Yaniv Kaul from comment #138)
> (In reply to Martin Tessun from comment #136)
> > Update on the root cause of the issue:
> > 
> > Summary:
> > As qemu isn't run as root, libvirt sets the corresponding disks to
> > "unpriv_sgio" enabled.
> > In case a dm-device is used, the "childrens" of this device do not get the
> > unpriv_sgio inherited.
> > 
> 
> But that should have failed Linux SCSI based clustering as well, right?

Yes. And Jiri checked it and it also did break S3-PR in a Linux guest the same way.

Comment 142 Jiri Belka 2017-03-30 13:16:09 UTC
(In reply to Martin Tessun from comment #141)
> (In reply to Yaniv Kaul from comment #138)
> > (In reply to Martin Tessun from comment #136)
> > > Update on the root cause of the issue:
> > > 
> > > Summary:
> > > As qemu isn't run as root, libvirt sets the corresponding disks to
> > > "unpriv_sgio" enabled.
> > > In case a dm-device is used, the "childrens" of this device do not get the
> > > unpriv_sgio inherited.
> > > 
> > 
> > But that should have failed Linux SCSI based clustering as well, right?
> 
> Yes. And Jiri checked it and it also did break S3-PR in a Linux guest the
> same way.

Not me, it was Raz, after Yaniv asked him in comment #29. So I'm not sure why it worked on Linux; maybe the tests on Linux were wrong (see comments #47 and #50)?

Comment 143 Ben Marzinski 2017-03-30 22:04:48 UTC
So, reading through this, it sounds as though it really is necessary in some situations to set unpriv_sgio on all of the scsi path devices of a multipath device for mpath_persist to work correctly. It shouldn't be that much work to add this ability to multipath, so that when it creates a multipath device it will set unpriv_sgio on all the path devices and then on the multipath device.

Comment 145 Martin Tessun 2017-05-17 06:41:09 UTC
Hi Ben,

(In reply to Ben Marzinski from comment #143)
> So, reading through this, it sounds as though it really is necessary in some
> situations to set unpriv_sgio on all of the scsi path devices of a multipath
> device for mpath_persist to work correctly. It shouldn't be that much work
> to add this ability to multipath, so that when it creates a multipath device
> it will set unpriv_sgio on all the path devices and then on the multipath
> device.

do you want me to open a BZ for this or is this already being discussed?

Comment 146 Ben Marzinski 2017-05-17 14:26:48 UTC
Sure.

Comment 151 LJ McDonald 2017-08-06 22:02:10 UTC
We're evaluating replacing our Xenserver environment with an ovirt/RHEV environment.  One of the main attractions is the Direct LUN ability and support for LUNs > 1TB.  Our proposed environment contains a pair of clustered VMs (Veritas Infoscale 7.1).  I ran into this issue when trying to setup fencing for the cluster.  I thought perhaps some details of our setup might be useful, since it is different from those previously reported.

Our testing is on a Hitachi DF600 with Persistent RSV enabled through a dual FC SAN (production will be on two pairs of FC Netapps so FC support is important).  The host sees both paths and can check (and create) reservations:

# sg_persist --in --read-keys --device=/dev/disk/by-id/scsi-1HITACHI_D60H00030000
  HITACHI   DF600F            0000
  Peripheral device type: disk
  PR generation=0xa, there are NO registered reservation keys

The VMs are RH 7.X and are passed the LUNs via Direct LUN with:

X Shareable
X Enable SCSI Pass-Through
  X Allow Privileged SCSI I/O
  X Using SCSI Reservation

However, the VMs are unable to check reservations and fail with:

# sg_persist --in --read-keys --device=/dev/disk/by-id/scsi-1HITACHI_D60H00030000
  HITACHI   DF600F            0000
  Peripheral device type: disk
PR in (Read keys): aborted command

Based on the information contained here, I did a test using a simple script:

#!/bin/sh
# Enable unprivileged SG_IO on every block device reachable through each FC HBA.
for HBA in /sys/class/fc_host/*/device/; do
  for DEV in $(find "$HBA" -name unpriv_sgio); do
    echo "1" > "$DEV"
  done
done

Running the above script on the host "fixes" the problem and the VMs can now check/set reservations.  In summary, the problem does occur on FC SANs, but appears to be corrected by setting unpriv_sgio in this environment as well.  I hope this information is helpful.

However, I'm not sure the above workaround is suitable for a production environment.  In particular, I'm a bit concerned about what might happen if a path were to come and go.  Will unpriv_sgio become unset, and cause my cluster to fence?  Of course I can test this too, but would prefer to hear a resolution of this bug (or effective workaround).
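
One possible way to make the setting survive path add/remove events would be a udev rule on the host that re-applies unpriv_sgio whenever a matching SCSI disk appears. This is a sketch only: the rule file name is arbitrary, the ID_SERIAL value is taken from the by-id link above, and whether ENV{ID_SERIAL} is populated at that point depends on the distribution's default udev rules.

# write an example rule scoped to the clustered LUN's WWID, then reload udev
cat > /etc/udev/rules.d/99-unpriv-sgio.rules <<'EOF'
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*", ENV{ID_SERIAL}=="1HITACHI_D60H00030000", ATTR{queue/unpriv_sgio}="1"
EOF
udevadm control --reload-rules
udevadm trigger --subsystem-match=block --action=add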

Comment 168 Petr Matyáš 2018-12-11 14:54:46 UTC
I've played around with this, but I still have no idea what the point of this bug is or how to verify it, so can you please provide verification steps and the differences from bug #1111783.

Comment 169 Ryan Barry 2018-12-11 15:12:55 UTC
The point of this bug is essentially to allow Windows clustering using SCSI reservations. To verify, check comment #164, and/or attempt to set up a Windows cluster on RHV with reservations set.

The difference between this and bug 1111783 is that this one uses a single disk mapped to multiple VMs rather than direct LUN passthrough.

Comment 170 Petr Matyáš 2018-12-11 15:52:09 UTC
Well, then if it is enough to have qemu-pr-helper spawned whenever a VM with a shared disk is launched, we have no problem here.

Verified on vdsm-4.30.4-1.el7ev.x86_64
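
A quick sanity check on the host (command patterns are illustrative): with a reservation-enabled shared disk attached, libvirt should spawn a qemu-pr-helper process for the VM, and the qemu command line should reference a PR manager object.

pgrep -af qemu-pr-helper                               # a helper process per VM with a PR-enabled disk
ps -ef | grep [q]emu-kvm | grep -o 'pr-manager[^ ]*'   # the drive/object options referencing the PR manager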

Comment 174 Eli Marcus 2019-02-18 11:18:52 UTC
Updated the doc text to match https://bugzilla.redhat.com/show_bug.cgi?id=1111783 

"With this release, Windows clustering is supported for based direct attached LUNs and shared disks."

Comment 175 Ryan Barry 2019-02-18 11:31:23 UTC
Looks ok to me

Comment 177 errata-xmlrpc 2019-05-08 12:35:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1077

Comment 180 Red Hat Bugzilla 2023-09-14 23:57:44 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days