Bug 1710323 - Microsoft failover cluster is not working with FC direct LUN on Windows 2016 server and Windows 2019
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.3.1
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.4.10
Assignee: Vojtech Juranek
QA Contact: Petr Kubica
URL:
Whiteboard:
Duplicates: 1740427
Depends On: 1892576
Blocks: 2019011
 
Reported: 2019-05-15 10:31 UTC by Ulhas Surse
Modified: 2023-09-07 20:01 UTC
25 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-08 10:08:35 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
pkubica: needinfo-


Attachments
validation report (44.90 KB, text/html)
2020-04-01 12:28 UTC, Roman Hodain


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1711045 0 unspecified CLOSED Windows 2016 guests report "The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported... 2023-12-15 16:30:38 UTC
Red Hat Knowledge Base (Solution) 4741251 0 None None None 2020-01-17 18:39:55 UTC
Red Hat Product Errata RHBA-2022:0462 0 None None None 2022-02-08 10:08:41 UTC

Description Ulhas Surse 2019-05-15 10:31:40 UTC
Description of problem:

Microsoft Failover Cluster validation fails.
When trying to configure a Microsoft failover cluster on Windows 2016 guests on RHV 4.3 (two VMs running on different hosts), the validation fails with an error.

Version-Release number of selected component (if applicable):
RHV 4.3
ovirt-engine-4.3.3.6-0.1.el7.noarch
libvirt-4.5.0-10.el7_6.7.x86_64 
vdsm-4.30.13-1.el7ev.x86_64

RHEL-H
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"

How reproducible:
Always

Steps to Reproduce:
1. Two Windows 2016 Server VMs running on two different hosts.
2. The disk is a direct FC LUN (500 GB, Compellent) shared between the two VMs.
3. All of the following options are selected:

X shareable
X enable SCSI pass-through
X allow privileged SCSI I/O
X using SCSI reservation

The guest agent is installed on both VMs.
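
With the usual RHV mapping of these checkboxes, the disk should end up in the libvirt domain as device='lun' with sgio='unfiltered', <shareable/>, and (for SCSI reservation) <reservations managed='yes'/> on the <source> element. A minimal host-side sanity check, sketched with a placeholder VM name:

# virsh -r dumpxml windows-vm-1 | grep -B 2 -A 8 "device='lun'"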


Actual results:
Validation failed

Expected results:
Validation should succeed.

Comment 17 Daniel Gur 2019-08-28 13:13:35 UTC
sync2jira

Comment 18 Daniel Gur 2019-08-28 13:17:49 UTC
sync2jira

Comment 19 Paolo Bonzini 2020-02-28 15:25:59 UTC
Closing according to the previous comments and to Ulhas's suggestion.

Comment 20 Roman Hodain 2020-04-01 12:22:28 UTC
I created a new environment with two Windows 2016 systems running on a RHEL 7.7 hypervisor:
    kernel 3.10.0-1062.18.1.el7.x86_64
    vdsm-4.30.40-1.el7ev.x86_64
    qemu-kvm-rhev-2.12.0-33.el7_7.8.x86_64

The hypervisor is connected to a Dell EMC ME4024 storage array over FC. Both Windows VMs were stopped before the test and started again, so no hot-plug happened.


The cluster validation in the Windows VM failed (see the attached HTML report), but I was able to register a key and reserve exclusive access on the hypervisor with sg commands. I also checked for reservations during the test and did not see any reservation requests. The reason is most probably that the LUN within the Windows system (there is only one) flips to read-only mode during the test. I do not see any errors anywhere.

As I have already set up the Windows cluster a couple of times with different storage backends and always hit an issue, I am most probably doing something wrong. Do we have somebody from QE who is responsible for this feature, just to make sure that neither I nor the customer is doing anything wrong?

The reservation test on the hypervisor [1].
The list of processes for the two Windows VMs [2].
The SCSI version is 6 [3].

[1]:
Register key:
[root@dell-r430-01 ~]# sg_persist --out --register --param-sark=0xaaaaaaaa /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk

List the keys:
[root@dell-r430-01 ~]# sg_persist --in -k -d /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk
  PR generation=0x8, 1 registered reservation key follows:
    0xaaaaaaaa

Reserve exclusive access:
[root@dell-r430-01 ~]# sg_persist --out --reserve --param-rk=0xaaaaaaaa --prout-type=8 /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk


List reservation:
[root@dell-r430-01 ~]# sg_persist --in -r -d /dev/mapper/3600c0ff00050bbcad240845e01000000 
  DellEMC   ME4               G275
  Peripheral device type: disk
  PR generation=0x8, Reservation follows:
    Key=0x0
    scope: LU_SCOPE,  type: Exclusive Access, all registrants

Release reservation:
[root@dell-r430-01 ~]# sg_persist --out --release --param-rk=0xaaaaaaaa --prout-type=8 /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk

Unregister key:
[root@dell-r430-01 ~]# sg_persist --out --register --param-rk=0xaaaaaaaa /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk

List keys:
[root@dell-r430-01 ~]# sg_persist --in -k -d /dev/mapper/3600c0ff00050bbcad240845e01000000
  DellEMC   ME4               G275
  Peripheral device type: disk
  PR generation=0x9, there are NO registered reservation keys
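
For convenience, the whole register/reserve/release/unregister cycle above can be wrapped into a single smoke-test script; a minimal sketch using the same device path and key:

#!/bin/bash
# SCSI-3 PR smoke test for a shared LUN (sketch); adjust DEV and KEY.
set -e
DEV=/dev/mapper/3600c0ff00050bbcad240845e01000000
KEY=0xaaaaaaaa
sg_persist --out --register --param-sark=$KEY $DEV               # register key
sg_persist --in -k -d $DEV                                       # list keys
sg_persist --out --reserve --param-rk=$KEY --prout-type=8 $DEV   # reserve (type 8: exclusive access, all registrants)
sg_persist --in -r -d $DEV                                       # show reservation
sg_persist --out --release --param-rk=$KEY --prout-type=8 $DEV   # release reservation
sg_persist --out --register --param-rk=$KEY $DEV                 # unregister (no new sark)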



[2]:
[root@dell-r430-01 ~]# ps -ef | grep rhodain-winc
root     13358 13295  0 12:55 pts/0    00:00:00 grep --color=auto rhodain-winc
qemu     15190     1  6 10:00 ?        00:11:18 /usr/libexec/qemu-kvm -name guest=rhodain-wincl01,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-26-rhodain-wincl01/master-key.aes -object pr-manager-helper,id=pr-helper0,path=/var/lib/libvirt/qemu/domain-26-rhodain-wincl01/pr-helper0.sock -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu SandyBridge,pcid=on,spec-ctrl=on,ssbd=on,md-clear=on,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_synic,hv_stimer -m size=20971520k,slots=16,maxmem=83886080k -realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -object iothread,id=iothread1 -numa node,nodeid=0,cpus=0-3,mem=20480 -uuid 1df143b1-a82f-4cdc-8ea1-6f56993ad5bd -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.7-10.el7,serial=4C4C4544-0031-5910-8033-B8C04F304432,uuid=1df143b1-a82f-4cdc-8ea1-6f56993ad5bd -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=47,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2020-04-01T09:00:26,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=30000,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,iothread=iothread1,id=ua-6711b1af-f04b-4511-8fae-e446101584c0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=ua-5c9b1ef6-ecff-4549-b1fa-670df79ce1e1,max_ports=16,bus=pci.0,addr=0x4 -drive file=/rhev/data-center/mnt/10.44.129.169:_exports_iso__domain01/af7ecc02-72cf-46e3-a166-12678d8a6644/images/11111111-1111-1111-1111-111111111111/RHV-toolsSetup_4.3_10.iso,format=raw,if=none,id=drive-ua-dc2c00b8-0022-43ba-966a-6d8198129b4f,werror=report,rerror=report,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ua-dc2c00b8-0022-43ba-966a-6d8198129b4f,id=ua-dc2c00b8-0022-43ba-966a-6d8198129b4f -drive file=/rhev/data-center/mnt/blockSD/97b96cac-93ae-4d2e-a3bf-3553c48e4e19/images/d488df32-5134-4797-a418-2ade63d8a9cc/8b10e3e8-6ea8-4791-852a-42eee61da875,format=qcow2,if=none,id=drive-ua-d488df32-5134-4797-a418-2ade63d8a9cc,serial=d488df32-5134-4797-a418-2ade63d8a9cc,werror=stop,rerror=stop,cache=none,aio=native -device scsi-hd,bus=ua-6711b1af-f04b-4511-8fae-e446101584c0.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-d488df32-5134-4797-a418-2ade63d8a9cc,id=ua-d488df32-5134-4797-a418-2ade63d8a9cc,bootindex=1,write-cache=on -drive file=/dev/mapper/3600c0ff00050bbcad240845e01000000,file.pr-manager=pr-helper0,format=raw,if=none,id=drive-ua-fd06fa9a-1a70-488a-ac50-e17c560945f7,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native -device scsi-block,bus=ua-6711b1af-f04b-4511-8fae-e446101584c0.0,channel=0,scsi-id=0,lun=2,share-rw=on,drive=drive-ua-fd06fa9a-1a70-488a-ac50-e17c560945f7,id=ua-fd06fa9a-1a70-488a-ac50-e17c560945f7 -netdev tap,fds=50:51:52:53,id=hostua-d3429e8d-4e00-4048-8ac7-e6bcd33c6488,vhost=on,vhostfds=30:34:35:36 -device virtio-net-pci,mq=on,vectors=10,host_mtu=1500,netdev=hostua-d3429e8d-4e00-4048-8ac7-e6bcd33c6488,id=ua-d3429e8d-4e00-4048-8ac7-e6bcd33c6488,mac=00:1a:4a:16:01:8b,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=37,server,nowait -device virtserialport,bus=ua-5c9b1ef6-ecff-4549-b1fa-670df79ce1e1.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 -chardev socket,id=charchannel1,fd=38,server,nowait -device virtserialport,bus=ua-5c9b1ef6-ecff-4549-b1fa-670df79ce1e1.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev 
spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=ua-5c9b1ef6-ecff-4549-b1fa-670df79ce1e1.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice port=5903,tls-port=5904,addr=10.37.192.41,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -vnc 10.37.192.41:5,password -k en-us -device qxl-vga,id=ua-7b7be6f0-5a77-4a4b-9e72-db3197d1d33a,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=ua-e1103efc-b288-4d9e-8bbf-15e466eee99a,bus=pci.0,addr=0x6 -object rng-random,id=objua-f3fde066-6555-440f-8eda-443abc0a85de,filename=/dev/urandom -device virtio-rng-pci,rng=objua-f3fde066-6555-440f-8eda-443abc0a85de,id=ua-f3fde066-6555-440f-8eda-443abc0a85de,bus=pci.0,addr=0x7 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root     15195     1  0 10:00 ?        00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-26-rhodain-wincl01/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-26-rhodain-wincl01/pr-helper0.pid
qemu     20187     1  7 10:13 ?        00:11:40 /usr/libexec/qemu-kvm -name guest=rhodain-wincl02,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-27-rhodain-wincl02/master-key.aes -object pr-manager-helper,id=pr-helper0,path=/var/lib/libvirt/qemu/domain-27-rhodain-wincl02/pr-helper0.sock -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu SandyBridge,pcid=on,spec-ctrl=on,ssbd=on,md-clear=on,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_synic,hv_stimer -m size=20971520k,slots=16,maxmem=83886080k -realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -object iothread,id=iothread1 -numa node,nodeid=0,cpus=0-3,mem=20480 -uuid 2c881ff1-f594-41c2-9094-73e55b41a5e9 -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.7-10.el7,serial=4C4C4544-0031-5910-8033-B8C04F304432,uuid=2c881ff1-f594-41c2-9094-73e55b41a5e9 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=33,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2020-04-01T09:13:48,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=30000,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,iothread=iothread1,id=ua-e3537bbc-1ec2-4241-97ca-8eab47be1dc7,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=ua-782175ff-6b8e-404b-9305-464dc016f5d7,max_ports=16,bus=pci.0,addr=0x3 -drive file=/rhev/data-center/mnt/10.44.129.169:_exports_iso__domain01/af7ecc02-72cf-46e3-a166-12678d8a6644/images/11111111-1111-1111-1111-111111111111/RHV-toolsSetup_4.3_10.iso,format=raw,if=none,id=drive-ua-160f339f-02cf-41c9-81f5-21b216ecb51a,werror=report,rerror=report,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ua-160f339f-02cf-41c9-81f5-21b216ecb51a,id=ua-160f339f-02cf-41c9-81f5-21b216ecb51a -drive file=/rhev/data-center/mnt/blockSD/97b96cac-93ae-4d2e-a3bf-3553c48e4e19/images/c216d256-da62-4ca3-b3f8-007797d2a641/8c5e16fa-d324-47fc-aa89-418f8904bdbe,format=qcow2,if=none,id=drive-ua-c216d256-da62-4ca3-b3f8-007797d2a641,serial=c216d256-da62-4ca3-b3f8-007797d2a641,werror=stop,rerror=stop,cache=none,aio=native -device scsi-hd,bus=ua-e3537bbc-1ec2-4241-97ca-8eab47be1dc7.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-c216d256-da62-4ca3-b3f8-007797d2a641,id=ua-c216d256-da62-4ca3-b3f8-007797d2a641,bootindex=1,write-cache=on -drive file=/dev/mapper/3600c0ff00050bbcad240845e01000000,file.pr-manager=pr-helper0,format=raw,if=none,id=drive-ua-fd06fa9a-1a70-488a-ac50-e17c560945f7,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native -device scsi-block,bus=ua-e3537bbc-1ec2-4241-97ca-8eab47be1dc7.0,channel=0,scsi-id=0,lun=1,share-rw=on,drive=drive-ua-fd06fa9a-1a70-488a-ac50-e17c560945f7,id=ua-fd06fa9a-1a70-488a-ac50-e17c560945f7 -netdev tap,fds=35:36:37:38,id=hostua-40b9619e-c4c5-4bd9-be3c-ce2977d7991b,vhost=on,vhostfds=39:40:41:42 -device virtio-net-pci,mq=on,vectors=10,host_mtu=1500,netdev=hostua-40b9619e-c4c5-4bd9-be3c-ce2977d7991b,id=ua-40b9619e-c4c5-4bd9-be3c-ce2977d7991b,mac=00:1a:4a:16:01:8c,bus=pci.0,addr=0x7 -chardev socket,id=charchannel0,fd=43,server,nowait -device virtserialport,bus=ua-782175ff-6b8e-404b-9305-464dc016f5d7.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 -chardev socket,id=charchannel1,fd=44,server,nowait -device virtserialport,bus=ua-782175ff-6b8e-404b-9305-464dc016f5d7.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev 
spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=ua-782175ff-6b8e-404b-9305-464dc016f5d7.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.37.192.41:0,password -k en-us -spice port=5901,tls-port=5902,addr=10.37.192.41,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=ua-2fa72ef0-d482-4e19-a3e4-1a28472b3d44,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=ua-f87b2597-44d3-4df1-a93e-f33c5415e106,bus=pci.0,addr=0x5 -object rng-random,id=objua-a4b247af-b004-4755-823f-3986e507c268,filename=/dev/urandom -device virtio-rng-pci,rng=objua-a4b247af-b004-4755-823f-3986e507c268,id=ua-a4b247af-b004-4755-823f-3986e507c268,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root     20189     1  0 10:13 ?        00:00:00 /usr/bin/qemu-pr-helper -k /var/lib/libvirt/qemu/domain-27-rhodain-wincl02/pr-helper0.sock -f /var/lib/libvirt/qemu/domain-27-rhodain-wincl02/pr-helper0.pid


[3]:
# grep -A 1 DellEMC /proc/scsi/scsi
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Enclosure                        ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Enclosure                        ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Enclosure                        ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Enclosure                        ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
--
  Vendor: DellEMC  Model: ME4              Rev: G275
  Type:   Direct-Access                    ANSI  SCSI revision: 06
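
As an alternative to grepping /proc/scsi/scsi, sg_inq queries a single device and prints the standard INQUIRY data, including the ANSI version field; a sketch against the same multipath device as above:

# sg_inq /dev/mapper/3600c0ff00050bbcad240845e01000000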

Comment 21 Roman Hodain 2020-04-01 12:28:53 UTC
Created attachment 1675396 [details]
validation report

Comment 28 Roman Hodain 2020-10-29 09:14:44 UTC
I created bug 1892576, which I believe describes the cause of this issue.

Comment 30 Tal Nisan 2021-02-04 10:04:28 UTC
This seems to be bug 1892576; I'm moving this bug to MODIFIED, as it should be fixed as well.

Comment 47 Petr Kubica 2021-05-10 10:34:51 UTC
Tested the iSCSI variant in https://bugzilla.redhat.com/show_bug.cgi?id=1898049.
The FC variant fails in the same way as the iSCSI one did, so I will just copy my comment from there.

-- COPIED comment 12 from iSCSI bug https://bugzilla.redhat.com/show_bug.cgi?id=1898049 --
Tested it and unfortunately hit an issue. After I started the validation part inside the MS Failover Cluster wizard, it tries to take a SCSI reservation on the volume and the VM is marked as paused and unreachable due to an unknown storage error. See details below.

Reproduction steps:
1. Have an environment (hosts, clean iSCSI volumes).
2. Alter /etc/multipath.conf: add the line "reservation_key file" to the defaults section (no other changes) and restart the services (multipathd, vdsmd); see the sketch after this list.
3. Two VMs with the MPIO and MS Failover Clustering roles installed, sharing an iSCSI volume (privileged I/O and SCSI reservations checked on both VMs).
4. Configure a storage pool in one of the Windows VMs (check the second VM).
5. Run the Cluster Failover Validation wizard.
Result: when it tries to run the SCSI-3 persistent reservation validation, the VM is paused.
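
For reference, step 2 on each host amounts to adding one line to the defaults section of /etc/multipath.conf and restarting the services (a sketch):

defaults {
    ...
    reservation_key file
}

# systemctl restart multipathd vdsmd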

engine log:
2021-05-10 09:50:20,926+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-23) [3fba4286] VM '1592d300-4211-45b2-b11a-d27bfc73824b'(windows-a) moved from 'Up' --> 'Paused'
2021-05-10 09:50:20,975+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-23) [3fba4286] EVENT_ID: VM_PAUSED(1,025), VM windows-a has been paused.
2021-05-10 09:50:20,985+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-23) [3fba4286] EVENT_ID: VM_PAUSED_ERROR(139), VM windows-a has been paused due to unknown storage error.

vdsm log from host
2021-05-10 09:50:20,918+0300 INFO  (libvirt/events) [virt.vm] (vmId='1592d300-4211-45b2-b11a-d27bfc73824b') abnormal vm stop device ua-a0620dda-ce8c-402f-be5e-baa545b00b25 error  (vm:4936)
2021-05-10 09:50:20,918+0300 INFO  (libvirt/events) [virt.vm] (vmId='1592d300-4211-45b2-b11a-d27bfc73824b') CPU stopped: onIOError (vm:5778)
2021-05-10 09:50:20,920+0300 INFO  (libvirt/events) [virt.vm] (vmId='1592d300-4211-45b2-b11a-d27bfc73824b') CPU stopped: onSuspend (vm:5778)
2021-05-10 09:50:20,953+0300 WARN  (libvirt/events) [virt.vm] (vmId='1592d300-4211-45b2-b11a-d27bfc73824b') device sdb reported I/O error (vm:3901)

libvirt/qemu log:
-blockdev '{"driver":"host_device","filename":"/rhev/data-center/mnt/blockSD/056fa6e5-36c0-4c72-b479-9ed865ed9444/images/ea3b406b-01d3-4b8c-8de2-cc8859be28e7/949765eb-b384-48cc-a915-87d2e1f7710f","aio":"native","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device scsi-hd,bus=ua-083ed37d-f4df-495a-b08f-27d7a504b936.0,channel=0,scsi-id=0,lun=0,device_id=ea3b406b-01d3-4b8c-8de2-cc8859be28e7,drive=libvirt-2-format,id=ua-ea3b406b-01d3-4b8c-8de2-cc8859be28e7,bootindex=1,write-cache=on,serial=ea3b406b-01d3-4b8c-8de2-cc8859be28e7,werror=stop,rerror=stop \
-blockdev '{"driver":"host_device","filename":"/dev/mapper/3600a098038304479363f4c487045514f","aio":"native","pr-manager":"pr-helper0","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device scsi-block,bus=ua-083ed37d-f4df-495a-b08f-27d7a504b936.0,channel=0,scsi-id=0,lun=1,share-rw=on,drive=libvirt-1-format,id=ua-a0620dda-ce8c-402f-be5e-baa545b00b25,werror=stop,rerror=stop \

Using the correct multipath packages:
device-mapper-multipath-0.8.4-10.el8.x86_64
device-mapper-multipath-libs-0.8.4-10.el8.x86_64
-- END OF COPY

Also checked the registration keys on the FC volume:
# mpathpersist --in -k -d /dev/mapper/3600a09803830447a4f244c4657616f6f
  PR generation=0x11, 	4 registered reservation keys follow:
    0x6317f0467466734d
    0x6317f0467466734d
    0x6317f0467466734d
    0x6317f0467466734d
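
Four identical keys match one registration per path on this four-path LUN; with "reservation_key file", multipathd registers the key on every path. The active reservation can be read back the same way (a sketch):

# mpathpersist --in -r -d /dev/mapper/3600a09803830447a4f244c4657616f6f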

Comment 49 Arik 2021-07-14 16:23:11 UTC
*** Bug 1740427 has been marked as a duplicate of this bug. ***

Comment 52 Petr Kubica 2021-11-01 15:49:48 UTC
Verified with
ovirt-engine-4.4.9.3-0.3.el8ev.noarch

It is necessary to do two things:
1) enable SCSI reservation support in /etc/multipath.conf on all hosts:

defaults {
   ...(omitted)...
   reservation_key file
}

2) configure engine:
# engine-config -s PropagateDiskErrors=true
# systemctl restart ovirt-engine
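
To double-check that both settings took effect, the engine option can be read back and the multipath configuration dumped on each host (a sketch):

# engine-config -g PropagateDiskErrors
# multipathd show config | grep reservation_key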

Comment 59 errata-xmlrpc 2022-02-08 10:08:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) [ovirt-4.4.10]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0462

