Description of problem:

During the hotplug, the engine does pass the reservations param to the host.

~~~
2021-12-02 17:59:47,596+05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-6) [3cda65f7] START, HotPlugDiskVDSCommand(HostName = dell-r530-4.gsslab.pnq.redhat.com, HotPlugDiskVDSParameters:{hostId='213ae779-00b1-4a7c-abce-f8f5c116b7bf', vmId='9af0a1ea-18df-4d94-95dc-41b28c088ced', diskId='11dd18bc-9977-4a48-8a00-8afe133634e8'}), log id: 7332387e
2021-12-02 17:59:47,598+05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-6) [3cda65f7] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
  <devices>
    <disk snapshot="no" type="block" device="lun" sgio="unfiltered">
      <target dev="sda" bus="scsi"/>
      <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
        <reservations managed="yes"/>    <<<<<<<<<<<<
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"/>
      <alias name="ua-11dd18bc-9977-4a48-8a00-8afe133634e8"/>
      <address bus="0" controller="0" unit="1" type="drive" target="0"/>
    </disk>
  </devices>
~~~

I can see the same in the hotplug vdsm API call.

~~~
2021-12-02 17:59:47,605+0530 INFO (jsonrpc/1) [api.virt] START hotplugDisk(params={'vmId': '9af0a1ea-18df-4d94-95dc-41b28c088ced', 'xml': '<?xml version="1.0" encoding="UTF-8"?><hotplug><devices><disk snapshot="no" type="block" device="lun" sgio="unfiltered"><target dev="sda" bus="scsi"></target><source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994"><reservations managed="yes"></reservations>...... vmId=9af0a1ea-18df-4d94-95dc-41b28c088ced (api:48)
~~~

But it is not in the XML that vdsm sends to libvirtd.

~~~
2021-12-02 17:59:48,715+0530 INFO (jsonrpc/1) [virt.vm] (vmId='9af0a1ea-18df-4d94-95dc-41b28c088ced') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="1" />
    <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sdb" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-11dd18bc-9977-4a48-8a00-8afe133634e8" />
</disk> (vm:3851)
~~~

It looks like "reservations" is silently getting dropped at "diskParams = storagexml.parse(elem, meta)", so the disk is created without the "reservations" param and hence the `qemu-pr-helper` process never gets started.

~~~
virsh -r dumpxml vmname | grep -A 9 -i sgio
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014050cb6f143b0cd4bd19dc461994' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='ua-11dd18bc-9977-4a48-8a00-8afe133634e8'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
~~~

The user has to stop and start the VM for the reservations to work. After a stop and start of the VM:
~~~
virsh -r dumpxml kubevirt | grep -A 9 -i sgio
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014050cb6f143b0cd4bd19dc461994' index='1'>
        <seclabel model='dac' relabel='no'/>
        <reservations managed='yes'>    <<<<<<<<<<<<<<<<<<<<
          <source type='unix' path='/var/lib/libvirt/qemu/domain-3-kubevirt/pr-helper0.sock' mode='client'/>
        </reservations>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
~~~

Version-Release number of selected component (if applicable):
vdsm-4.40.90.4-1.el8ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create and start a VM.
2. Hotplug a direct LUN disk, checking "Using SCSI Reservation".
3. Check in the VM's XML whether the reservations parameters were added:
~~~
virsh -r dumpxml vm-name
~~~
4. Also note that no "qemu-pr-helper" process is created for the VM.

Actual results:
SCSI reservation is not working for hot-plugged VM disks.
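The suspected drop point is the round trip through the disk-XML parsing on the vdsm side: the engine-supplied `<reservations managed="yes"/>` child of `<source>` has to survive parsing and regeneration, otherwise libvirt never sees it and no qemu-pr-helper is started. Below is a minimal, self-contained sketch of that round trip; it is not VDSM's actual `storagexml` code, and `parse_disk`, `build_disk` and the `managed_reservation` key are hypothetical names used only to illustrate the failure mode.

~~~
import xml.etree.ElementTree as ET

ENGINE_XML = """
<disk snapshot="no" type="block" device="lun" sgio="unfiltered">
  <target dev="sda" bus="scsi"/>
  <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
    <reservations managed="yes"/>
  </source>
</disk>
"""

def parse_disk(disk_elem):
    # Extract the parameters the hotplug path cares about; if this step
    # ignores <reservations>, the flag is silently lost (the reported bug).
    source = disk_elem.find("source")
    reservations = source.find("reservations")
    return {
        "device": disk_elem.get("device"),
        "path": source.get("dev"),
        "managed_reservation": (
            reservations is not None and reservations.get("managed") == "yes"
        ),
    }

def build_disk(params):
    # Regenerate the <disk> element sent to libvirt, re-adding
    # <reservations managed="yes"/> when the flag survived parsing.
    disk = ET.Element("disk", type="block", device=params["device"])
    source = ET.SubElement(disk, "source", dev=params["path"])
    if params["managed_reservation"]:
        ET.SubElement(source, "reservations", managed="yes")
    return ET.tostring(disk, encoding="unicode")

params = parse_disk(ET.fromstring(ENGINE_XML))
print(build_disk(params))  # keeps <reservations managed="yes" /> for libvirt
~~~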
*** Bug 2063515 has been marked as a duplicate of this bug. ***
Mark was able to fix it locally and will post the changes next week
Verification steps:
1. Create and start a VM.
2. Hotplug a direct LUN disk, checking "Using SCSI Reservation".
3. Check in the VM's XML that the reservations parameter exists.

From vdsm.log:

~~~
2022-03-21 19:54:24,216+0200 INFO (jsonrpc/3) [api.virt] START hotplugDisk(params={'vmId': 'e96fed6e-8fc0-494e-a3fe-44907818786a', 'xml': '<?xml version="1.0" encoding="UTF-8"?><hotplug><devices><disk snapshot="no" type="block" device="lun" sgio="unfiltered"><target dev="sda" bus="scsi"></target><source dev="/dev/mapper/360014055831b9397bc24130bf9702f64"><reservations managed="yes"></reservations><seclabel model="dac" type="none" relabel="no"></seclabel></source><driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"></driver><alias name="ua-0342bcb6-5cce-4a0d-86ee-8156604d6524"></alias><address bus="0" controller="0" unit="4" type="drive" target="0"></address></disk></devices><metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0"><ovirt-vm:vm><ovirt-vm:device devtype="disk" name="sda"><ovirt-vm:GUID>360014055831b9397bc24130bf9702f64</ovirt-vm:GUID></ovirt-vm:device></ovirt-vm:vm></metadata></hotplug>'}) from=::ffff:10.35.206.251,35880, flow_id=2cfb392, vmId=e96fed6e-8fc0-494e-a3fe-44907818786a (api:48)
...
2022-03-21 19:54:24,335+0200 INFO (jsonrpc/3) [virt.vm] (vmId='e96fed6e-8fc0-494e-a3fe-44907818786a') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="4" />
    <source dev="/dev/mapper/360014055831b9397bc24130bf9702f64">
        <reservations managed="yes" />
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sdd" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-0342bcb6-5cce-4a0d-86ee-8156604d6524" />
</disk> (vm:3780)
2022-03-21 19:54:24,434+0200 INFO (jsonrpc/3) [api.virt] FINISH hotplugDisk return={'status': {'code': 0, 'message': 'Done'}, 'vmList': {}} from=::ffff:10.35.206.251,35880, flow_id=2cfb392, vmId=e96fed6e-8fc0-494e-a3fe-44907818786a (api:54)
~~~

~~~
$ virsh -r dumpxml vm1
...
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014057e1ce24ac002426abb4ff33cb' index='17'>
        <seclabel model='dac' relabel='no'/>
        <reservations managed='yes'>
          <source type='unix' path='/var/lib/libvirt/qemu/domain-3-vm1/pr-helper0.sock' mode='client'/>
        </reservations>
      </source>
      <backingStore/>
      <target dev='sdf' bus='scsi'/>
      <alias name='ua-02725b03-ff42-4cf0-8a64-f073f70647b8'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
...
~~~
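As a side note, step 3 can be checked without eyeballing the full dump by parsing the saved domain XML and listing the LUN disks together with their reservation state. A minimal sketch, assuming the XML was saved with `virsh -r dumpxml vm1 > vm1.xml` (the helper name and file path are only illustrative):

~~~
import xml.etree.ElementTree as ET

def lun_reservation_state(dumpxml_path):
    # Map each <disk device="lun"> target (e.g. 'sdf') to True when its
    # <source> carries <reservations managed="yes">.
    tree = ET.parse(dumpxml_path)
    state = {}
    for disk in tree.findall(".//devices/disk[@device='lun']"):
        target = disk.find("target").get("dev")
        reservations = disk.find("source/reservations")
        state[target] = (
            reservations is not None and reservations.get("managed") == "yes"
        )
    return state

print(lun_reservation_state("vm1.xml"))  # for the dump above: {'sdf': True}
~~~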
Verified - the reservations parameter exists:

~~~
2022-04-23 22:55:04,948+0300 INFO (jsonrpc/3) [virt.vm] (vmId='025bc483-d353-45fb-8b32-5e6c863182ff') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="0" />
    <source dev="/dev/mapper/3600a098038304479363f4c4870455032">
        <reservations managed="yes" />
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sda" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-a4b5303a-6d03-40a2-92d0-1712759a1ed2" />
</disk>
~~~

Versions:
vdsm-4.50.0.12-1.el8ev.x86_64
ovirt-engine-4.5.0.2-0.7.el8ev.noarch
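For completeness, the other symptom from the original report (no qemu-pr-helper process) can also be checked on the host. A rough sketch, under the assumption that libvirt starts qemu-pr-helper with the per-domain socket path (e.g. /var/lib/libvirt/qemu/domain-*-vm1/pr-helper0.sock) on its command line, so the domain name shows up in the process arguments; the domain name "vm1" here is only an example:

~~~
import subprocess

def pr_helper_running(domain_name):
    # List qemu-pr-helper processes with their full command lines and look
    # for one whose socket path mentions the domain's per-VM directory.
    result = subprocess.run(
        ["pgrep", "-af", "qemu-pr-helper"],
        capture_output=True, text=True, check=False,
    )
    return any(domain_name in line for line in result.stdout.splitlines())

print(pr_helper_running("vm1"))  # expected True once the fix is in place
~~~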
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: RHV RHEL Host (ovirt-host) [ovirt-4.5.0] security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4764
No need to add a new TC, because the TP for this scenario won't run during the maintenance period.