Bug 2028481 - SCSI reservation is not working for hot plugged VM disks
Summary: SCSI reservation is not working for hot plugged VM disks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.4.8
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.5.0
Target Release: 4.5.0
Assignee: Mark Kemel
QA Contact: Shir Fishbain
URL:
Whiteboard:
Duplicates: 2063515
Depends On:
Blocks:
Reported: 2021-12-02 13:19 UTC by nijin ashok
Modified: 2022-10-17 14:02 UTC
CC: 12 users

Fixed In Version: vdsm-4.50.0.11
Doc Type: Bug Fix
Doc Text:
Previously, SCSI reservation was not set for disks that are hot-plugged. In this release, the SCSI reservation works for disks that are being hot-plugged.
Clone Of:
Environment:
Last Closed: 2022-05-26 17:22:44 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
- GitHub: oVirt vdsm pull 105 — Merged: "Add SCSI reservations support to hotplug API" (2022-03-22 15:30:40 UTC)
- Red Hat Issue Tracker: RHV-44134 (2021-12-02 13:26:32 UTC)
- Red Hat Knowledge Base (Solution): 6817921 (2022-10-17 14:02:22 UTC)
- Red Hat Product Errata: RHSA-2022:4764 (2022-05-26 17:23:06 UTC)

Description nijin ashok 2021-12-02 13:19:37 UTC
Description of problem:

During hotplug, the engine does pass the reservations parameter to the host.

~~~
2021-12-02 17:59:47,596+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-6) [3cda65f7] START, HotPlugDiskVDSCommand(HostName = dell-r530-4.gsslab.pnq.redhat.com, HotPlugDiskVDSParameters:{hostId='213ae779-00b1-4a7c-abce-f8f5c116b7bf', vmId='9af0a1ea-18df-4d94-95dc-41b28c088ced', diskId='11dd18bc-9977-4a48-8a00-8afe133634e8'}), log id: 7332387e
2021-12-02 17:59:47,598+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-6) [3cda65f7] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
  <devices>
    <disk snapshot="no" type="block" device="lun" sgio="unfiltered">
      <target dev="sda" bus="scsi"/>
      <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
        <reservations managed="yes"/>                                <<<<<<<<<<<<
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"/>
      <alias name="ua-11dd18bc-9977-4a48-8a00-8afe133634e8"/>
      <address bus="0" controller="0" unit="1" type="drive" target="0"/>
    </disk>
  </devices>

~~~

I can see the same in the vdsm hotplugDisk API call.

~~~
2021-12-02 17:59:47,605+0530 INFO  (jsonrpc/1) [api.virt] START hotplugDisk(params={'vmId': '9af0a1ea-18df-4d94-95dc-41b28c088ced', 'xml': '<?xml version="1.0" encoding="UTF-8"?><hotplug><devices><disk snapshot="no" type="block" device="lun" sgio="unfiltered"><target dev="sda" bus="scsi"></target><source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994"><reservations managed="yes"></reservations>...... vmId=9af0a1ea-18df-4d94-95dc-41b28c088ced (api:48)
~~~

But it is not in the XML that vdsm sends to libvirtd.

~~~
2021-12-02 17:59:48,715+0530 INFO  (jsonrpc/1) [virt.vm] (vmId='9af0a1ea-18df-4d94-95dc-41b28c088ced') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="1" />
    <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sdb" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-11dd18bc-9977-4a48-8a00-8afe133634e8" />
</disk>
 (vm:3851)
~~~

It looks like "reservations" is silently dropped at `diskParams = storagexml.parse(elem, meta)`. The disk is therefore created without the "reservations" parameter, and hence the `qemu-pr-helper` process is never spawned.

~~~
virsh -r dumpxml vmname |grep -A 9 -i sgio
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014050cb6f143b0cd4bd19dc461994' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='ua-11dd18bc-9977-4a48-8a00-8afe133634e8'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
~~~
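The silent drop can be illustrated with a standalone sketch. This is hypothetical code, not the actual vdsm `storagexml.parse()` implementation; it only shows how a parser that extracts a fixed set of fields loses the `<reservations>` child of `<source>`:

```python
# Hypothetical sketch of the failure mode, NOT the actual vdsm storagexml code:
# a parser that pulls out only the fields it knows about drops <reservations>.
import xml.etree.ElementTree as ET

HOTPLUG_DISK_XML = """
<disk snapshot="no" type="block" device="lun" sgio="unfiltered">
  <target dev="sda" bus="scsi"/>
  <source dev="/dev/mapper/360014050cb6f143b0cd4bd19dc461994">
    <reservations managed="yes"/>
  </source>
</disk>
"""

def parse_disk(elem):
    # Buggy variant: the <reservations> child of <source> is never looked at,
    # so the parsed disk params carry no trace of it.
    return {
        "device": elem.get("device"),
        "path": elem.find("source").get("dev"),
        "target": elem.find("target").get("dev"),
    }

def parse_disk_fixed(elem):
    # Fixed variant: also propagate the managed-reservations flag.
    params = parse_disk(elem)
    res = elem.find("source/reservations")
    params["managed_reservations"] = res is not None and res.get("managed") == "yes"
    return params

elem = ET.fromstring(HOTPLUG_DISK_XML)
print(parse_disk(elem))        # no 'managed_reservations' key at all
print(parse_disk_fixed(elem))  # includes 'managed_reservations': True
```

When the buggy variant's output is used to rebuild the disk XML for libvirt, the reservations request is gone, matching the dumpxml output above.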


The user has to stop and start the VM for the reservations to work. 

After stopping and starting the VM:
~~~
virsh -r dumpxml kubevirt |grep -A 9 -i sgio
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014050cb6f143b0cd4bd19dc461994' index='1'>
        <seclabel model='dac' relabel='no'/>
        <reservations managed='yes'>                        <<<<<<<<<<<<<<<<<<<<
          <source type='unix' path='/var/lib/libvirt/qemu/domain-3-kubevirt/pr-helper0.sock' mode='client'/>
        </reservations>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
~~~


Version-Release number of selected component (if applicable):

vdsm-4.40.90.4-1.el8ev.x86_64

How reproducible:

100%

Steps to Reproduce:

1. Create and start a VM.
2. Hotplug a direct LUN disk by checking "Using SCSI Reservation".
3. Check in the VM's XML whether the reservations parameter was added:

~~~
virsh -r dumpxml vm-name
~~~

4. Also note that no "qemu-pr-helper" process is created for the VM.
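Steps 3 and 4 can also be checked offline against a saved copy of the `virsh -r dumpxml` output. A minimal sketch (the sample XML below is trimmed from the dump in this report; the function name is made up):

```python
# Sketch: given the output of `virsh -r dumpxml <vm>`, map each LUN disk's
# source path to whether it carries managed SCSI reservations. With the bug
# present the hot-plugged LUN maps to False (and no qemu-pr-helper runs).
import xml.etree.ElementTree as ET

DUMPXML = """
<domain>
  <devices>
    <disk type='block' device='lun' sgio='unfiltered'>
      <source dev='/dev/mapper/360014050cb6f143b0cd4bd19dc461994'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <target dev='sdb' bus='scsi'/>
    </disk>
  </devices>
</domain>
"""

def lun_reservations(dom_xml):
    """Return {source path: has <reservations managed='yes'/>} for LUN disks."""
    result = {}
    for disk in ET.fromstring(dom_xml).iter("disk"):
        if disk.get("device") != "lun":
            continue
        src = disk.find("source")
        res = src.find("reservations") if src is not None else None
        result[src.get("dev")] = res is not None and res.get("managed") == "yes"
    return result

print(lun_reservations(DUMPXML))
# → {'/dev/mapper/360014050cb6f143b0cd4bd19dc461994': False}
```

On a fixed host the same check returns True for the hot-plugged LUN, since libvirt adds the `<reservations managed='yes'>` element with the pr-helper socket under `<source>`.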


Actual results:

SCSI reservation is not working for hot plugged VM disks.

Comment 2 Arik 2022-03-16 07:41:32 UTC
*** Bug 2063515 has been marked as a duplicate of this bug. ***

Comment 3 Arik 2022-03-17 07:44:16 UTC
Mark was able to fix it locally and will post the changes next week

Comment 9 Mark Kemel 2022-03-23 15:46:22 UTC
Verification steps:

1. Create and start a VM.
2. Hotplug a direct LUN disk, checking "Using SCSI Reservation".
3. Check in the VM's XML that the reservations parameter exists.

From vdsm.log:

2022-03-21 19:54:24,216+0200 INFO  (jsonrpc/3) [api.virt] START hotplugDisk(params={'vmId': 'e96fed6e-8fc0-494e-a3fe-44907818786a', 'xml': '<?xml version="1.0" encoding="UTF-8"?><hotplug><devices><disk snapshot="no" type="block" device="lun" sgio="unfiltered"><target dev="sda" bus="scsi"></target><source dev="/dev/mapper/360014055831b9397bc24130bf9702f64"><reservations managed="yes"></reservations><seclabel model="dac" type="none" relabel="no"></seclabel></source><driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"></driver><alias name="ua-0342bcb6-5cce-4a0d-86ee-8156604d6524"></alias><address bus="0" controller="0" unit="4" type="drive" target="0"></address></disk></devices><metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0"><ovirt-vm:vm><ovirt-vm:device devtype="disk" name="sda"><ovirt-vm:GUID>360014055831b9397bc24130bf9702f64</ovirt-vm:GUID></ovirt-vm:device></ovirt-vm:vm></metadata></hotplug>'}) from=::ffff:10.35.206.251,35880, flow_id=2cfb392, vmId=e96fed6e-8fc0-494e-a3fe-44907818786a (api:48)

...

2022-03-21 19:54:24,335+0200 INFO  (jsonrpc/3) [virt.vm] (vmId='e96fed6e-8fc0-494e-a3fe-44907818786a') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="4" />
    <source dev="/dev/mapper/360014055831b9397bc24130bf9702f64">
        <reservations managed="yes" />
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sdd" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-0342bcb6-5cce-4a0d-86ee-8156604d6524" />
</disk>
 (vm:3780)

2022-03-21 19:54:24,434+0200 INFO  (jsonrpc/3) [api.virt] FINISH hotplugDisk return={'status': {'code': 0, 'message': 'Done'}, 'vmList': {}} from=::ffff:10.35.206.251,35880, flow_id=2cfb392, vmId=e96fed6e-8fc0-494e-a3fe-44907818786a (api:54)

$ virsh -r dumpxml vm1
...
    <disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/360014057e1ce24ac002426abb4ff33cb' index='17'>
        <seclabel model='dac' relabel='no'/>
        <reservations managed='yes'>
          <source type='unix' path='/var/lib/libvirt/qemu/domain-3-vm1/pr-helper0.sock' mode='client'/>
        </reservations>
      </source>
      <backingStore/>
      <target dev='sdf' bus='scsi'/>
      <alias name='ua-02725b03-ff42-4cf0-8a64-f073f70647b8'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
...

Comment 11 Shir Fishbain 2022-04-23 20:04:09 UTC
Verified - The reservations parameter exists

2022-04-23 22:55:04,948+0300 INFO  (jsonrpc/3) [virt.vm] (vmId='025bc483-d353-45fb-8b32-5e6c863182ff') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="lun" sgio="unfiltered" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="0" />
    <source dev="/dev/mapper/3600a098038304479363f4c4870455032">
        <reservations managed="yes" />
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sda" />
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
    <alias name="ua-a4b5303a-6d03-40a2-92d0-1712759a1ed2" />
</disk>

Versions: 
vdsm-4.50.0.12-1.el8ev.x86_64
ovirt-engine-4.5.0.2-0.7.el8ev.noarch

Comment 18 errata-xmlrpc 2022-05-26 17:22:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: RHV RHEL Host (ovirt-host) [ovirt-4.5.0] security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4764

Comment 19 Shir Fishbain 2022-08-20 21:30:28 UTC
No need to add a new test case (TC), because the test plan (TP) for this scenario won't run during the maintenance period.

