Bug 1862035 - [ppc64le] 'sPAPR VSCSI' interface disk attachment is not seen from the guest.
Summary: [ppc64le] 'sPAPR VSCSI' interface disk attachment is not seen from the guest.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.4.5
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.7
Target Release: 4.4.7
Assignee: Arik
QA Contact: sshmulev
URL:
Whiteboard:
Depends On: 1862059
Blocks:
 
Reported: 2020-07-30 09:16 UTC by Ilan Zuckerman
Modified: 2021-07-06 07:28 UTC
CC: 4 users

Fixed In Version: ovirt-engine-4.4.7
Doc Type: Bug Fix
Doc Text:
Previously, hot-plugging a virtual disk with the sPAPR VSCSI interface seemed to succeed, but the disk was not seen within the guest operating system on PPC environments. Now, hot-plugging a virtual disk with the sPAPR VSCSI interface to a running virtual machine on PPC environments is prevented, since the sPAPR VSCSI adapter does not support hot-plug.
Clone Of:
Environment:
Last Closed: 2021-07-06 07:28:17 UTC
oVirt Team: Virt
Embargoed:
pm-rhel: ovirt-4.4+


Attachments
engine, vdsm, libvirt (4.11 MB, application/zip) - 2020-07-30 09:16 UTC, Ilan Zuckerman
qemu log of the vm (4.33 KB, text/plain) - 2020-07-30 10:09 UTC, Ilan Zuckerman


Links
oVirt gerrit 114778 (master, MERGED): core: block hot-(un)plug of spapr vscsi disks - last updated 2021-05-13 14:26:38 UTC

Description Ilan Zuckerman 2020-07-30 09:16:11 UTC
Created attachment 1702914 [details]
engine, vdsm, libvirt

Description of problem:

In the web admin UI, when attaching a disk with the 'sPAPR VSCSI' interface to a guest, the disk is not seen from within the VM itself (the guest), although it is clearly visible as an attached disk in the web UI.

sPAPR VSCSI Disk details:
ID = '9e6754f5-cfa1-4837-b85f-7cc17f58a37a'
Name = disk_spapr_vscsiraw_2912461594
Size = 1 GiB 
Interface =  sPAPR VSCSI


Details from the guest:

Before attaching the disk:

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0   10G  0 disk 
├─vda1        252:1    0    4M  0 part 
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0    9G  0 part 
  ├─rhel-root 253:0    0    8G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
[root@localhost ~]# 
[root@localhost ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0Ulzid2isFWYLAM4mv2w3Lo224lz7y0Fpmjbt -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0UlziQFdINMtltXGPYhgJJrDD1Ci40AR3jX2N -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 lvm-pv-uuid-HCIIQ0-IHV8-roZ3-ybaX-Ec75-YcmD-QJY3vP -> ../../vda3
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 scsi-0QEMU_QEMU_CD-ROM_drive-ua-7790f65b-5e3a-4098-8ea7-31e6447da866 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part3 -> ../../vda3

=======================================================

After attaching the disk (still not seen):

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0   10G  0 disk 
├─vda1        252:1    0    4M  0 part 
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0    9G  0 part 
  ├─rhel-root 253:0    0    8G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
[root@localhost ~]# 
[root@localhost ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0Ulzid2isFWYLAM4mv2w3Lo224lz7y0Fpmjbt -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0UlziQFdINMtltXGPYhgJJrDD1Ci40AR3jX2N -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 lvm-pv-uuid-HCIIQ0-IHV8-roZ3-ybaX-Ec75-YcmD-QJY3vP -> ../../vda3
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 scsi-0QEMU_QEMU_CD-ROM_drive-ua-7790f65b-5e3a-4098-8ea7-31e6447da866 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part3 -> ../../vda3

=======================================================
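For completeness, a generic guest-side SCSI rescan (host paths vary per guest) can be used to rule out the guest simply not having re-scanned its adapters; a minimal sketch:

# Rescan every SCSI host in the guest so that any newly hot-plugged
# device on an existing adapter is picked up, then list block devices again.
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done
lsblk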

XML from the engine log that clearly shows the device was added to the VM:


2020-07-30 11:12:12,143+03 INFO  [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] Running command: AttachDiskToVmCommand internal: false. Entities affected :  ID: ed5681bf-6792-471a-8362-8e1d4e7872f2 Type: VMAction group CONFIGURE_VM_STORAGE with
 role type USER,  ID: 9e6754f5-cfa1-4837-b85f-7cc17f58a37a Type: DiskAction group ATTACH_DISK with role type USER
2020-07-30 11:12:12,242+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] START, HotPlugDiskVDSCommand(HostName = host_mixed_2, HotPlugDiskVDSParameters:{hostId='cd5cf2b7-f3ca-4195-ab11-7d310b56e4e7', vmId='ed5681bf-6792-471a-8362-8e1d
4e7872f2', diskId='9e6754f5-cfa1-4837-b85f-7cc17f58a37a', addressMap='[bus=0, controller=0, unit=4, type=drive, target=0]'}), log id: 6d24bda
2020-07-30 11:12:12,260+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
  <devices>
    <disk snapshot="no" type="block" device="disk">
      <target dev="sda" bus="scsi"/>
      <source dev="/rhev/data-center/mnt/blockSD/dd8f9d57-5d7f-4fb3-9983-2ae98008fcdf/images/9e6754f5-cfa1-4837-b85f-7cc17f58a37a/e558b793-cdb6-4dd9-9ed9-574c2a08797f">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"/>
      <alias name="ua-9e6754f5-cfa1-4837-b85f-7cc17f58a37a"/>
      <address bus="0" controller="0" unit="4" type="drive" target="0"/>
      <serial>9e6754f5-cfa1-4837-b85f-7cc17f58a37a</serial>
    </disk>
  </devices>
  <metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:vm>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:poolID>632988e5-25af-4df2-a1fd-1e4316ad2496</ovirt-vm:poolID>
        <ovirt-vm:volumeID>e558b793-cdb6-4dd9-9ed9-574c2a08797f</ovirt-vm:volumeID>
        <ovirt-vm:imageID>9e6754f5-cfa1-4837-b85f-7cc17f58a37a</ovirt-vm:imageID>
        <ovirt-vm:domainID>dd8f9d57-5d7f-4fb3-9983-2ae98008fcdf</ovirt-vm:domainID>
      </ovirt-vm:device>
    </ovirt-vm:vm>
  </metadata>
</hotplug>



Full print:
http://pastebin.test.redhat.com/889078



Version-Release number of selected component (if applicable):
rhv-release-4.4.1-12-001.noarch

How reproducible:
100%

Steps to Reproduce:
1. On PPC env, create rhel8 vm from template
2. Create sPAPR VSCSI disk and attach it to the vm

Actual results:
The disk isn't seen from within the VM.

Expected results:
The newly attached disk should be seen from within the VM.


Attaching logs:
engine, vdsm, libvirt

Comment 1 Arik 2020-07-30 09:40:03 UTC
This is likely to be a platform issue - the device appears in the domain XML and libvirt keeps reporting it.
I'd suggest filing another bug against libvirt/qemu about this; in RHV this should be tracked as a storage bug.
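For reference, a minimal sketch of how the host-side view can be confirmed with virsh (the VM name is a placeholder):

# List the block devices libvirt currently reports for the domain;
# the hot-plugged disk is expected to show up here even though the
# guest does not see it.
virsh domblklist VM_NAME --details

# Confirm the disk is present in the live domain XML via its oVirt alias.
virsh dumpxml VM_NAME | grep -B2 -A8 'ua-9e6754f5-cfa1-4837-b85f-7cc17f58a37a'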

Comment 2 Ilan Zuckerman 2020-07-30 10:09:43 UTC
Created attachment 1702923 [details]
qemu log of the vm

Comment 3 Ilan Zuckerman 2020-08-02 05:30:49 UTC
Updating steps to reproduce:

Steps to Reproduce:
1. On PPC env, create rhel8 vm from template
2. Create sPAPR VSCSI disk and HOT PLUG it to the vm (state = started)

Comment 4 Avihai 2020-08-03 14:40:18 UTC
Libvirt bug opened on this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1862059

Comment 5 sshmulev 2021-04-11 15:12:57 UTC
Failed to hot-plug the sPAPR VSCSI disk: in the UI the disk is in "inactive" status, and it is not visible from the guest.

Verified it according to these steps:
1. On PPC env, create rhel8 VM from template
2. Create sPAPR VSCSI disk and HOT PLUG it to the VM (state = started)


versions:
ovirt-engine-4.4.6.3-0.8.el8ev.noarch
vdsm-4.40.60.3-1.el8ev.ppc64le

logs:

engine log:
2021-04-11 17:23:44,468+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [3d509b24] Command 'HotPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='bbd7e15e-c955-4359-91ef-7f2d823b80af', vmId='2c3cc011-2f22-47ef-a14b-7a3a764b1951', diskId='fa9d5a74-ac6c-4d86-97b2-b807c19e0490'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'device_add': Bus 'scsi0.0' does not support hotplugging, code = 45

audit log :
VDSM host_mixed_1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command 'device_add': Bus 'scsi0.0' does not support hotplugging
Failed to plug disk New_VM_Disk1 to VM New_VM (User: admin@internal-authz).
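For reference, the same QEMU-level limitation can be hit with a direct libvirt hot-plug, roughly as sketched below; the VM name and source path are placeholders, and the drive address must point at the sPAPR VSCSI controller:

# Minimal disk definition targeting the SCSI bus (placeholder source path).
cat > /tmp/spapr-disk.xml <<'EOF'
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/EXAMPLE_LV'/>
  <target dev='sdb' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='5'/>
</disk>
EOF

# A live attach is expected to fail with the same
# "Bus 'scsi0.0' does not support hotplugging" error.
virsh attach-device VM_NAME /tmp/spapr-disk.xml --live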

Comment 6 Arik 2021-04-11 18:53:50 UTC
What is the RHEL version of the host?

Comment 7 Arik 2021-04-11 19:03:52 UTC
And that of qemu-kvm, please.

Comment 8 sshmulev 2021-04-11 19:28:03 UTC
Red Hat Enterprise Linux release 8.4 (Ootpa)

QEMU emulator version 5.2.0 (qemu-kvm-5.2.0-14.module+el8.4.0+10425+ad586fa5)

Comment 9 Arik 2021-04-11 20:11:43 UTC
Thanks.
Reading bz 1862059 more carefully, that's OK since the sPAPR VSCSI adapter doesn't support hot-plug (see https://bugzilla.redhat.com/show_bug.cgi?id=1862059#c15)
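A quick way to confirm which SCSI adapter the VM actually has is to look at the controller model in the domain XML (VM name is a placeholder); on PPC guests the sPAPR VSCSI adapter is typically exposed by libvirt as model 'ibmvscsi', unlike virtio-scsi which does support hot-plug:

# Show the SCSI controllers defined for the domain.
virsh dumpxml VM_NAME | grep -A3 "<controller type='scsi'"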

Comment 12 sshmulev 2021-06-01 13:06:24 UTC
Verified.

versions:
ovirt-engine-4.4.7-0.23.el8ev.noarch
QEMU emulator version 5.2.0 (qemu-kvm-5.2.0-16.module+el8.4.0+10806+b7d97207)


Steps:
1) On a PPC env, start a VM and try to hot-plug a disk with the sPAPR VSCSI interface to it.
2) Repeat the above with the VM stopped (cold plug).

Expected results:
1) Action should be blocked
2) Action should not be blocked, and the attached disk should be visible from the VM as well as from the admin UI.

Actual results were as expected.
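For anyone re-verifying through the REST API rather than the admin UI, a rough sketch of the hot-plug request that the engine is now expected to block (engine FQDN, credentials and IDs are placeholders, and the exact error text may differ):

# Disk attachment with the spapr_vscsi interface, activated on a running VM.
cat > /tmp/attach.xml <<'EOF'
<disk_attachment>
  <disk id="DISK_ID"/>
  <interface>spapr_vscsi</interface>
  <bootable>false</bootable>
  <active>true</active>
</disk_attachment>
EOF

curl -k -u admin@internal:PASSWORD \
     -H 'Content-Type: application/xml' \
     -X POST -d @/tmp/attach.xml \
     'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments'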

Comment 13 Sandro Bonazzola 2021-07-06 07:28:17 UTC
This bugzilla is included in the oVirt 4.4.7 release, published on July 6th 2021.

Since the problem described in this bug report should be resolved in the oVirt 4.4.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

