Bug 1862059 - [ppc64le] 'sPAPR VSCSI' interface disk attachment is not seen from the guest
Summary: [ppc64le] 'sPAPR VSCSI' interface disk attachment is not seen from the guest
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: ppc64le
OS: Linux
medium
high
Target Milestone: rc
Target Release: 8.4
Assignee: Daniel Henrique Barboza (IBM)
QA Contact: Xujun Ma
URL:
Whiteboard:
Depends On:
Blocks: 1801710 1862035
 
Reported: 2020-07-30 10:16 UTC by Ilan Zuckerman
Modified: 2021-05-25 06:42 UTC (History)
12 users

Fixed In Version: qemu-kvm-5.2.0-0.module+el8.4.0+8855+a9e237a9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-25 06:42:26 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
engine, vdsm, libvirt, qemu log for the vm (4.11 MB, application/zip)
2020-07-30 10:16 UTC, Ilan Zuckerman

Description Ilan Zuckerman 2020-07-30 10:16:45 UTC
Created attachment 1702926
engine, vdsm, libvirt, qemu log for the vm

Description of problem:


In the web admin UI, when attaching a disk with the 'sPAPR VSCSI' interface to a guest, the disk is not seen from within the VM itself (the guest), although it is clearly visible as an attached disk in the web UI.

sPAPR VSCSI Disk details:
ID = '9e6754f5-cfa1-4837-b85f-7cc17f58a37a'
Name = disk_spapr_vscsiraw_2912461594
Size = 1 GiB 
Interface =  sPAPR VSCSI


Details from the guest:

Before attaching the disk:

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0   10G  0 disk 
├─vda1        252:1    0    4M  0 part 
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0    9G  0 part 
  ├─rhel-root 253:0    0    8G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
[root@localhost ~]# 
[root@localhost ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0Ulzid2isFWYLAM4mv2w3Lo224lz7y0Fpmjbt -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0UlziQFdINMtltXGPYhgJJrDD1Ci40AR3jX2N -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 lvm-pv-uuid-HCIIQ0-IHV8-roZ3-ybaX-Ec75-YcmD-QJY3vP -> ../../vda3
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 scsi-0QEMU_QEMU_CD-ROM_drive-ua-7790f65b-5e3a-4098-8ea7-31e6447da866 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part3 -> ../../vda3

=======================================================

After attaching the disk (still not seen):

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0   10G  0 disk 
├─vda1        252:1    0    4M  0 part 
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0    9G  0 part 
  ├─rhel-root 253:0    0    8G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
[root@localhost ~]# 
[root@localhost ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-name-rhel-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0Ulzid2isFWYLAM4mv2w3Lo224lz7y0Fpmjbt -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 dm-uuid-LVM-3YITUFxLEvcTjJ7VBBvkNT84gTM0UlziQFdINMtltXGPYhgJJrDD1Ci40AR3jX2N -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 lvm-pv-uuid-HCIIQ0-IHV8-roZ3-ybaX-Ec75-YcmD-QJY3vP -> ../../vda3
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 scsi-0QEMU_QEMU_CD-ROM_drive-ua-7790f65b-5e3a-4098-8ea7-31e6447da866 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part2 -> ../../vda2
lrwxrwxrwx. 1 root root 10 Jul 30 04:02 virtio-26ed2c47-330e-4a9b-8-part3 -> ../../vda3

=======================================================


Hot-plug device XML from the engine log, which clearly shows the device was added to the VM:


2020-07-30 11:12:12,143+03 INFO  [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] Running command: AttachDiskToVmCommand internal: false. Entities affected :  ID: ed5681bf-6792-471a-8362-8e1d4e7872f2 Type: VMAction group CONFIGURE_VM_STORAGE with
 role type USER,  ID: 9e6754f5-cfa1-4837-b85f-7cc17f58a37a Type: DiskAction group ATTACH_DISK with role type USER
2020-07-30 11:12:12,242+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] START, HotPlugDiskVDSCommand(HostName = host_mixed_2, HotPlugDiskVDSParameters:{hostId='cd5cf2b7-f3ca-4195-ab11-7d310b56e4e7', vmId='ed5681bf-6792-471a-8362-8e1d
4e7872f2', diskId='9e6754f5-cfa1-4837-b85f-7cc17f58a37a', addressMap='[bus=0, controller=0, unit=4, type=drive, target=0]'}), log id: 6d24bda
2020-07-30 11:12:12,260+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-32) [7df30711-381b-4281-9adb-505c21a4eae1] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
  <devices>
    <disk snapshot="no" type="block" device="disk">
      <target dev="sda" bus="scsi"/>
      <source dev="/rhev/data-center/mnt/blockSD/dd8f9d57-5d7f-4fb3-9983-2ae98008fcdf/images/9e6754f5-cfa1-4837-b85f-7cc17f58a37a/e558b793-cdb6-4dd9-9ed9-574c2a08797f">
        <seclabel model="dac" type="none" relabel="no"/>
      </source>
      <driver name="qemu" io="native" type="raw" error_policy="stop" cache="none"/>
      <alias name="ua-9e6754f5-cfa1-4837-b85f-7cc17f58a37a"/>
      <address bus="0" controller="0" unit="4" type="drive" target="0"/>
      <serial>9e6754f5-cfa1-4837-b85f-7cc17f58a37a</serial>
    </disk>
  </devices>
  <metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:vm>
      <ovirt-vm:device devtype="disk" name="sda">
        <ovirt-vm:poolID>632988e5-25af-4df2-a1fd-1e4316ad2496</ovirt-vm:poolID>
        <ovirt-vm:volumeID>e558b793-cdb6-4dd9-9ed9-574c2a08797f</ovirt-vm:volumeID>
        <ovirt-vm:imageID>9e6754f5-cfa1-4837-b85f-7cc17f58a37a</ovirt-vm:imageID>
        <ovirt-vm:domainID>dd8f9d57-5d7f-4fb3-9983-2ae98008fcdf</ovirt-vm:domainID>
      </ovirt-vm:device>
    </ovirt-vm:vm>
  </metadata>
</hotplug>
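
(For comparison, the same hot-plug can be attempted manually at the libvirt level; the domain name and file name below are placeholder assumptions. Only the inner <disk>...</disk> element from the XML above would be saved to the file, since virsh attach-device expects a single device element.)

# Save the <disk>...</disk> element from the hot-plug XML above to spapr-disk.xml,
# then hot-plug it into the running domain:
virsh attach-device <vm-name> spapr-disk.xml --live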



Full print:
http://pastebin.test.redhat.com/889078



Version-Release number of selected component (if applicable):

rhv-release-4.4.1-12-001.noarch

[root@ibm-p8-08 ~]# rpm -qa | grep libvirt
libvirt-libs-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-nwfilter-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-gluster-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-mpath-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-admin-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-nodedev-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-qemu-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-kvm-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-network-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-disk-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-logical-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-interface-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
python3-libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.ppc64le
libvirt-lock-sanlock-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-core-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-config-network-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-iscsi-direct-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-scsi-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-client-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-bash-completion-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-config-nwfilter-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-iscsi-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-storage-rbd-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le
libvirt-daemon-driver-secret-6.0.0-25.module+el8.2.1+7154+47ffd890.ppc64le


How reproducible:
100%

Steps to Reproduce:
1. On a PPC environment, create a RHEL 8 VM from a template
2. Create an sPAPR VSCSI disk and attach it to the VM

Actual results:
The disk isn't seen from within the VM.

Expected results:
The newly attached disk should be seen from within the VM.


Attaching logs:
engine, vdsm, libvirt, qemu log for the vm

Comment 1 Peter Krempa 2020-07-30 10:32:36 UTC
Note that the spapr vscsi adapter doesn't support hotplug: https://bugzilla.redhat.com/show_bug.cgi?id=1192355

Comment 2 Avihai 2020-07-30 11:21:00 UTC
(In reply to Peter Krempa from comment #1)
> Note that the spapr vscsi adapter doesn't support hotplug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1192355

Hi Peter,
That bug was opened for RHEL 7.
Is there another bug tracking this for RHEL 8, or is it mentioned somewhere in the RHEL 8 release notes as well?

Comment 3 Peter Krempa 2020-07-30 11:35:49 UTC
I'm not sure what the solution will be at this time. I've just pointed out that it's a known problem for now. I'm leaving this bug open and untriaged.

Comment 4 Ilan Zuckerman 2020-08-02 05:32:27 UTC
Updating steps to reproduce:

Steps to Reproduce:
1. On a PPC environment, create a RHEL 8 VM from a template
2. Create an sPAPR VSCSI disk and HOT PLUG it into the VM (state = started)

Comment 5 David Gibson 2020-08-03 03:59:29 UTC
As Peter notes, the spapr vscsi interface does not support hotplug (in fact it was only really ever designed to support a single disk per interface instance).

There's not really anything we can do about that at the qemu or kernel levels.

Peter, does libvirt have any way of flagging a particular bridge model as not supporting hotplug, so it's less surprising to the user?

Comment 6 Peter Krempa 2020-08-03 05:17:00 UTC
Not really, nothing systematic. We could obviously add ad-hoc code for this to the hotplug code path; that's the reason I didn't close the bug. One would expect that if qemu doesn't support hotplug of a particular device, it would report an error.

Comment 7 Xujun Ma 2020-08-04 02:40:48 UTC
When hotplugging a spapr-vscsi controller, QEMU reports the following error:
{'execute': 'device_add', 'arguments': {'driver': 'spapr-vscsi'}}
{"error": {"class": "GenericError", "desc": "Bus 'spapr-vio' does not support hotplugging"}}

Comment 8 Qunfang Zhang 2020-08-04 03:45:01 UTC
(In reply to Xujun Ma from comment #7)
> When hotplugging a spapr-vscsi controller, QEMU reports the following error:
> {'execute': 'device_add', 'arguments': {'driver': 'spapr-vscsi'}}
> {"error": {"class": "GenericError", "desc": "Bus 'spapr-vio' does not
> support hotplugging"}}

The above error should be coming from the qemu-kvm level.

Comment 9 David Gibson 2020-08-05 02:09:37 UTC
Xujun, that error is from attempting to hot-plug a whole spapr-vscsi controller.  That's also not supported, but that's not the case this bug is talking about.  Here we're talking about hotplugging a single disk (a 'scsi-hd' device) onto an existing spapr-vscsi controller.

We need to check if that's returning an error, and if not see if we can make it generate one.

Comment 10 Xujun Ma 2020-08-07 05:57:09 UTC
(In reply to David Gibson from comment #9)
> Xujun, that error is from attempting to hot-plug a whole spapr-vscsi
> controller.  That's also not supported, but that's not the case this bug is
> talking about.  Here we're talking about hotplugging a single disk
> (a 'scsi-hd' device) onto an existing spapr-vscsi controller.
> 
> We need to check if that's returning an error, and if not see if we can make
> it generate one.

Hi David,

I tried again. There is no error message when hotplugging a scsi-hd device onto a spapr-vscsi controller.
I can see the device with the "info block" command, but the scsi-hd device is not visible in the guest; it only shows up after rebooting the guest.
So the scsi-hd device actually was hotplugged onto the spapr-vscsi controller; it's just that the guest kernel doesn't support hotplugging the device on this bus.
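
(For illustration, a test of this kind would look roughly like the following; the controller id 'a', the blockdev node name 'disk1', and the image path are placeholder assumptions.)

# Guest started with a spapr-vscsi controller and a spare block node, e.g.:
#   -device spapr-vscsi,id=a
#   -blockdev driver=file,filename=/tmp/data.img,node-name=disk1
# From the HMP monitor, hot-plug a scsi-hd onto the controller's SCSI bus:
(qemu) device_add scsi-hd,id=hd1,drive=disk1,bus=a.0
(qemu) info block
# Before the fix: device_add returns silently and 'info block' lists the new
# device, but the guest does not see the disk until it is rebooted.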

Comment 11 Daniel Henrique Barboza (IBM) 2020-08-20 19:26:49 UTC
The issue is that we are not invalidating (setting to NULL) the hotplug_handler
of the spapr_vscsi device, making qdev believe that this is a regular
SCSI bus that supports hotplug. The device is added to qdev, but
is unusable to the guest unless you reboot the guest OS.

A fix was posted here:


https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg04906.html

Comment 12 Peter Krempa 2020-08-24 09:33:33 UTC
Moving to qemu-kvm per the above comment. It doesn't feel worth duplicating the check in libvirt.

Comment 13 David Gibson 2020-09-14 04:27:49 UTC
Fix is now merged upstream, so we should get it via rebase.

Comment 15 Xujun Ma 2020-12-03 04:39:39 UTC
Tested this case with qemu-kvm-5.2.0-0.module+el8.4.0+8855+a9e237a9: QEMU now displays "Error: Bus 'a.0' does not support hotplugging" when hotplugging a scsi-hd device onto a spapr-vscsi controller.
So, based on the test result above, the bug has been fixed.
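
(For reference, the verification amounts to repeating the same kind of hot-plug attempt against the fixed build; the controller id and drive id below are placeholder assumptions, and the error text is the one quoted above.)

(qemu) device_add scsi-hd,id=hd1,drive=disk1,bus=a.0
Error: Bus 'a.0' does not support hotplugging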

Comment 19 Xujun Ma 2020-12-18 10:56:15 UTC
Based on the test result in comment 15, the bug has been fixed. Setting it to VERIFIED.

Comment 21 errata-xmlrpc 2021-05-25 06:42:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2098

