Bug 1301273 - [ppc64le] a hotplugged spapr-vscsi disk doesn't appear in the VM (until many hours later)
Summary: [ppc64le] a hotplugged spapr-vscsi disk doesn't appear in the VM (until many hours later)
Keywords:
Status: CLOSED DUPLICATE of bug 1192355
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.1.3
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Allon Mureinik
QA Contact: Aharon Canan
URL:
Whiteboard: virt
Depends On:
Blocks: RHEV3.6PPC
 
Reported: 2016-01-23 12:11 UTC by Carlos Mestre González
Modified: 2016-02-21 11:01 UTC
6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-01-25 11:09:20 UTC
oVirt Team: Virt
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
Logs (vdsm, engine, libvirt, qemu, guest agent) (828.00 KB, application/x-gzip)
2016-01-23 12:16 UTC, Carlos Mestre González

Description Carlos Mestre González 2016-01-23 12:11:35 UTC
Description of problem:
When hotplugging a spapr-vscsi disk to a VM, the operation succeeds (the vdsm logs even seem to show the device being attached), but the VM doesn't see it: nothing in lsblk, nothing reported by the guest agent, nothing in dmesg/messages.

After 2 hours there is still nothing. The odd part is that the disk does appear if I leave the VM running for a long time (overnight), or if I simply shut the VM down and start it up again.
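
(For reference, these are roughly the guest-side checks used; plain shell, nothing below is specific to this setup:)

# Guest-side checks for the hotplugged disk
lsblk                                   # no new sdX device listed
dmesg | tail -n 50                      # no SCSI attach messages
ls -l /dev/disk/by-id/                  # no new scsi-* entry for the disk
grep -i scsi /var/log/messages | tail   # nothing logged for the new device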

Version-Release number of selected component (if applicable):
rhevm-3.6.1.3-0.1.el6.noarch

host:
kernel: 3.10.0-327.2.1.el7.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.4.ppc64le
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
qemu-kvm-common-rhev-2.3.0-31.el7_2.4.ppc64le
qemu-kvm-tools-rhev-2.3.0-31.el7_2.5.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.4.ppc64le
qemu-kvm-rhev-debuginfo-2.3.0-31.el7_2.4.ppc64le
vdsm-xmlrpc-4.17.13-1.el7ev.noarch
vdsm-cli-4.17.13-1.el7ev.noarch
vdsm-4.17.13-1.el7ev.noarch
vdsm-yajsonrpc-4.17.13-1.el7ev.noarch
vdsm-infra-4.17.13-1.el7ev.noarch
vdsm-jsonrpc-4.17.13-1.el7ev.noarch
vdsm-python-4.17.13-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le

guest:
RHEL 7.2 - kernel 3.10.0-327.2.1.el7.ppc64le

How reproducible:
100%


Steps to Reproduce:
1. Create and run a VM with RHEL kernel 3.10.0-327.2.1.el7.ppc64le
2. Disks -> Add a new disk with the spapr-vscsi interface; the allocation policy (sparse/preallocated), size and storage domain don't matter

Actual results:
The hotplug operation succeeds, but the VM doesn't see the new disk even after a long period of time (2 hours): checked with lsblk, nothing in the VM logs, and nothing reported in the REST API under <logical_image>.
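
(A rough sketch of the REST check; the engine host and credentials are placeholders, and the exact element carrying the guest-reported name may differ:)

# Query the VM's disks through the engine REST API
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
     'https://engine.example.com/api/vms/a7a348ed-187a-4066-b64b-b4e408040448/disks'
# The guest-reported logical device name never appears for the spapr-vscsi disk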

Expected results:
The disk shows up in the VM a few seconds after being hotplugged, as it does with any other interface.

Additional info:
I'll attach logs with more info. Filing this under Storage, though I suspect it may actually be a Virt issue; I'm not sure.

Comment 1 Carlos Mestre González 2016-01-23 12:16:40 UTC
Created attachment 1117448 [details]
Logs (vdsm, engine, libvirt, qemu, guest agent)

vm id: a7a348ed-187a-4066-b64b-b4e408040448

engine log: the VM starting and the hotplug AddDiskCommand creating the disk:
2016-01-21 20:48:55,789 INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-70) [] VM 'a7a348ed-187a-4066-b64b-b4e408040448'(test_sparp_vscsi) moved from 'WaitForLaunch' --> 'PoweringUp'
7.0.0.1:8702-15) [2ce970da] Running command: AddDiskCommand internal: false. Entities affected :  ID: a7a348ed-187a-4066-b64b-b4e408040448 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER,  ID: aa1d1568-448c-48fe-aad8-2c5b128b7d05 Type: StorageAction group CREATE_DISK with role type USER

disk created:
2016-01-21 20:49:57,595 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp-/127.0.0.1:8702-15) [69dcf2d6] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{runAsync='true', storagePoolId='b3115183-d522-428b-9dce-2809fe39a79d', ignoreFailoverLimit='false', storageDomainId='aa1d1568-448c-48fe-aad8-2c5b128b7d05', imageGroupId='932b1b49-606c-4505-ac2d-aa242a0a269d', imageSizeInBytes='1073741824', volumeFormat='RAW', newImageId='feb22608-d79e-4301-8d91-79bd3269acb8', newImageDescription='{"DiskAlias":"test_sparp_vscsi_Disk1","DiskDescription":""}', imageInitialSizeInBytes='0'}), log id: 5cbeb7fe

vdsm.log, volume creation:
 out\n1024000 bytes (1.0 MB) copied, 0.015782 s, 64.9 MB/s\n'; <rc> = 0
jsonrpc.Executor/4::DEBUG::2016-01-21 13:49:42,074::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.create' in bridge with {u'preallocate': 2, u'volFormat': 5, u'srcImgUUID': u'00000000-0000-0000-0000-000000000000', u'volumeID': u'feb22608-d79e-4301-8d91-79bd3269acb8', u'imageID': u'932b1b49-606c-4505-ac2d-aa242a0a269d', u'storagepoolID': u'b3115183-d522-428b-9dce-2809fe39a79d', u'storagedomainID': u'aa1d1568-448c-48fe-aad8-2c5b128b7d05', u'desc': u'{"DiskAlias":"test_sparp_vscsi_Disk1","DiskDescription":""}', u'diskType': 2, u'srcVolUUID': u'00000000-0000-0000-0000-000000000000', u'size': u'1073741824'}


[...]
jsonrpc.Executor/7::INFO::2016-01-21 13:49:46,775::vm::2598::virt.vm::(hotplugDisk) vmId=`a7a348ed-187a-4066-b64b-b4e408040448`::Hotplug disk xml: <disk device="disk" snapshot="no" type="file">
    <address bus="0" controller="0" target="0" type="drive" unit="4"/>
    <source file="/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/aa1d1568-448c-48fe-aad8-2c5b128b7d05/images/932b1b49-606c-4505-ac2d-aa242a0a269d/feb22608-d79e-4301-8d91-79bd3269acb8"/>
    <target bus="scsi" dev="sdc"/>
    <serial>932b1b49-606c-4505-ac2d-aa242a0a269d</serial>
    <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

[...]
jsonrpc.Executor/7::DEBUG::2016-01-21 13:49:46,813::vm::4309::virt.vm::(_getUnderlyingDriveInfo) vmId=`a7a348ed-187a-4066-b64b-b4e408040448`::Matched {'name': (u'sdc', u'sdc'), 'bootOrder': ('', None), 'boot': ([], None), 'readonly': (False, False), 'address': ({u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'4'}, {u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'4'}), 'path': (u'/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/aa1d1568-448c-48fe-aad8-2c5b128b7d05/images/932b1b49-606c-4505-ac2d-aa242a0a269d/feb22608-d79e-4301-8d91-79bd3269acb8', u'/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/aa1d1568-448c-48fe-aad8-2c5b128b7d05/images/932b1b49-606c-4505-ac2d-aa242a0a269d/feb22608-d79e-4301-8d91-79bd3269acb8'), 'type': (u'disk', u'disk')}
jsonrpc.Executor/7::DEBUG::2016-01-21 13:49:46,813::vm::4329::virt.vm::(_getUnderlyingDriveInfo) vmId=`a7a348ed-187a-4066-b64b-b4e408040448`::Matched {'name': (u'sdc', None), 'bootOrder': ('', None), 'boot': ([], None), 'readonly': (False, None), 'address': ({u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'4'}, None), 'path': (u'/rhev/data-center/b3115183-d522-428b-9dce-2809fe39a79d/aa1d1568-448c-48fe-aad8-2c5b128b7d05/images/932b1b49-606c-4505-ac2d-aa242a0a269d/feb22608-d79e-4301-8d91-79bd3269acb8', None), 'type': (u'disk', None)}
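
(For the record, the host-side attachment can be cross-checked with virsh in read-only mode; the libvirt domain name below is assumed to match the VM name:)

# On the hypervisor host
virsh -r domblklist test_sparp_vscsi
# expected to list target sdc backed by
# .../images/932b1b49-606c-4505-ac2d-aa242a0a269d/feb22608-d79e-4301-8d91-79bd3269acb8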

Comment 2 Yaniv Kaul 2016-01-24 07:50:55 UTC
Anything on the guest side? What is 'vda' on the guest?
Jan 21 13:52:59 dhcp167-130 drmgr: drmgr: -c pci -a -s 0x40000028 -n -d4 -V
Jan 21 13:53:00 dhcp167-130 kernel: pci 0000:00:05.0: BAR 1: assigned [mem 0x100a2000000-0x100a2000fff]
Jan 21 13:53:00 dhcp167-130 kernel: pci 0000:00:05.0: BAR 0: assigned [io  0x10400-0x1043f]
Jan 21 13:53:00 dhcp167-130 kernel: virtio-pci 0000:00:05.0: enabling device (0000 -> 0003)
Jan 21 13:53:00 dhcp167-130 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 21 13:53:00 dhcp167-130 kernel: vda: unknown partition table

Comment 3 Carlos Mestre González 2016-01-25 09:19:39 UTC
Yaniv,

vda is another disk I added for testing, a VirtIO one, and it works perfectly fine. In the logs I've added that plus another spapr-vscsi disk (so I tested one thin-provisioned and one preallocated), with the same result.

On the guest side: I checked dmesg, /var/log/messages, /dev/, lsblk, and the expected device name with fdisk, but nothing is there.
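
(For completeness, a manual SCSI bus rescan on the guest would be a useful cross-check; a rough sketch, assuming the spapr-vscsi adapter registers as one of the guest's SCSI hosts:)

# Rescan every SCSI host on the guest (run as root)
for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done
lsblk   # check again whether the new sdX device shows up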

Comment 4 Yaniv Kaul 2016-01-25 11:09:20 UTC

*** This bug has been marked as a duplicate of bug 1192355 ***

