Bug 923773 - vmHotplugDisk failed with "VolumeError: Bad volume specification"
Summary: vmHotplugDisk failed with "VolumeError: Bad volume specification"
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.3.1
Assignee: Yeela Kaplan
QA Contact: Katarzyna Jachim
URL:
Whiteboard: storage
Depends On:
Blocks: 1044068
 
Reported: 2013-03-20 12:14 UTC by Katarzyna Jachim
Modified: 2022-07-09 05:57 UTC (History)
14 users

Fixed In Version: vdsm-4.10.2.19.0
Doc Type: Bug Fix
Doc Text:
Previously, vmHotplugDisk failed with "VolumeError: Bad volume specification". This was caused by a multipath race: multipath was not given enough time to create the /dev/mapper entry for the LUN on the host the virtual machine is running on. A patch now allows multipath the time it needs to create that entry.
Clone Of:
: 1044068 (view as bug list)
Environment:
Last Closed: 2014-01-12 14:11:41 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
abaron: Triaged+


Attachments (Terms of Use)
whole test logs (with vdsm & engine logs etc.) (7.50 MB, application/x-bzip2)
2013-03-20 14:03 UTC, Katarzyna Jachim
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-47067 0 None None None 2022-07-09 05:57:43 UTC
Red Hat Product Errata RHSA-2013:0886 0 normal SHIPPED_LIVE Moderate: rhev 3.2 - vdsm security and bug fix update 2013-06-11 00:25:02 UTC
oVirt gerrit 13454 0 None None None Never
oVirt gerrit 21173 0 None None None Never
oVirt gerrit 21181 0 None None None Never
oVirt gerrit 21182 0 None None None Never
oVirt gerrit 21211 0 None None None Never

Description Katarzyna Jachim 2013-03-20 12:14:57 UTC
Description of problem:
Attaching iscsi disk to running VM failed with error "VolumeError: Bad volume specification". According to logs LUN existed on storage server and server was connected.

Thread-265::DEBUG::2013-03-20 11:06:15,273::BindingXMLRPC::913::vds::(wrapper) client [10.35.161.17]::call vmHotplugDisk with ({'vmId': '7d65ff39-0bc9-4085-bf98-e0e5497c27f7', 'drive': {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'readonly': 'false', 'deviceId': '821dbcc6-9ceb-4e40-abcf-76170241960b', 'propagateErrors': 'off', 'device': 'disk', 'shared': 'false', 'GUID': '1ce110704-604d-40c2', 'optional': 'false'}},) {}
Thread-265::ERROR::2013-03-20 11:06:15,274::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 251, in vmHotplugDisk
    return vm.hotplugDisk(params)
  File "/usr/share/vdsm/API.py", line 423, in hotplugDisk
    return curVm.hotplugDisk(params)
  File "/usr/share/vdsm/libvirtvm.py", line 1760, in hotplugDisk
    diskParams['path'] = self.cif.prepareVolumePath(diskParams)
  File "/usr/share/vdsm/clientIF.py", line 291, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'readonly': 'false', 'deviceId': '821dbcc6-9ceb-4e40-abcf-76170241960b', 'propagateErrors': 'off', 'device': 'disk', 'shared': 'false', 'GUID': '1ce110704-604d-40c2', 'optional': 'false'}
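For illustration, here is a minimal sketch (not vdsm's actual code) of the kind of check that fails in clientIF.prepareVolumePath: a direct-LUN drive is resolved to its /dev/mapper node by GUID, and the drive is rejected if the node does not exist yet. The `VolumeError` class, function name, and `devroot` parameter are hypothetical simplifications.

```python
import os


class VolumeError(Exception):
    """Raised when a drive spec cannot be resolved to a device path."""


def prepare_volume_path(drive, devroot="/dev/mapper"):
    # Simplified sketch of the failing lookup: for a direct LUN, the
    # path is derived from the multipath GUID. If multipath has not yet
    # created the /dev/mapper node, the drive spec is rejected.
    path = os.path.join(devroot, drive["GUID"])
    if not os.path.exists(path):
        raise VolumeError("Bad volume specification %r" % (drive,))
    return path
```

This is why a hotplug issued immediately after connectStorageServer can fail even though the LUN exists on the storage server: the check races with multipath's device creation on the host.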

Version-Release number of selected component (if applicable):


How reproducible:
Happened once in automated tests: 
http://jenkins.qa.lab.tlv.redhat.com:8080/job/Developer_job/728/

Steps to Reproduce:
1. create 2 LUNs and a VM
2. attach one of the LUNs when the VM is down
3. start VM
4. attach another LUN

Actual results:
attaching second LUN failed with following error
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'readonly': 'false', 'deviceId': '821dbcc6-9ceb-4e40-abcf-76170241960b', 'propagateErrors': 'off', 'device': 'disk', 'shared': 'false', 'GUID': '1ce110704-604d-40c2', 'optional': 'false'}

Expected results:
attaching second LUN should succeed

Additional info:

Comment 1 Haim 2013-03-20 13:51:34 UTC
Please attach logs.
Please confirm the VM has an operating system installed.

Comment 2 Katarzyna Jachim 2013-03-20 14:03:12 UTC
Created attachment 713255 [details]
whole test logs (with vdsm & engine logs etc.)

Comment 3 Katarzyna Jachim 2013-03-20 14:32:22 UTC
According to http://jenkins.qa.lab.tlv.redhat.com:8080/job/Developer_job/728/artifact/logs/automation_run.log this VM was in state "up" at 2013-03-20 11:06:14 and 2013-03-20 11:06:17. The installed system was RHEL6.4. At 2013-03-20 11:06:13,802 test invoked an 'ls' command on the VM (you can see it in the above log file) and it passed.

Comment 4 Ayal Baron 2013-03-22 22:41:28 UTC
Yeela, please take a look at this one

Comment 5 Yeela Kaplan 2013-03-24 17:53:51 UTC
This looks like another bug resulting from the multipath race.
You can see connectStorageServer return at 11:06:15,214:
Thread-264::INFO::2013-03-20 11:06:15,214::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '1424248b-0a09-49a7-877d-c5e205767381'}]}

and then right afterwards, at 11:06:15,273:
Thread-265::DEBUG::2013-03-20 11:06:15,273::BindingXMLRPC::913::vds::(wrapper) client [10.35.161.17]::call vmHotplugDisk with ({'vmId': '7d65ff39-0bc9-4085-bf98-e0e5497c27f7', 'drive': {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'readonly': 'false', 'deviceId': '821dbcc6-9ceb-4e40-abcf-76170241960b', 'propagateErrors': 'off', 'device': 'disk', 'shared': 'false', 'GUID': '1ce110704-604d-40c2', 'optional': 'false'}},) {}
Thread-265::ERROR::2013-03-20 11:06:15,274::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 251, in vmHotplugDisk
    return vm.hotplugDisk(params)
  File "/usr/share/vdsm/API.py", line 423, in hotplugDisk
    return curVm.hotplugDisk(params)
  File "/usr/share/vdsm/libvirtvm.py", line 1760, in hotplugDisk
    diskParams['path'] = self.cif.prepareVolumePath(diskParams)
  File "/usr/share/vdsm/clientIF.py", line 291, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'readonly': 'false', 'deviceId': '821dbcc6-9ceb-4e40-abcf-76170241960b', 'propagateErrors': 'off', 'device': 'disk', 'shared': 'false', 'GUID': '1ce110704-604d-40c2', 'optional': 'false'}


This is not enough time for multipath to create the /dev/mapper entry for the LUN on the host the VM is running on.
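The fix therefore needs to give multipath time to create the device node rather than failing on the first missing-path check. A minimal sketch of such a wait, assuming a poll-with-timeout approach; the function name, parameters, and defaults are hypothetical and do not reproduce the actual vdsm patch:

```python
import os
import time


def wait_for_mpath_device(guid, timeout=10.0, interval=0.5,
                          devroot="/dev/mapper"):
    """Poll until multipath has created the device node for `guid`,
    returning its path, or raise OSError after `timeout` seconds.
    Hypothetical helper illustrating the retry approach."""
    path = os.path.join(devroot, guid)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return path
        time.sleep(interval)
    raise OSError("multipath device %s did not appear within %.1fs"
                  % (path, timeout))
```

With such a wait in the hotplug path, a call issued immediately after connectStorageServer tolerates the short window before multipath populates /dev/mapper, instead of raising VolumeError.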

Comment 7 Yeela Kaplan 2013-05-08 08:49:53 UTC
The patch is merged upstream.

Comment 9 Katarzyna Jachim 2013-05-15 10:40:24 UTC
I've run the test 5 times on sf17 and it passed every time. I have no idea how to test it better.

Comment 11 errata-xmlrpc 2013-06-10 20:46:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0886.html

Comment 12 Nir Soffer 2013-12-17 17:51:48 UTC
This bug was never fixed because of a bug in the patch that should have fixed it.

