Bug 969767 - engine: unexpected exception error when trying to hotplug a disk when its domain is in maintenance
Summary: engine: unexpected exception error when trying to hotplug a disk when its domain is in maintenance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.3.0
Assignee: Maor
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-06-02 09:34 UTC by Dafna Ron
Modified: 2016-02-10 20:39 UTC (History)
10 users

Fixed In Version: is9
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
oVirt Team: Storage
Target Upstream Version:
Embargoed:
scohen: needinfo+


Attachments
logs (1.35 MB, application/x-gzip)
2013-06-02 09:34 UTC, Dafna Ron


Links
System        ID     Private  Priority  Status  Summary  Last Updated
oVirt gerrit  17170  0        None      None    None     Never

Description Dafna Ron 2013-06-02 09:34:35 UTC
Created attachment 755790 [details]
logs

Description of problem:

I have a floating disk whose storage domain is in maintenance.
I ran a VM and tried to hotPlug the disk.
The engine sends the command to VDSM, which returns a Bad volume specification error, and the engine shows the user the following error:

Error while executing action Attach Disk to VM: Unexpected exception

engine log error: 

2013-06-02 12:22:39,868 ERROR [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (ajp-/127.0.0.1:8702-9) [3db9b428] Command org.ovirt.engine.core.bll.AttachDiskToVmCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception

vdsm error:

Thread-94007::ERROR::2013-06-02 12:22:43,953::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': "Volume does not exist: ('8e893a47-ec05-438d-a774-93f6e8a9da9a',)", 'code': 201}}
Thread-94007::ERROR::2013-06-02 12:22:43,953::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 251, in vmHotplugDisk
    return vm.hotplugDisk(params)
  File "/usr/share/vdsm/API.py", line 424, in hotplugDisk
    return curVm.hotplugDisk(params)
  File "/usr/share/vdsm/libvirtvm.py", line 1781, in hotplugDisk
    diskParams['path'] = self.cif.prepareVolumePath(diskParams)
  File "/usr/share/vdsm/clientIF.py", line 275, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'cow', 'optional': 'false', 'volumeID': '8e893a47-ec05-438d-a774-93f6e8a9da9a', 'imageID': '201cf246-bf3e-49cb-b8df-18b81c27229b', 'readonly': 'false', 'domainID': '741

Version-Release number of selected component (if applicable):

sf17.2

How reproducible:

100%

Steps to Reproduce:
1. On an iSCSI setup with two hosts and two storage domains, create a VM with one disk and run it.
2. Create a floating disk.
3. When the VM is up, put the storage domain on which the floating disk resides into maintenance.
4. When the domain is in maintenance, try to attach the floating disk to the VM.

Actual results:

The user gets the following error:
Error while executing action Attach Disk to VM: Unexpected exception


Expected results:

The engine should check the status of the domain before sending the hotPlug command to VDSM; if we decide not to do that, we should at least present a clearer error to the user.
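
For illustration only, a minimal Java sketch of the kind of check the engine could run before issuing HotPlugDiskVDS; the type and method names below are hypothetical and are not taken from the ovirt-engine code base:

// Hypothetical illustration only -- not the actual ovirt-engine implementation.
// Idea: validate the storage domain state before issuing HotPlugDiskVDS,
// so the user sees a clear message instead of "Unexpected exception".

enum StorageDomainStatus { ACTIVE, MAINTENANCE, INACTIVE }

class HotPlugValidationSketch {

    static final class ValidationResult {
        final boolean valid;
        final String message;
        ValidationResult(boolean valid, String message) {
            this.valid = valid;
            this.message = message;
        }
    }

    // Run before sending the HotPlugDiskVDS command to VDSM.
    static ValidationResult validateDomainForHotPlug(StorageDomainStatus status) {
        if (status != StorageDomainStatus.ACTIVE) {
            return new ValidationResult(false,
                    "Cannot attach Virtual Machine Disk. The relevant Storage Domain is inaccessible.");
        }
        return new ValidationResult(true, null);
    }
}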

Additional info: logs

Comment 2 Maor 2013-07-24 08:58:29 UTC
Both operations, plug disk and attach disk, should be fixed here.

In the attach disk dialog there is also a checkbox at the bottom which indicates whether the user wants to plug the disk immediately after it is attached.
If the storage domain is inactive, the plug operation will fail.

My question is: should we fail the attach operation as well (since we already know the plug will fail), or should we let the user at least attach the disk to the VM but keep it unplugged, and add a warning audit log that the storage domain is inactive?
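
For illustration, a rough Java sketch of the second option (the attach succeeds, the plug step is skipped, and a warning is written to the audit log); every name below is a hypothetical placeholder rather than the actual engine API:

// Hypothetical sketch of "attach but do not plug" when the domain is inactive.
// attachDiskToVm, plugDisk and auditLogWarning are placeholders, not engine calls.

class AttachWithoutPlugSketch {

    static void attachAndMaybePlug(boolean plugRequested, boolean domainActive) {
        attachDiskToVm();                        // the attach itself always proceeds
        if (plugRequested && !domainActive) {
            auditLogWarning("Disk was attached but not plugged: "
                    + "the relevant Storage Domain is not active.");
            return;                              // skip the plug step
        }
        if (plugRequested) {
            plugDisk();                          // hot-plug only against an active domain
        }
    }

    static void attachDiskToVm() { /* placeholder */ }
    static void plugDisk() { /* placeholder */ }
    static void auditLogWarning(String message) { System.out.println("WARN: " + message); }
}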

Comment 3 Sean Cohen 2013-07-24 09:51:05 UTC
(In reply to Maor from comment #2)

We should let the user at least attach the disk to the VM but keep it unplugged and add a warning audit log that the storage domain is inactive.

Comment 4 Allon Mureinik 2013-07-24 10:45:22 UTC
Sean, Maor, this is a disgusting user experience.

We can have this validation BEFORE the user submits the request - instead of giving only half the functionality, we should FAIL it with an error message saying that plugging would fail, but the user can choose to attach without plugging.

Why is this not preferable?

Comment 5 Sean Cohen 2013-07-24 10:53:58 UTC
As long as the user can choose to attach without plugging, I am fine with failing it with an error message.
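
In other words, something along these lines; a hypothetical Java sketch of the agreed behavior only, not the actual fix:

// Hypothetical sketch: reject attach+plug against an inactive domain with a
// clear message, but still allow a plain attach (plug checkbox unchecked).

class AttachDecisionSketch {

    static String decide(boolean plugRequested, boolean domainActive) {
        if (plugRequested && !domainActive) {
            return "FAIL: Cannot attach Virtual Machine Disk. "
                 + "The relevant Storage Domain is inaccessible.";
        }
        return plugRequested ? "ATTACH_AND_PLUG" : "ATTACH_ONLY";
    }
}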

Comment 6 Allon Mureinik 2013-07-24 14:33:34 UTC
(In reply to Sean Cohen from comment #5)
> As long as the user can choose to attach without plugging, I am fine with
> failing it with an error message.
Of course there is such functionality.
Thanks.

Comment 7 vvyazmin@redhat.com 2013-09-30 08:48:38 UTC
Verified, tested on RHEVM 3.3 - IS15 environment:

Host OS: RHEL 6.5

RHEVM:  rhevm-3.3.0-0.22.master.el6ev.noarch
PythonSDK:  rhevm-sdk-python-3.3.0.14-1.el6ev.noarch
VDSM:  vdsm-4.12.0-138.gitab256be.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-24.el6.x86_64
QEMU & KVM:  qemu-kvm-rhev-0.12.1.2-2.402.el6.x86_64
SANLOCK:  sanlock-2.8-1.el6.x86_64

------------------------------------------------------------------------
Error while executing action:

vm_0001:

    Cannot attach Virtual Machine Disk. The relevant Storage Domain is inaccessible.
    -Please handle Storage Domain issues and retry the operation.
------------------------------------------------------------------------

Comment 8 Itamar Heim 2014-01-21 22:23:10 UTC
Closing - RHEV 3.3 Released

