Bug 969767 - engine: unexpected exception error when trying to hotplug a disk when its domain is in maintenance
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.2.0
Hardware: x86_64 Linux
Priority: unspecified   Severity: medium
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Maor
QA Contact: Aharon Canan
Whiteboard: storage
Depends On:
Blocks:
Reported: 2013-06-02 05:34 EDT by Dafna Ron
Modified: 2016-02-10 15:39 EST
CC List: 10 users

See Also:
Fixed In Version: is9
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
scohen: needinfo+


Attachments
logs (1.35 MB, application/x-gzip)
2013-06-02 05:34 EDT, Dafna Ron
no flags


External Trackers
Tracker       Tracker ID   Priority   Status   Summary   Last Updated
oVirt gerrit  17170        None       None     None      Never

Description Dafna Ron 2013-06-02 05:34:35 EDT
Created attachment 755790 [details]
logs

Description of problem:

I have a floating disk whose storage domain is in maintenance.
I ran a VM and tried to hotplug the disk into it.
The engine sends the command to VDSM, which returns a "Bad volume specification" error, and the engine shows the user the following error:

Error while executing action Attach Disk to VM: Unexpected exception

engine log error: 

2013-06-02 12:22:39,868 ERROR [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (ajp-/127.0.0.1:8702-9) [3db9b428] Command org.ovirt.engine.core.bll.AttachDiskToVmCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception

vdsm error:

Thread-94007::ERROR::2013-06-02 12:22:43,953::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': "Volume does not exist: ('8e893a47-ec05-438d-a774-93f6e8a9da9a',)", 'code': 201}}
Thread-94007::ERROR::2013-06-02 12:22:43,953::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 251, in vmHotplugDisk
    return vm.hotplugDisk(params)
  File "/usr/share/vdsm/API.py", line 424, in hotplugDisk
    return curVm.hotplugDisk(params)
  File "/usr/share/vdsm/libvirtvm.py", line 1781, in hotplugDisk
    diskParams['path'] = self.cif.prepareVolumePath(diskParams)
  File "/usr/share/vdsm/clientIF.py", line 275, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'cow', 'optional': 'false', 'volumeID': '8e893a47-ec05-438d-a774-93f6e8a9da9a', 'imageID': '201cf246-bf3e-49cb-b8df-18b81c27229b', 'readonly': 'false', 'domainID': '741

Version-Release number of selected component (if applicable):

sf17.2

How reproducible:

100%

Steps to Reproduce:
1. On iSCSI storage with two hosts and two domains, create a VM with one disk and run it.
2. Create a floating disk.
3. While the VM is up, put the domain that the floating disk is on into maintenance.
4. While the domain is in maintenance, try to attach the floating disk to the VM.

Actual results:

The user gets the following error:
Error while executing action Attach Disk to VM: Unexpected exception


Expected results:

The engine should check the status of the domain before sending the hotPlug command to VDSM; if we decide not to do that, we should at least present a clearer error to the user.

Additional info: logs
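
[Editor's note] The expected behavior above suggests a domain-status pre-check on the engine side before HotPlugDiskVDS is sent. The following is a minimal illustrative Java sketch of that idea, assuming hypothetical names (HotplugPrecheck, StorageDomainStatus, validateDomainForHotplug); it is not the actual ovirt-engine code or API.

// Illustrative sketch only -- hypothetical types, not real ovirt-engine classes.
// Shows the kind of pre-check requested: verify the disk's storage domain is
// active before forwarding the hotplug command to VDSM.
public class HotplugPrecheck {

    public enum StorageDomainStatus { ACTIVE, MAINTENANCE, INACTIVE }

    public static class ValidationResult {
        public final boolean valid;
        public final String message;
        ValidationResult(boolean valid, String message) {
            this.valid = valid;
            this.message = message;
        }
    }

    // Fail fast with a clear message instead of forwarding a doomed hotplug to VDSM.
    public static ValidationResult validateDomainForHotplug(StorageDomainStatus status) {
        if (status != StorageDomainStatus.ACTIVE) {
            return new ValidationResult(false,
                    "Cannot attach Virtual Machine Disk: the relevant Storage Domain is inaccessible.");
        }
        return new ValidationResult(true, null);
    }
}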
Comment 2 Maor 2013-07-24 04:58:29 EDT
Both operations, plug disk and attach disk, should be fixed here.

On attach disk, the dialog box also has a checkbox at the bottom which indicates whether the user wants to plug the disk immediately after it is attached.
If the storage domain is inactive, the plug operation will fail.

My question is: should we fail the attach operation as well (since we can already know that the plug will fail), or should we let the user at least attach the disk to the VM, keep it unplugged, and add a warning audit log that the storage domain is inactive?
Comment 3 Sean Cohen 2013-07-24 05:51:05 EDT
(In reply to Maor from comment #2)

We should let the user at least attach the disk to the VM but keep it unplugged and add a warning audit log that the storage domain is inactive.
Comment 4 Allon Mureinik 2013-07-24 06:45:22 EDT
Sean, Maor, this is a disgusting user experience.

We can have this validation BEFORE the user submits the request. Instead of giving only half the functionality, we should FAIL it with an error message explaining that plugging would fail; the user can then choose to attach without plugging.

Why is this not preferable?
Comment 5 Sean Cohen 2013-07-24 06:53:58 EDT
As long as the user can choose to attach without plugging, I am fine with failing it with an error message.
Comment 6 Allon Mureinik 2013-07-24 10:33:34 EDT
(In reply to Sean Cohen from comment #5)
> As long as the user can choose to attach without plugging, I am fine with
> failing it with an error message.
Of course there is such functionality.
Thanks.
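
[Editor's note] The behavior agreed on in comments 4-6 can be expressed roughly as follows. This is a hedged sketch with hypothetical names (attachAndMaybePlug, StorageDomainStatusLookup, and the nested helper types), not the actual AttachDiskToVmCommand implementation: a combined attach-and-plug request is rejected up front when the disk's domain is inactive, while a plain attach (unplugged) is still allowed.

// Hypothetical sketch of the agreed behavior -- not actual ovirt-engine code.
public class AttachDiskFlow {

    // Attach a disk and optionally plug it. If the user asked to plug as well
    // and the disk's storage domain is not active, reject the whole request
    // with a clear error; the user can retry with plugNow == false.
    public static void attachAndMaybePlug(Disk disk, Vm vm, boolean plugNow,
                                          StorageDomainStatusLookup domains) {
        if (plugNow && domains.statusOf(disk.storageDomainId) != Status.ACTIVE) {
            throw new IllegalStateException(
                "Cannot attach Virtual Machine Disk. The relevant Storage Domain is inaccessible. "
              + "Please handle Storage Domain issues and retry the operation.");
        }
        attach(disk, vm);          // always possible: database-only operation
        if (plugNow) {
            plug(disk, vm);        // requires the domain to be active
        }
    }

    // Hypothetical collaborators, included only to keep the sketch self-contained.
    interface StorageDomainStatusLookup { Status statusOf(String storageDomainId); }
    enum Status { ACTIVE, MAINTENANCE, INACTIVE }
    static class Disk { String storageDomainId; }
    static class Vm { }
    private static void attach(Disk d, Vm v) { /* persist the attachment */ }
    private static void plug(Disk d, Vm v) { /* send HotPlugDiskVDS to the host */ }
}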
Comment 7 vvyazmin@redhat.com 2013-09-30 04:48:38 EDT
Verified, tested on RHEVM 3.3 - IS15 environment:

Host OS: RHEL 6.5

RHEVM:  rhevm-3.3.0-0.22.master.el6ev.noarch
PythonSDK:  rhevm-sdk-python-3.3.0.14-1.el6ev.noarch
VDSM:  vdsm-4.12.0-138.gitab256be.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-24.el6.x86_64
QEMU & KVM:  qemu-kvm-rhev-0.12.1.2-2.402.el6.x86_64
SANLOCK:  sanlock-2.8-1.el6.x86_64

------------------------------------------------------------------------
Error while executing action:

vm_0001:

    Cannot attach Virtual Machine Disk. The relevant Storage Domain is inaccessible.
    -Please handle Storage Domain issues and retry the operation.
------------------------------------------------------------------------
Comment 8 Itamar Heim 2014-01-21 17:23:10 EST
Closing - RHEV 3.3 Released
Comment 9 Itamar Heim 2014-01-21 17:24:10 EST
Closing - RHEV 3.3 Released
Comment 10 Itamar Heim 2014-01-21 17:27:55 EST
Closing - RHEV 3.3 Released
