Bug 1066838 - [engine-webadmin] uninformative error message when cloning a VM from snapshot which one of its disks is located on an inactive SD
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-webadmin-portal
Version: 3.4.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.4.0
Assignee: Maor
QA Contact: Elad
URL:
Whiteboard: storage
Depends On:
Blocks: rhev3.4beta 1142926
 
Reported: 2014-02-19 08:16 UTC by Elad
Modified: 2016-02-10 18:56 UTC
CC: 10 users

Fixed In Version: ovirt-engine-3.4.0_av2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
engine.log and screenshot (75.82 KB, application/x-gzip)
2014-02-19 08:16 UTC, Elad


Links
oVirt gerrit 24840: None
oVirt gerrit 24846: MERGED - core: Fix CDA message to include status

Description Elad 2014-02-19 08:16:01 UTC
Created attachment 864999 [details]
engine.log and screenshot

Description of problem:
I have a VM with a snapshot whose 2 disks are attached from different storage domains, one of which is inactive. I tried to clone a new VM from the snapshot so that the cloned VM's disks would be located on the active domain. Instead of an error message stating that the operation cannot be done because one of the domains is inactive, I got this message:

Operation Canceled
Error while executing action:
22:
Cannot add VM. The relevant Storage Domain's status is ${status}. Maintenance



=========

CDA message:

2014-02-17 17:51:43,067 WARN  [org.ovirt.engine.core.bll.AddVmFromSnapshotCommand] (ajp-/127.0.0.1:8702-8) CanDoAction of action AddVmFromSnapshot failed. Reasons:VAR__ACTION__ADD,VAR__TYPE__VM,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,Maintenance
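The unresolved `${status}` in the UI message above is the core of the problem: the validation failed with ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2 but the `status` variable was never wired into the message template, so the raw placeholder reached the user with the value ("Maintenance") dangling after it. A minimal sketch of the kind of substitution step involved, assuming a simple key/value replacement scheme (class and method names here are illustrative, not the actual ovirt-engine API):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: resolve "${var}" placeholders in a CanDoAction
// message template against a map of variables supplied by the command.
public class CdaMessageFormatter {
    private static final Pattern VAR = Pattern.compile("\\$\\{(\\w+)\\}");

    public static String format(String template, Map<String, String> vars) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // If no value was supplied for the variable, fall back to the
            // raw placeholder -- which is exactly the symptom in this bug.
            String value = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String template =
            "Cannot add VM. The relevant Storage Domain's status is ${status}.";
        System.out.println(format(template, Map.of("status", "Maintenance")));
        // -> Cannot add VM. The relevant Storage Domain's status is Maintenance.
    }
}
```

With the fix (gerrit 24846), the status value is passed along with the message key, so the template resolves cleanly instead of leaking `${status}` into the error dialog.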

Version-Release number of selected component (if applicable):
rhevm-3.4.0-0.2.master.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
Shared DC with 2 storage domains
1. Create a VM with 2 disks located on different domains
2. Create a snapshot to the VM
3. Put one of the storage domains to maintenance
4. Clone a new VM from the first VM's snapshot; in the 'Resource Allocation' tab of the clone dialog, pick the active domain as the target for both disks.

Actual results:
User gets an inappropriate error message.

Expected results:
User should get an informative message stating that one of the domains on which the original VM's disks are located is inactive.

Additional info:
engine.log and screenshot

Comment 1 Elad 2014-03-12 12:55:33 UTC
Checked according to verification steps in comment #0:
When a storage domain which holds one of the VM's snapshot's disks is in maintenance, cloning a VM from the snapshot fails with an appropriate error message:

"cloned:
cannot add VM. The relevant Storage Domain's status is Maintenance."

Verified on RHEVM3.4 - AV2.1

Comment 2 Itamar Heim 2014-06-12 14:08:58 UTC
Closing as part of 3.4.0

