Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 923824

Summary: engine: [UX] when selecting several disks for live storage migration from the disks tab and one of the disks is attached to a vm which is not yet up we will fail the entire migration operation (for all disks)
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.2.0
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Target Milestone: ---
Target Release: 3.3.0
Reporter: Dafna Ron <dron>
Assignee: Daniel Erez <derez>
QA Contact: Elad <ebenahar>
CC: abaron, acanan, acathrow, amureini, iheim, jkt, lpeer, Rhev-m-bugs, scohen, yeylon
Flags: abaron: Triaged+
Whiteboard: storage
Fixed In Version: is18
Doc Type: Bug Fix
Type: Bug
Regression: ---
oVirt Team: Storage
Cloudforms Team: ---
Attachments:
  log (flags: none)

Description Dafna Ron 2013-03-20 14:26:42 UTC
Created attachment 713258 [details]
log

Description of problem:

I have two VMs: one was powering up while the other was already up.
I went to the Disks tab, selected all the disks from both VMs, and clicked Move.
The engine returned a CanDoAction failure and failed the migration for all of the disks.

2013-03-20 16:11:21,404 WARN  [org.ovirt.engine.core.bll.MoveDisksCommand] (ajp-/127.0.0.1:8702-9) [65f96015] CanDoAction of action MoveDisks failed. Reasons:VAR__ACTION__MOVE,VAR__TYPE__VM_DISK,$VmName NEW1,ACTION_TYPE_FAILED_VM_IS_NOT_DOWN_OR_UP

Version-Release number of selected component (if applicable):

sf10

How reproducible:

100%

Steps to Reproduce:
1. Create two VMs.
2. Run the first VM and wait until it is up.
3. Run the second VM.
4. Go to the Disks tab -> select all the disks -> Move.
  
Actual results:

The engine fails the entire live disk migration because one VM is still powering up.

Expected results:

The move should fail only for the disks of the specific VM that is not up, and the migration should be allowed to proceed for the rest of the disks.
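The expected behavior above amounts to per-item validation instead of all-or-nothing: check each disk's VM status individually, reject only the ineligible disks, and proceed with the rest. A minimal sketch of that idea follows; the function and field names (`partition_disks`, `vm_status`) are illustrative only and do not reflect the actual ovirt-engine API.

```python
# Hypothetical sketch: split a multi-disk move request into disks that
# can be migrated (their VM is Up or Down) and disks that should fail
# individually, rather than failing the whole operation.

MIGRATABLE_VM_STATES = {"Up", "Down"}  # live or cold migration allowed


def partition_disks(disks):
    """Return (eligible, rejected) lists based on each disk's VM status."""
    eligible, rejected = [], []
    for disk in disks:
        if disk["vm_status"] in MIGRATABLE_VM_STATES:
            eligible.append(disk)
        else:
            rejected.append(disk)
    return eligible, rejected


disks = [
    {"name": "vm1_disk", "vm_status": "Up"},
    {"name": "vm2_disk", "vm_status": "PoweringUp"},
]
eligible, rejected = partition_disks(disks)
print([d["name"] for d in eligible])  # vm1_disk can proceed
print([d["name"] for d in rejected])  # vm2_disk gets its own failure message
```

With this approach, only the disk attached to the powering-up VM is rejected, and the user sees a warning for that disk rather than a blanket CanDoAction failure.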

Additional info: logs attached.

I think this is a user experience bug, since the VM status is not shown in the Disks tab and we do not allow live migration of all disks from the VMs tab. Failing the entire move because one VM was rebooting is not really user friendly.

Comment 1 Ayal Baron 2013-03-27 09:55:38 UTC
Need to consider whether to move the canDoAction logic to the UI/REST API and solve this in 3.3, or wait for single-disk snapshot support (after which the problem will no longer be relevant).

Comment 3 Daniel Erez 2013-09-17 20:47:41 UTC
Resolved as part of bug 987783.

Comment 4 Elad 2013-10-27 16:59:06 UTC
I'm able to live migrate disks while an offline move is taking place at the same time (as described in the steps).

Verified on is20, block pool.
rhevm-3.3.0-0.28.beta1.el6ev.noarch

Comment 5 Itamar Heim 2014-01-21 22:32:04 UTC
Closing - RHEV 3.3 Released